It's just automated copy-pasting of commands you don't understand from the internet, which is something everyone who runs Linux (and is not a wizard) does all the time.
It's really really bad, but people will continue doing it until commands/things become so easy we can actually understand what we're doing. Unfortunately, this has never been a priority in Unix-land as far as I've gathered.
But it isn't all that hard to understand a clean Unix. I have never copied or typed a command that I don't understand.
One problem may be that most Unices these days are not as clean anymore as, say, OpenBSD or NetBSD. E.g. the recent X stack, with D-BUS, various *Kits, etc., is quite opaque. This madness was primarily contained to the desktop and to proprietary Unices, but seems to be spreading through server Linuxes these days as well (and no, this is not an anti-systemd rant).
> But it isn't all that hard to understand a clean Unix. I have never copied or typed a command that I don't understand.
Well, good for you. I can assure you that's not the case for almost anyone who came to Linux after the likes of Mandrake were released and/or tried to make it work on anything other than a traditional server.
I'm all for trying to understand what one is doing (and I wholeheartedly agree with TFA's point), but the reality is that very few people in the world really understand all intricacies of one's operating system. This does not excuse poor security practices, but it explains their background.
That's why you get someone who is capable of understanding it.
You wouldn't hire some high school kid who's just taught themselves HTML by reading a book for a week and get them to write your web application from the ground up. You'd hire someone who knows what they're doing.
Why is it seen as any different for Operations work? There is a reason systems administration is a skilled field, and a reason they're paid on a par with developers.
I think the reason this happens less and less is that sysadmins are cost centers, not revenue generators. When you have developers do that work (poorly or not), you don't have a group that's purely cost. Those costs get hidden in the development group.
However, yes, the issue of a team that "doesn't make money" is very real. Maybe it should be "marketed" like legal or accounting: it doesn't make money, it prevents the losses caused by SNAFUBAR situations.
> I'm all for trying to understand what one is doing (and I wholeheartedly agree with TFA's point), but the reality is that very few people in the world really understand all intricacies of one's operating system.
One of the problems (as I tried to argue) is that most Unices have become far more complex. The question is if the extra complexity is warranted on a server system, especially if bare Unix (OpenBSD serves as a good example here) was not that hard to understand.
Of course, that doesn't necessarily mean that we should look back. Another possibility would be to deploy services as unikernels (see Mirage OS) that use a small, thin, well-understood library layer on top of e.g. Xen, so that there isn't really an exploitable operating system underneath.
What seems to be the source of this push is that some entity wants Windows-Group-Policy-like control over what users can and can't do, etc.
This is because they want to retain their ability to shop for off-the-shelf hardware while getting away from a platform that has proved less than functional for mission-critical operations (never mind being locked to a single vendor).
What seems to be happening is that there is a growing disdain for power users and "admins". The only two classes that seem to count are developers and users, and the latter need to be protected from themselves for their own good (and for developer sanity).
> I have never copied or typed a command that I don't understand.
Note that it's trivial to change what goes into the clipboard, too. Copying and pasting commands from potentially untrustworthy sites should be ruled out even when the commands are understood.
https://xkcd.com/1168/ comes to mind. And yes, I Google half of the command invocations too (but usually type them in by hand so that I can remember them faster instead of copy-pasting).
x = eXtract files from an archive
f = File path to the archive
c = Create a new archive from files
v = print Verbose output
z = apply gZip to the input or output
That's 99% of common tar right there. The remaining one percent is:
j = apply bzip2 to the input or output
(I admit, j is a weird one here, though that has made it stick in my memory)
--list = does what's on the tin
--exclude = does what's on the tin
--strip-components = shortcut for dropping a leading directory from the extracted paths
I haven't used a flag outside of these in recent memory.
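The mnemonic list above covers the common round trip. A quick sketch (the file and directory names here are made up):

```shell
# Create some throwaway content to archive
mkdir -p docs && echo "hello" > docs/readme.txt

# c + z + v + f: Create a gzipped archive, Verbosely, at the given File path
tar czvf docs.tar.gz docs

# --list (short form: t): show contents without extracting
tar tzf docs.tar.gz

# x + z + f: eXtract the gzipped archive, here into a separate directory
mkdir -p out
tar xzf docs.tar.gz -C out
```

(-C is the "change to this directory first" flag, not part of the list above.)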
It isn't, but the same goes for dozens or hundreds of other commands you encounter when working with the command line. I managed to memorize a few invocations of tar (I listed them in another comment) but, for instance, I very rarely create a new archive, so I'm never sure which flag I need to use.
Part of the problem is that each command line utility has its own flag language, and equivalent functions often have different letters. For instance, very often one command has "recursive" as "-r" while another has it as "-R". It's impossible to remember it all unless you're a sysadmin.
Those case differences have meaning: -r is generally not dangerous while -R is; it's capitalized to make you stop and say "hmmm, should I do this?". All commands have the same flag language, command -options, and all are easily documented by man command. It quite literally couldn't get any simpler, and there's no need to memorize anything, since you can look up any flag on any command with the same man invocation. Those who find it confusing haven't spent the least bit of effort actually trying, because it's actually very simple and extremely consistent.
> Those case differences have meaning, -r is generally not dangerous while -R is; it's capitalized to make you stop and say hmmm, should I do this. All commands have the same flag language
Except with cp , -R is the safe one and -r is the dangerous one. And there are tons of little inconsistencies like this.
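A few of those recursion flags side by side (behavior described for the GNU tools; BSD variants can differ):

```shell
# Set up a nested directory so the recursive flags have something to recurse into
mkdir -p src/sub && echo "needle" > src/sub/hay.txt

grep -r needle src     # grep: lowercase -r is the recursive search
chmod -R u+w src       # chmod: uppercase -R is the recursive form
cp -R src dst          # cp: POSIX specifies -R; GNU cp accepts -r as a synonym
ls -R dst              # ls: uppercase again for a recursive listing
```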
It may be more consistent, but it is not easier. Humans are forgiving with regard to input; they can infer intentions from context. I could type "please unbork this" to a human and he'd know precisely that he has to a) untargzip it, b) change the directory structure, and c) upload it to a shared directory for our team.
Welcome to working with computers that can't think; easier is not an option, they can't infer your intentions, so your point is what? Consistency is what matters when working with machines and the command line is a damn consistent language relative to other available options.
Frankly, if you're going to rely on a magic recipe from the web for production, you should absolutely document it locally and go through the process of understanding each command.
As a former sys admin, I did that all the time. Who the hell can remember how to convert an SSL certificate to load it into a Glassfish app server? Didn't mean I couldn't step through all commands and figure out why it did that before I loaded the new cert... And next time, I just need to go to my quick hack repo for the magic incantation.
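That particular incantation usually goes through PKCS#12 as the interchange format. A hedged sketch of the idea — the file names, the `s1as` alias, and the `changeit` password are placeholders here, not something this comment specified:

```shell
# Generate a throwaway key and self-signed cert to stand in for the real pair
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.test" -keyout server.key -out server.crt

# Bundle key + cert into a PKCS#12 file, which Java tooling can import
openssl pkcs12 -export -in server.crt -inkey server.key \
    -out bundle.p12 -name s1as -passout pass:changeit

# Sanity-check the bundle before handing it to keytool/Glassfish, e.g.:
#   keytool -importkeystore -srckeystore bundle.p12 -srcstoretype PKCS12 \
#       -destkeystore keystore.jks
openssl pkcs12 -in bundle.p12 -passin pass:changeit -noout -info
```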
I agree with this. Despite my familiarity with so many command line tools, I do forget invocations. And so I have a wiki page I share with my coworkers to share particularly useful (or correct) invocations of dangerous tools.
On a Unix based system, tar is just used so frequently and for so many purposes, that not understanding it feels a bit like working in a shop and not knowing how to use a roll of tape.
You don't have to be a sysadmin to be comfortable with command line tools. If you want to fully utilize your *NIX system you have to learn how to use that shit, it really isn't that hard.
I am comfortable with command line tools. I just don't remember every switch and flag I happen to use twice a year, and the fact that command line utilities are inconsistent in subtle but significant ways, coupled with the overall unreadability of man pages and the lack of examples in them, makes this process difficult.
I'm a very proficient user of command line tools, but I don't remember everything: my shell history is set to 50,000 lines, and it's the first thing I search if I've forgotten something.
Sequences of commands sometimes get pasted into a conveniently-located text file; if I find myself repeating the operation I might turn it into a script, a shell function for my .zshrc, or an alias.
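For what it's worth, that history-plus-promotion workflow can be sketched as a .zshrc fragment (the sizes, options, and function name here are just one possible choice):

```shell
# ~/.zshrc: keep a large, searchable history
HISTFILE=~/.zsh_history
HISTSIZE=50000
SAVEHIST=50000
setopt INC_APPEND_HISTORY     # write each command to the file as it runs
setopt HIST_IGNORE_ALL_DUPS   # drop duplicates so history searches stay useful

# A pasted one-liner promoted to a shell function once it proved itself
untargz() { tar xzf "$1" -C "${2:-.}"; }
```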
Just 10 minutes ago:
mysqldump [args] | nc -v w.x.y.z 1234
nc -v -l 1234 | pv | mysql [args]
(after an initial test that showed adding "gzip -1" was slower than uncompressed gigabit ethernet.)
One way to remember these commands without necessarily going "full sysadmin" is to use them on a daily basis. Whether I am developing, managing files, debugging, or really doing anything other than mindlessly browsing the web, I always have at least one (and often many) xterms open. The huge selection of tools and speed of invocation provided by a modern *nix command line is invaluable for many tasks that are not directly related to administrating a system.
That second one will create a tarbomb[1], which isn't necessarily wrong, and maybe it's what's right for your application, but for more general usage an archive that keeps its leading directory is friendlier.
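A sketch of the difference (directory names invented; the original commands being discussed aren't reproduced here):

```shell
mkdir -p project && touch project/a.txt project/b.txt

# Tarbomb: entries have no common leading directory, so extracting
# scatters files into whatever directory the user happens to be in
( cd project && tar czf ../bomb.tar.gz . )

# Friendlier: every entry is prefixed with project/, so extraction
# creates a single top-level folder
tar czf safe.tar.gz project

tar tzf bomb.tar.gz    # ./  ./a.txt  ./b.txt ...
tar tzf safe.tar.gz    # project/  project/a.txt  project/b.txt
```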
I would argue that anyone who is reasonably comfortable in a command line would resort to `man command`, `command --help` or `command -h` before googling for usage.
I think, occasionally, it's a lot easier to grok a command through googling than reading the built-in help. A fair amount of built-in *nix documentation I have run across is mediocre or unhelpful.
I often find that GNU man pages are heavy on explanation of options and light on purpose and practical usage (the latter is tucked away in info pages). That's not necessarily the wrong way to do manpages, but I much prefer OpenBSD-style manpages, which seem to be better at providing practical information.
Recursively searching through all files in the current folder (aka the normal use case for grep) is accomplished by using "grep -r". It's on line 270 in "man grep". And that assumes that you know what grep is at all. Would it have hurt so much to call grep "regexsearch" instead? Maybe -r could be the default?
Recursion is invoked by either -R or -r on nearly all commands and is pretty standard, and r is virtually never the default on any command, because that would be a bad idea. And yes, having to type regexsearch rather than grep would have been a bad idea; while grep isn't a great name, it's far preferable to someone who types constantly. Search or find would have been better names; names need to be both short and descriptive on the command line, and short comes first.
$ man grep | grep recursive
directory, recursively, following symbolic links only if they
Exclude directories matching the pattern DIR from recursive
-r, --recursive
Read all files under each directory, recursively, following
-R, --dereference-recursive
Read all files under each directory, recursively. Follow all
Nah, man pages are usually completely useless. I use man when I remember exactly what I want to do and just am not sure whether the flag was -f or -F. For everything else there's Google.
Being a few years removed from working purely in tech, and having a decade of OS X desktop usage behind me, finally made me feel I'd gotten complacent. So I installed OpenBSD. Two things of note have happened:
1. I routinely need to look things up that are a bit murky in the deep recesses of my memory.
2. I am reminded continually of how nice it is to have man pages that are well written, are easily searchable, reference appropriate other pages, and are helpful enough to remind you of big picture considerations that you didn't realize you were facing when looking for a commandline flag.
Google query: git display file at revision. Immediate answer (without even having to click any links, it's in the result description): `git show revision:file`
Total time: 5 seconds
Trying to reproduce with man and help:
man git
search for display, finds nothing
start scrolling down
notice git-show (show various types of objects); sounds like a likely candidate
git show <revision> <file>
..no output
git show -h
usage: git log [<options>] [<since>..<until>] [[--] <path>...]
or: git show [options] <object>...
.. useful
man git show
man git-show
OPTIONS
<object>...
The names of objects to show. For a more complete list of ways to spell object names, see "SPECIFYING REVISIONS" section in git-rev-parse(1).
man git-rev-parse
a lot about specifying revisions, nothing about how actually specify a file
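For reference, the syntax the Google result surfaces is `git show <revision>:<path>`, with the path given relative to the repository root. A throwaway repo to demonstrate:

```shell
mkdir demo && cd demo && git init -q
echo "v1" > notes.txt
git add notes.txt
git -c user.email=you@example.test -c user.name=you commit -qm "first"
echo "v2" > notes.txt
git add notes.txt
git -c user.email=you@example.test -c user.name=you commit -qm "second"

git show HEAD~1:notes.txt   # the file as of the previous commit: v1
git show HEAD:notes.txt     # the file at the current commit: v2
```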
One reason to keep reading man pages is that you will likely discover new things you did not expect. Reading man pages also helps you understand the tool's philosophy/workflow, if the man page is well written (which is often the case). This holds for any kind of documentation as well.
When I google something, I usually do not remember the answer to my question; the only thing I remember is the keyword to put in my future query to get the same answer. You will get your answer quicker, but you won't learn much. So personally, I prefer reading man pages (when I can) to using Google.
I never use man pages, to be honest, and I'm quite comfortable on a command line. Reading long-ish things in a terminal kind of sucks, for me, and even if I end up reading a man page in Chrome it's nicely formatted and has readable serif fonts and is easily scrolled with the trackpad on my laptop.
I probably haven't read a man page "cover to cover" since high school. Usually I just need to read a couple lines about a specific flag or the location of some configuration file which I can find quickly with a simple search or by scanning the document with my eyes.
The wheel or trackpad scrolls the terminal's scrollback, not the pager program that happens to be running in it.
(I can imagine some sort of hackery that determines if less or something is running and scrolls that, but it sounds like a huge mess. Is that actually what you're doing? Does it send keypresses? What if you're in a mode where those keypresses do something besides scrolling?)
No I'm talking about scrolling in the actual program running - it's most useful in a pager obviously, but it also works for editors, and it works both locally (OS X, built-in Terminal.app) and over SSH on Debian hosts.
I'll be honest - I have ~no idea~ (edit: apparently there are xterm control sequences for mouse scrolling) how it's actually implemented, but several tools have some reference to mouse support (tmux, vim, etc) in option/config files, so it's probably available for your distro/platform and just needs to be enabled.
Further edit: (or PS. or whatever):
`less` pager supports mouse scrolling. `more` pager does not!
It can do continuous scrolling of the terminal or line-by-line scrolling of the pager. Both are poor options for trying to actually read prose content inside the terminal, IMO, and opening a browser is easier.
What do you mean by "continuous" versus "line-by-line" scrolling? When I use the mousewheel to scroll a man page in xterm it behaves and appears the same as when I use the mousewheel to scroll a webpage in Chrome (the content moves smoothly up and down, disappearing at the top and bottom edges of the viewport).
Do you think the average user copying and pasting administrative commands into their shell will stop to check the content encoding of the document they are copying from? Do you trust your browser not to try rendering an ill-defined document with an ambiguous extension?
Copy-pasting from the internet can be just fine for things like, for example, yum install <blah>, because the tool itself has built-in checks to make sure you have a valid, non-corrupt package from someone you trust before executing anything.
The point is that what ends up on your clipboard can be different from what you see, and if a newline is there, the command executes before you have a chance to change your mind.
No, it's not :(