Data Wrangling
Regular expressions and sed
Regular expressions are common and useful enough that it's worthwhile to take some time to understand how they work. Let's start by looking at the one we used above: /.*Disconnected from /. Regular expressions are usually (though not always) surrounded by /. Most ASCII characters just carry their normal meaning, but some characters have "special" matching behavior. Exactly which characters do what varies somewhat between different implementations of regular expressions, which is a source of great frustration. Very common patterns are:
.          means "any single character" except newline
*          zero or more of the preceding match
+          one or more of the preceding match
[abc]      any one character of a, b, and c
(RX1|RX2)  either something that matches RX1 or RX2
^          the start of the line
$          the end of the line
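As a quick illustration, here are a few hypothetical one-liners that exercise these patterns (the inputs are made up; grep -o prints only the matching part of each line):

echo 'abcabc' | grep -oE 'ab.'        # . matches any single character: prints "abc" twice
echo 'baaa'   | grep -oE 'ba*'        # * matches zero or more a's: prints "baaa"
echo 'cb'     | grep -oE '[abc]+'     # one or more of a, b, or c: prints "cb"
echo 'catdog' | grep -oE '^cat|dog$'  # anchors plus alternation: prints "cat", then "dog"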
sed's regular expressions are somewhat weird, and will require you to put a \ before most of these to give them their special meaning. Or you can pass -E.
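For example, with GNU sed (BSD sed differs in the details), the same substitution written both ways:

echo 'aaa' | sed 's/a\+/X/'     # basic regex: + must be escaped to mean "one or more"
echo 'aaa' | sed -E 's/a+/X/'   # with -E, + is special by default; both print "X"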
* and + are, by default, "greedy": they will match as much text as they can. In some regular expression implementations, you can just suffix * or + with a ? to make them non-greedy, but sadly sed doesn't support that. We could switch to perl's command-line mode though, which does support that construct:
perl -pe 's/.*?Disconnected from //'

We can use "capture groups". Any text matched by a regex surrounded by parentheses is stored in a numbered capture group. These are available in the substitution (and in some engines, even in the pattern itself!) as \1, \2, \3, etc. So:
sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'

sort -n will sort in numeric (instead of lexicographic) order. -k1,1 means "sort by only the first whitespace-separated column". The ,n part says "sort until the nth field", where the default is the end of the line. In this particular example, sorting by the whole line wouldn't matter, but we're here to learn!
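To see why -n matters, compare numeric and lexicographic sorting on some made-up counts:

printf '10 alice\n2 bob\n' | sort -k1,1    # lexicographic: "10 alice" sorts before "2 bob"
printf '10 alice\n2 bob\n' | sort -nk1,1   # numeric: "2 bob" comes first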
What if we'd like to extract only the usernames as a comma-separated list instead of one per line, perhaps for a config file?
ssh myserver journalctl
| grep sshd
| grep "Disconnected from"
| sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'
| sort | uniq -c
| sort -nk1,1 | tail -n10
| awk '{print $2}' | paste -sd,

Let's start with paste: it lets you combine lines (-s) by a given single-character delimiter (-d; , in this case). But what's this awk business?
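Before moving on to awk, a standalone example (the names are invented) makes paste's behavior concrete:

printf 'alice\nbob\ncarol\n' | paste -sd,
# prints: alice,bob,carol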
awk
awk is a programming language that just happens to be really good at processing text streams. There is a lot to say about awk if you were to learn it properly, but as with many other things here, we'll just go through the basics.
First, what does {print $2} do? Well, awk programs take the form of an optional pattern plus a block saying what to do if the pattern matches a given line. The default pattern (which we used above) matches all lines. Inside the block, $0 is set to the entire line's contents, and $1 through $n are set to the line's 1st through nth fields, when separated by the awk field separator (whitespace by default, change with -F). In this case, we're saying that, for every line, print the contents of the second field, which happens to be the username!
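Two tiny, made-up examples of field access:

echo 'one two three' | awk '{ print $2 }'   # default whitespace separator: prints "two"
echo 'a:b:c' | awk -F: '{ print $3 }'       # -F sets the field separator: prints "c"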
Let's see if we can do something fancier. Let's compute the number of single-use usernames that start with c and end with e:
| awk '$1 == 1 && $2 ~ /^c[^ ]*e$/ { print $2 }' | wc -l

There's a lot to unpack here. First, notice that we now have a pattern (the stuff that goes before {...}). The pattern says that the first field of the line should be equal to 1 (that's the count from uniq -c), and that the second field should match the given regular expression. And the block just says to print the username. We then count the number of lines in the output with wc -l.
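You can sanity-check the pattern against some fabricated uniq -c-style input:

printf '1 cole\n2 carole\n1 dave\n' | awk '$1 == 1 && $2 ~ /^c[^ ]*e$/ { print $2 }'
# prints only "cole": carole appears twice, and dave doesn't match the regex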
However, awk is a programming language, remember?
BEGIN { rows = 0 }
$1 == 1 && $2 ~ /^c[^ ]*e$/ { rows += $1 }
END { print rows }

BEGIN is a pattern that matches the start of the input (and END matches the end). Now, the per-line block just adds the count from the first field (although it'll always be 1 in this case), and then we print it out at the end. In fact, we could get rid of grep and sed entirely, because awk can do it all, but we'll leave that as an exercise to the reader.
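If BEGIN and END feel abstract, here is a minimal, self-contained illustration on made-up input:

printf '3 a\n5 b\n' | awk 'BEGIN { sum = 0 } { sum += $1 } END { print NR, sum }'
# prints "2 8": NR is the number of input records (lines), sum totals the first field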
Analyzing data
You can do math directly in your shell using bc, a calculator that can read from STDIN! For example, add the numbers on each line together by concatenating them, delimited by +, into a single expression (the lines 1, 2, and 3 become 1+2+3, which bc evaluates to 6):
some_command(s) | paste -sd+ | bc -l

Or produce more elaborate expressions:
echo "2*($(data | paste -sd+))" | bc -lYou can get stats in a variety of ways. st is pretty neat, but if you already have R:
ssh myserver journalctl
| grep sshd
| grep "Disconnected from"
| sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'
| sort | uniq -c
| awk '{print $1}' | R --slave -e 'x <- scan(file="stdin", quiet=TRUE); summary(x)'

R is another (weird) programming language that's great at data analysis and plotting. We won't go into too much detail, but suffice it to say that summary prints summary statistics for a vector, and we created a vector containing the input stream of numbers, so R gives us the statistics we wanted!
If you just want some simple plotting, gnuplot is your friend:
ssh myserver journalctl
| grep sshd
| grep "Disconnected from"
| sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'
| sort | uniq -c
| sort -nk1,1 | tail -n10
| gnuplot -p -e 'set boxwidth 0.5; plot "-" using 1:xtic(2) with boxes'

Data wrangling to make arguments
Sometimes you want to do data wrangling to find things to install or remove based on some longer list. The data wrangling we've talked about so far + xargs can be a powerful combo.
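The general trick is that xargs turns lines on STDIN into command-line arguments; a harmless, made-up demo:

printf 'a\nb\nc\n' | xargs echo removing
# runs: echo removing a b c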
For example, as seen in lecture, I can use the following command to uninstall old nightly builds of Rust from my system by extracting the old build names using data wrangling tools and then passing them via xargs to the uninstaller:
rustup toolchain list | grep nightly | grep -vE "nightly-x86" | sed 's/-x86.*//' | xargs rustup toolchain uninstall

Wrangling binary data
So far, we have mostly talked about wrangling textual data, but pipes are just as useful for binary data. For example, we can use ffmpeg to capture an image from our camera, convert it to grayscale, compress it, send it to a remote machine over SSH, decompress it there, make a copy, and then display it.
ffmpeg -loglevel panic -i /dev/video0 -frames 1 -f image2 -
| convert - -colorspace gray -
| gzip
| ssh mymachine 'gzip -d | tee copy.jpg | env DISPLAY=:0 feh -'