Data Wrangling
Regular expressions and sed
Regular expressions are common and useful enough that it’s worthwhile to take some time to understand how they work. Let’s start by looking at the one we used above: `/.*Disconnected from /`. Regular expressions are usually (though not always) surrounded by `/`. Most ASCII characters just carry their normal meaning, but some characters have “special” matching behavior. Exactly which characters do what varies somewhat between different implementations of regular expressions, which is a source of great frustration. Very common patterns are:
- `.` means “any single character” except newline
- `*` zero or more of the preceding match
- `+` one or more of the preceding match
- `[abc]` any one character of `a`, `b`, and `c`
- `(RX1|RX2)` either something that matches `RX1` or `RX2`
- `^` the start of the line
- `$` the end of the line
`sed`’s regular expressions are somewhat weird, and will require you to put a `\` before most of these to give them their special meaning. Or you can pass `-E`.
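A quick illustration of the difference, on made-up input (GNU `sed` also accepts `\+` in basic mode, but that is an extension):

```bash
# Basic regular expressions: an unescaped + matches a literal "+"
echo 'a+b' | sed 's/a+/X/'      # prints: Xb

# With -E (extended), + means "one or more of the preceding match"
echo 'aaab' | sed -E 's/a+/X/'  # prints: Xb
```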
`*` and `+` are, by default, “greedy”: they will match as much text as they can. In some regular expression implementations, you can just suffix `*` or `+` with a `?` to make them non-greedy, but sadly `sed` doesn’t support that. We could switch to perl’s command-line mode though, which does support that construct:
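A minimal sketch of the difference, using a made-up line that contains the pattern twice:

```bash
line='Jan 17 sshd: Disconnected from user foo Disconnected from user bar'

# Greedy: .* matches as much as it can, so it skips to the LAST occurrence
echo "$line" | sed 's/.*Disconnected from //'
# user bar

# Non-greedy: .*? matches as little as it can, stopping at the FIRST
echo "$line" | perl -pe 's/.*?Disconnected from //'
# user foo Disconnected from user bar
```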
We can use “capture groups”. Any text matched by a regex surrounded by parentheses is stored in a numbered capture group. These are available in the substitution (and in some engines, even in the pattern itself!) as `\1`, `\2`, `\3`, etc. So:
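For instance, a sketch along the lines of the ssh-log example above (the exact log format here is an assumption):

```bash
echo 'Disconnected from invalid user admin 46.97.239.16 port 55920' \
  | sed -E 's/.*Disconnected from (invalid )?user (.*) [^ ]+ port [0-9]+$/\2/'
# admin   <- \2 is the second capture group: the username
```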
`sort -n` will sort in numeric (instead of lexicographic) order. `-k1,1` means “sort by only the first whitespace-separated column”. The `,n` part says “sort until the `n`th field”, where the default is the end of the line. In this particular example, sorting by the whole line wouldn’t matter, but we’re here to learn!
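For example, with some made-up counts:

```bash
printf '2 bob\n10 alice\n3 carol\n' | sort -nk1,1
# 2 bob
# 3 carol
# 10 alice
```

A plain lexicographic `sort` would have put `10` before `2`.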
What if we’d like to extract only the usernames as a comma-separated list instead of one per line, perhaps for a config file?
Let’s start with `paste`: it lets you combine lines (`-s`) by a given single-character delimiter (`-d`; `,` in this case). But what’s this `awk` business?
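Here is `paste` on its own, with made-up input:

```bash
printf 'alice\nbob\ncarol\n' | paste -sd, -
# alice,bob,carol
```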
awk
`awk` is a programming language that just happens to be really good at processing text streams. There is a lot to say about `awk` if you were to learn it properly, but as with many other things here, we’ll just go through the basics.
First, what does `{print $2}` do? Well, `awk` programs take the form of an optional pattern plus a block saying what to do if the pattern matches a given line. The default pattern (which we used above) matches all lines. Inside the block, `$0` is set to the entire line’s contents, and `$1` through `$n` are set to the `n`th field of that line, when separated by the `awk` field separator (whitespace by default, change with `-F`). In this case, we’re saying that, for every line, print the contents of the second field, which happens to be the username!
Let’s see if we can do something fancier. Let’s compute the number of single-use usernames that start with `c` and end with `e`:
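The one-liner looks roughly like this (shown here on made-up `uniq -c`-style input):

```bash
printf '1 claire\n2 cole\n1 bob\n1 ce\n' \
  | awk '$1 == 1 && $2 ~ /^c[^ ]*e$/ { print $2 }' \
  | wc -l
# 2   (claire and ce match; cole is excluded because its count is 2)
```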
There’s a lot to unpack here. First, notice that we now have a pattern (the stuff that goes before `{...}`). The pattern says that the first field of the line should be equal to 1 (that’s the count from `uniq -c`), and that the second field should match the given regular expression. And the block just says to print the username. We then count the number of lines in the output with `wc -l`.
However, `awk` is a programming language, remember?
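We can make `awk` do the counting itself, something like this (same made-up input as above):

```bash
printf '1 claire\n2 cole\n1 bob\n1 ce\n' \
  | awk 'BEGIN { rows = 0 } $1 == 1 && $2 ~ /^c[^ ]*e$/ { rows += $1 } END { print rows }'
# 2
```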
`BEGIN` is a pattern that matches the start of the input (and `END` matches the end). Now, the per-line block just adds the count from the first field (although it’ll always be 1 in this case), and then we print it out at the end. In fact, we could get rid of `grep` and `sed` entirely, because `awk` can do it all, but we’ll leave that as an exercise to the reader.
Analyzing data
You can do math directly in your shell using `bc`, a calculator that can read from STDIN! For example, add the numbers on each line together by concatenating them together, delimited by `+`:
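For example:

```bash
printf '1\n2\n3\n' | paste -sd+ - | bc -l
# 6
```

`paste` turns the lines into the expression `1+2+3`, which `bc` then evaluates.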
Or produce more elaborate expressions:
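Something like:

```bash
echo "2*($(printf '1\n2\n3\n' | paste -sd+ -))" | bc -l
# 12
```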
You can get stats in a variety of ways. `st` is pretty neat, but if you already have R:
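A sketch, assuming R is installed (older R versions spell `--no-echo` as `--slave`):

```bash
printf '1\n2\n3\n4\n5\n' \
  | R --no-echo -e 'x <- scan(file="stdin", quiet=TRUE); summary(x)'
#    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
#       1       2       3       3       4       5
```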
R is another (weird) programming language that’s great at data analysis and plotting. We won’t go into too much detail, but suffice to say that `summary` prints summary statistics for a vector, and we created a vector containing the input stream of numbers, so R gives us the statistics we wanted!
If you just want some simple plotting, `gnuplot` is your friend:
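For example, a quick histogram of per-username counts, along the lines shown in lecture (made-up input):

```bash
printf '3 alice\n5 bob\n2 carol\n' \
  | gnuplot -p -e 'set boxwidth 0.5; plot "-" using 1:xtic(2) with boxes'
```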
Data wrangling to make arguments
Sometimes you want to do data wrangling to find things to install or remove based on some longer list. The data wrangling we’ve talked about so far + `xargs` can be a powerful combo.
For example, as seen in lecture, I can use the following command to uninstall old nightly builds of Rust from my system by extracting the old build names using data wrangling tools and then passing them via `xargs` to the uninstaller:
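The pipeline looks roughly like this (reconstructed; the exact toolchain names will differ per system):

```bash
# list toolchains, keep only nightlies, drop the current nightly-x86_64-...,
# strip the target triple, and hand each remaining name to the uninstaller
rustup toolchain list | grep nightly \
  | grep -vE "nightly-x86" \
  | sed 's/-x86.*//' \
  | xargs rustup toolchain uninstall
```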
Wrangling binary data
So far, we have mostly talked about wrangling textual data, but pipes are just as useful for binary data. For example, we can use ffmpeg to capture an image from our camera, convert it to grayscale, compress it, send it to a remote machine over SSH, decompress it there, make a copy, and then display it.
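A sketch of such a pipeline, assuming a Linux webcam at `/dev/video0`, ImageMagick’s `convert` locally, and `feh` on the remote machine (`mymachine` is a placeholder):

```bash
# Capture one frame from the webcam, convert it to grayscale, compress it,
# then on the remote machine: decompress, save a copy, and display it.
ffmpeg -loglevel panic -i /dev/video0 -frames 1 -f image2 - \
  | convert - -colorspace gray - \
  | gzip \
  | ssh mymachine 'gzip -d | tee copy.jpg | env DISPLAY=:0 feh -'
```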