By Taisa
In this post, we’ll walk through how to solve the Leaping Log Analysis challenge using only command-line tools (plus a little OSINT). This post is the second in a series. (Part 3 drops on 02/17.)
Prerequisites
- Command-line Log Analysis FOR THE WIN (1/3): How to Approach a Wild Log – Explains the methodologies and tools used in this walkthrough.
- Leaping – A sample challenge representative of an Easy Log Analysis challenge within the National Cyber League (NCL) Games.
In Kali:
- Save the Leaping log file to the desktop of your Kali virtual machine
- Rename the file leaping.log
- Open a Terminal window in Kali
- In the Terminal, type:
cd Desktop
- To verify that the file is present, type:
ls
Chromebook:
- Log into the Chromebook (no guest support for Linux)
- Save the Leaping log file to the Linux files folder
- Rename the file leaping.log
- Open a Terminal window
- To verify the file has been shared with Linux, type:
ls
Related Posts
- Leaping into Log Analysis – WebWitch’s writeup for the Leaping challenge, an essential introduction to command-line tools if you are brand new to the command line.
- Sharpening the Axe: How to Cut and Carve Logs in the NCL – John “Mako” McGill’s introduction to using grep, regex, awk, sed, uniq, sort, and the pipe tool to remove unwanted data as a means of exposing target data.
Always Make a Table
To our dismay, we did not find explicit explanations of the fields for this type of log in Part 1. The fields are fairly easy to intuit, though, and we did find documented samples of the entries generated by different kinds of events. That’s enough to make us confident about what we can generally expect to find in this 10,650-line log.
In case you missed it, here’s a review of that process:
The Leaping challenge tells us we are looking at a VSFTPD log. What happens when you Google VSFTPD log format? Nothing useful? How about when you refine the search by appending terms that appear repeatedly within the log, like “CONNECT” and “FAIL LOGIN”? Try this Google search: vsftpd log format connect “fail login”—and notice the result that has “documentation” in the title.
Command-line Log Analysis FOR THE WIN (1/3): How to Approach a Wild Log
Log Format Table for “Leaping”
Sample Log Entries:
Sun Mar 19 03:38:38 2017 [pid 24540] CONNECT: Client "59.188.221.110"
Sun Mar 19 03:38:42 2017 [pid 24539] [anonymous] FAIL LOGIN: Client "59.188.221.110"
| Field Description | Example |
|---|---|
| Time in format: weekday month day hour:minute:second year | Sun Mar 19 03:38:42 2017 |
| Identifier labeled pid | [pid 24539] |
| Field not always present: username | [anonymous] |
| Event | FAIL LOGIN: |
| Client / remote host IP address | Client "59.188.221.110" |
| Field not always present: additional information | (This field doesn’t appear in our sample log entries above, but we know it exists from OSINT[*].) |
Leaping #1

What is the username of the person who uploaded documents to the server?
Tools We’ll Use
Utility | Switch | Description |
---|---|---|
cat | outputs file content, e.g. cat leaping.log | |
| | “pipe” – takes the output of the command on the left and uses it as input for the command on the right | |
grep | conducts a case-sensitive search | |
-i | makes the search case-insensitive | |
-v | excludes lines that match the search term | |
\| | the “or” operator, being escaped by a backslash so that the system doesn’t try to interpret it as pipe |
Approach
There are a few good ways to approach this question.
Option 1: Show Me What I Want
We know this is a VSFTPD log, and OSINT showed us exactly what the log entry for an upload looks like. This was the sample(*):
Sun Aug 27 16:28:20 2006 [pid 13962] [xx] OK UPLOAD: Client "1.2.3.4", "/a.php", 8338 bytes, 18.77Kbyte/sec
Knowing the exact case and syntax for an upload, then, either of these commands will isolate the log entries for uploads:
cat leaping.log | grep UPLOAD
cat leaping.log | grep "OK UPLOAD"
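If you’d like to see the filter behave before touching the challenge file, here’s a minimal sketch that pipes two invented entries (modeled on the samples in this post, not the real leaping.log) straight into grep:

```shell
# Two made-up entries standing in for leaping.log; only the UPLOAD line survives.
printf '%s\n' \
  'Sun Mar 19 03:38:38 2017 [pid 24540] CONNECT: Client "59.188.221.110"' \
  'Tue Mar 21 21:27:45 2017 [pid 7818] [polimer] OK UPLOAD: Client "5.138.42.134", "/design/template1.html", 2270 bytes, 13.15Kbyte/sec' \
  | grep "OK UPLOAD"
```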
Option 2: Take Away What I Don’t Want
Examining the first few log entries, we see entry types that we know won’t help us answer a question about uploads:
Sun Mar 19 03:38:38 2017 [pid 24540] CONNECT: Client "59.188.221.110"
Sun Mar 19 03:38:42 2017 [pid 24539] [anonymous] FAIL LOGIN: Client "59.188.221.110"
The Mako Method of using grep -v allows us to eliminate uninteresting entries until the only thing left behind is the element we’re looking for. Try out these commands for removing entries that contain the words CONNECT or FAIL:
cat leaping.log | grep -v CONNECT | grep -v FAIL
cat leaping.log | grep -v "CONNECT\|FAIL"
That eliminated over 10,600 log entries! What’s left in the 44 lines that remain?
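To watch the subtraction happen on a small scale, try the same grep -v filter on a three-line stand-in for the log (invented entries, not the real file):

```shell
# Everything containing CONNECT or FAIL drops out; only the UPLOAD entry remains.
printf '%s\n' \
  'Sun Mar 19 03:38:38 2017 [pid 24540] CONNECT: Client "59.188.221.110"' \
  'Sun Mar 19 03:38:42 2017 [pid 24539] [anonymous] FAIL LOGIN: Client "59.188.221.110"' \
  'Tue Mar 21 21:27:45 2017 [pid 7818] [polimer] OK UPLOAD: Client "5.138.42.134", "/design/template1.html", 2270 bytes, 13.15Kbyte/sec' \
  | grep -v "CONNECT\|FAIL"
```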
Option 3: I’m Feeling Lucky!
It could be reasonably deduced, just from the question, that the log entries we’re looking for will contain the string upload. The grep utility will find the string upload for us even if it’s part of a longer string, like uploaded, uploading, or even *_&%47-!uploadBOOGER:{).
But grep is case-sensitive by default. See how many results are returned when you run this command:
cat leaping.log | grep upload
Absolutely none! Does that mean, if we don’t know the exact case ahead of time, we have to run multiple searches to check for all possibilities, like upload, Upload, and UPLOAD?
Heck, nah! There’s a switch for that! (Whew!) Try it out:
cat leaping.log | grep -i upload
(If there was an instance of uPlOaD, it would return that, too!)
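A quick way to convince yourself is to feed grep -i a few mixed-case lines of your own invention:

```shell
printf 'OK UPLOAD\nok upload\nuPlOaD party\nOK DOWNLOAD\n' | grep -i upload
# matches the first three lines; "DOWNLOAD" never contains the string "upload"
```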
The question indicates that we should expect to find just one person who uploaded documents. What field should we look in for their username? Refer back to the Log Format Table if you’re unsure!
Leaping #2

How many bytes were downloaded off the server?
Tools We’ll Use
| Utility | Switch | Description |
|---|---|---|
| cat | | outputs file content, e.g. cat leaping.log |
| \| | | “pipe” – takes the output of the command on the left and uses it as input for the command on the right |
| \| | | “or” operator – yes, it’s the same as the “pipe” character, but in the way we’re about to use it we won’t need to escape it |
| grep | | conducts a case-sensitive search |
| | -i | makes the search case-insensitive |
| cut | | extracts fields (columns that you specify) from lines |
| | -d | specifies the use of a delimiter other than the default tab; it can only be one character |
| | -f | denotes which field(s) to display, indicated by their numerical order of appearance |
| tr | | “translate” – a tool for modifying output |
| | -d | deletes characters, either specific strings or entire classes of characters |
| paste | | merges lines of files into parallel columns |
| | -s | “serial” – merges items from a column-style list onto a single line and separates them by a tab |
| | -d | specifies the use of a delimiter other than the default tab |
| bc | | a calculator; may need to be installed (sudo apt-get install bc) |
| echo | | generates the string you specify as output |
APPROACH
STEP 1 – ISOLATE THE LOG ENTRIES FOR DOWNLOADS
Now that we know how to identify uploads, the same methodology can be used to identify downloads. Refer back to Leaping #1 and try isolating the downloads this time!
There are just 16 log entries, and they include additional fields that weren’t present in our sample log entries from the head of leaping.log. Fortunately, these fields are clearly labeled, so we know exactly where to find the byte values:
Tue Mar 21 21:23:27 2017 [pid 7818] [polimer] OK DOWNLOAD: Client "5.138.42.134", "/santa3.15.zip", 6583053 bytes, 993.96Kbyte/sec
We could add these values up by hand—but that would be error-prone and wouldn’t teach us anything new! Challenges that can be done by hand are perfect for testing unfamiliar tools on, because we’re able to check our solutions before risking our accuracy. We may just find that our manual calculations were off, and it was our command-line tools that got it right!
STEP 2 – ISOLATE THE FIELD FOR BYTE VALUES
Once you’ve isolated the 16 log entries for downloads, the next step is to isolate the field that holds the bytes. The cut command comes in handy for this. Here are a couple of different approaches to illustrate how it works:
Option A: Cutting Fields Separated by a Space
cat leaping.log | grep -i download | cut -d " " -f 14
This approach uses a space ( -d " " ) as the delimiter—the character which cut will use to determine how to separate fields. When a space is used as the delimiter, the bytes are found in the 14th field.
This example demonstrates how to count the fields when you designate a space as the delimiter:
1=Tue  2=Mar  3=21  4=21:27:02  5=2017  6=[pid  7=7818]  8=[polimer]
9=OK  10=DOWNLOAD:  11=Client  12="5.138.42.134",  13="/design/template1.html",
14=2838  15=bytes,  16=29.30Kbyte/sec
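You can check the count with cut itself; this sketch feeds that single sample line in on stdin instead of reading leaping.log:

```shell
printf '%s\n' 'Tue Mar 21 21:27:02 2017 [pid 7818] [polimer] OK DOWNLOAD: Client "5.138.42.134", "/design/template1.html", 2838 bytes, 29.30Kbyte/sec' \
  | cut -d " " -f 14
# → 2838
```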
Option B: Cutting Fields Separated by a Comma
cat leaping.log | grep -i download | cut -d "," -f 3
This approach uses a comma ( -d "," ) as the delimiter between fields. Now the bytes are found in the 3rd field, but in this case the field includes everything between the commas—spaces and the word bytes.
Returning to the previous example, here’s how the fields are counted when they’re delimited by a comma instead of a space:
Field 1: Tue Mar 21 21:27:02 2017 [pid 7818] [polimer] OK DOWNLOAD: Client "5.138.42.134"
Field 2:  "/design/template1.html"
Field 3:  2838 bytes
Field 4:  29.30Kbyte/sec
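Again you can verify with a one-line stand-in; note how the extracted field keeps its leading space and the word bytes:

```shell
printf '%s\n' 'Tue Mar 21 21:27:02 2017 [pid 7818] [polimer] OK DOWNLOAD: Client "5.138.42.134", "/design/template1.html", 2838 bytes, 29.30Kbyte/sec' \
  | cut -d "," -f 3
# →  2838 bytes
```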
While separating by spaces is clearly the better way to isolate the bytes in this log, what if we had a more complicated log where one cut, well, simply wouldn’t cut it?
WHAT IF cut REMOVED TOO MUCH? – PRESERVING MORE FIELDS
The -f (“field”) switch for the cut utility is pretty flexible! You can tell it you want multiple fields, a range of fields, or all fields after a certain point, and even mix and match.
Try each of these commands to see what the different outputs look like:
cat leaping.log | grep -i download | cut -d " " -f 8,12,14 cat leaping.log | grep -i download | cut -d " " -f 8-15 cat leaping.log | grep -i download | cut -d " " -f 8,14-15 cat leaping.log | grep -i download | cut -d "," -f 3-
WHAT IF cut DIDN’T REMOVE ENOUGH? – TRIMMING EXCESS CHARACTERS
Pretend that the size field was presented as 100bytes (no space) instead of 100 bytes, or maybe there’s a trailing comma stuck to your numbers, like 100,. There are a few ways to strip off unwanted characters.
Run this command to see the troublesome data set we’ll work with:
cat leaping.log | grep -i download | cut -d "," -f 3
Option 1: One More cut
An additional cut could be used to isolate the bytes and trim off the excess:
cat leaping.log | grep -i download | cut -d "," -f 3 | cut -d " " -f 2
Notice that the bytes are no longer in field 14 in the second cut. The new cut is based on the output of the command on the left side of the pipe.
Now we’re going to intentionally make a mess (yay!!) to show you how to get out of it. You don’t need to understand all the elements of this next command yet, but check out its output because that’s our new data set:
cat leaping.log | grep -i download | cut -d " " -f 14-15 | tr -d " "

4356bytes,
52722bytes,
6583053bytes,
2838bytes,
2270bytes,
2382bytes,
2471bytes,
2383bytes,
...
How can one more cut isolate those digits?
A delimiter can only be one character, and we’ve used spaces and commas as delimiters so far, but the delimiter doesn’t have to be punctuation. Try telling cut to treat the letter b as a delimiter:
cat leaping.log | grep -i download | cut -d " " -f 14-15 | tr -d " " | cut -d "b" -f 1
Option 2: Deleting with the tr Tool
Back to that tr tool we used to make our mess! Turns out, it can be used to unmake the mess as well by deleting any strings or character classes that we tell it to delete. Try these commands:
cat leaping.log | grep -i download | cut -d " " -f 14-15 | tr -d " " | tr -d "bytes,"
cat leaping.log | grep -i download | cut -d " " -f 14-15 | tr -d " " | tr -d "[:alpha:][:punct:]"
Check out tr --help to see all its useful options.
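Here’s tr -d in isolation on the kind of messy values we just produced, so you can see exactly which characters disappear:

```shell
# Deletes every b, y, t, e, s, and comma, leaving just the digits:
printf '2838bytes,\n6583053bytes,\n' | tr -d "bytes,"
# The character-class version does the same without spelling the letters out:
printf '2838bytes,\n' | tr -d "[:alpha:][:punct:]"
```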
STEP 3 – SUM THE BYTE VALUES
(I did not come up with this clever trick! I had Googled something like “how to add a list of numbers together in command-line,” and this gem popped up. There are no bad questions to ask Google!)
Now that we’ve isolated the bytes, how do we add them up if not by hand? (CryptoKait would propose Excel here[*].)
First, run this command and notice what the paste utility does to our column of isolated byte values when the -s switch is used:
cat leaping.log | grep -i download | cut -d " " -f 14 | paste -s
The paste utility is normally used to collate multiple files into parallel columns. The -s (“serial”) switch tells paste to present the input serially, as a single row, instead. The command above moved all our byte values onto one line, separated by tabs.
Just as with the cut utility, paste’s default tab delimiter can be set to another character that we specify. See how the output from this command looks:
cat leaping.log | grep -i download | cut -d " " -f 14 | paste -s -d "+"
Now we have a list of numbers with plus signs between them instead of tabs.
The bc tool is a calculator. Note that you might have to install it (sudo apt-get install bc). You can throw a few different operations at it, but we’re only interested in performing addition for now. Run this command for a sneak peek at how bc works on numbers with plus signs between them:
echo "1+1" | bc
Now try feeding it the byte values!
cat leaping.log | grep -i download | cut -d " " -f 14 | paste -s -d "+" | bc
Does the total match your manual calculations? Might that come in handy when you’re adding 16,000 values together instead of just 16?
Leaping #3

How many unique IP addresses tried to connect to the server?
Tools We’ll Use
| Utility | Switch | Description |
|---|---|---|
| cat | | outputs file content, e.g. cat leaping.log |
| \| | | “pipe” – takes the output of the command on the left and uses it as input for the command on the right |
| grep | | conducts a case-sensitive search |
| | -E | adds support for a more extensive regular expression language (regex); we’ll use it to unlock match repetition (“{ }”) |
| | -o | extracts matching strings from their lines |
| cut | | extracts fields (columns that you specify) from lines |
| | -d | specifies the use of a delimiter other than the default tab; it can only be one character |
| | -f | denotes which field(s) to display, indicated by their numerical order of appearance |
| head | | displays the first 10 lines of input; use head -[number] to change the default number |
| sort | | alphabetizes input, e.g. a, b, c, 1, 101, 2 |
| | -n | sorts input numerically instead of alphabetically (so instead of 1, 101, 2 you get 1, 2, 101) |
| | -r | sorts in reverse order (handy for showing largest numbers first) |
| uniq | | deduplicates – detects repeated lines if they are adjacent and removes the duplicates |
| | -c | prefixes each line with the number of adjacent occurrences, so you know how many were found in that position before the duplicates were removed |
| wc | | “word count” – counts the number of lines, words, and characters |
| | -l | displays the line count only |
Approach
STEP 1 – ISOLATE THE IP ADDRESSES
Time to isolate the IP addresses! In this case, it can be done using the tricks we’ve learned so far.
Since we’re only interested in IP addresses that tried to connect to the server, we could just focus on log entries with a CONNECT event and make quick work of cutting the IP field this way:
cat leaping.log | grep CONNECT | cut -d " " -f 10 | head
(We’re using the head command here to display just a sample set of the first 10 log entries. The output would be 5,409 lines otherwise.)
And we’re done with Step 1!
But let’s pretend we needed to check every log entry for unique IPs—not just the CONNECT events. In this case, it doesn’t change the outcome, because every IP had to connect before performing any other actions (so there are no wild IPs that appear out of nowhere and start downloading, etc.). We’ll still end up with the same number of unique IPs at the end of this exercise. But we’ll also be so much smarter!
If you use the cut tool now—on all the log entries, not just the CONNECT events—with the space as the delimiter, you find that variations in entry types prevent this from working to isolate the IP addresses. Try it to see what happens:
cat leaping.log | cut -d " " -f 10 | head
The output looks like this:
"59.188.221.110"
LOGIN:
"121.206.121.31"
LOGIN:
...
The IP address doesn’t occur in the same position in every log entry, at least not when using the space as the delimiter.
Is there another delimiter you could use in this case to put the IP in a predictable location in every log entry?
Just as a learning exercise, let’s stare at one of each of the different log entry types to see if we can figure out a good delimiter for isolating the IP addresses:
Sun Mar 19 03:38:38 2017 [pid 24540] CONNECT: Client "59.188.221.110"
Sun Mar 19 03:38:42 2017 [pid 24539] [anonymous] FAIL LOGIN: Client "59.188.221.110"
Mon Mar 20 17:46:16 2017 [pid 29740] [polimer] OK LOGIN: Client "94.25.62.32"
Mon Mar 20 17:46:42 2017 [pid 29769] [polimer] OK DOWNLOAD: Client "94.25.62.32", "/modules/gallery/templates_user/template_user.html", 4356 bytes, 29.33Kbyte/sec
Tue Mar 21 21:27:45 2017 [pid 7818] [polimer] OK UPLOAD: Client "5.138.42.134", "/design/template1.html", 2270 bytes, 13.15Kbyte/sec
The colon ( : ) would actually work, but it’s a little sloppy. Run this command to see what using a colon as the delimiter looks like:
cat leaping.log | cut -d ":" -f 4 | head
How about the quotation mark ( " )?
Notice that when you specify the quotation mark as the delimiter, you have to let the system know not to interpret it as anything other than a literal character. Here are two ways to do that:
cat leaping.log | cut -d '"' -f 2 | head
cat leaping.log | cut -d "\"" -f 2 | head
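Since every entry type wraps the IP in the first pair of quotation marks, the 2nd quote-delimited field is the IP no matter the line format. A self-contained check with invented entries:

```shell
printf '%s\n' \
  'Sun Mar 19 03:38:38 2017 [pid 24540] CONNECT: Client "59.188.221.110"' \
  'Mon Mar 20 17:46:16 2017 [pid 29740] [polimer] OK LOGIN: Client "94.25.62.32"' \
  | cut -d '"' -f 2
# → 59.188.221.110
#   94.25.62.32
```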
And we’re done with Step 1… again!
But what if there wasn’t a good delimiter that could place the IP address in a predictable field for every line? (You want to punch me right now, I know, but I swear you’ll thank me for this later!)
REPRESENTING IP ADDRESSES IN REGEX
When you look at these log entries, your eyes can quickly pick out the IP address on every line, no matter where it is, because IPv4 addresses have a distinct look—a pattern—that’s predictable and unique.
IPv4 addresses always have:
- Four sets of numbers
- The numbers are 1-3 digits in length
- The numbers are separated by periods
There’s a way to represent this pattern and extract its matches using command-line, regardless of position in the log!
Mako introduces regular expressions (regex) in his blog post here and recommends a tutorial. In this post, we’ll skip straight to an example of what an IPv4 address looks like in regex.
Either of these next commands will extract IPv4 addresses from log entries based on pattern matching. Try them out with the head command (as shown here) so that you only see the first 10:
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | head
cat leaping.log | grep -Eo "[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}" | head
Here’s what the regex we’re using means:
| [0-9] | the character set to match: numbers 0 through 9 |
| [[:digit:]] | an alternative designation method for the character set to match: the character class of [:digit:], which is numbers 0 through 9 |
| {1,3} | the minimum and maximum number of times to make a match from the preceding character set; in this case, we’re saying “match any set of at least 1 but no more than 3 digits in a row” |
| \. | escapes the period so that the system doesn’t try to interpret it as a special regex character; in this case, we want a literal period |
In this manner, we’re able to isolate the IP addresses so that they’re the only things we see from the log. The problem now is that there are 10,650 of them—how do we find out how many of those are unique?
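You can test-drive the pattern on a couple of invented entries; the IPs pop out regardless of which field they occupy:

```shell
printf '%s\n' \
  'Sun Mar 19 03:38:38 2017 [pid 24540] CONNECT: Client "59.188.221.110"' \
  'Mon Mar 20 17:46:16 2017 [pid 29740] [polimer] OK LOGIN: Client "94.25.62.32"' \
  | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"
# → 59.188.221.110
#   94.25.62.32
```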
STEP 2 – SORT AND REMOVE THE DUPLICATE IP ADDRESSES
WebWitch introduces the sort and uniq commands here.
Let’s play around with them!
First, notice the difference in output between these two commands:
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | sort | head
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | sort -n | head
In the first command, sort is working alphabetically by default. In its dictionary, a 0 comes before a period, so the string “101.” comes before the string “1.” when sorting alphabetically.
Using the -n switch, though, we can tell sort to work numerically instead, listing “1.” before “101.”.
Another useful switch for sort is -r, which will list results in reverse order. This comes in handy when you need to find the largest number. We’ll drive this concept home later with better examples, but for now just take it out for a spin:
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | sort -nr | head
The sort utility actually even has its own switch for removing duplicates, -u for unique. Try it, but don’t get too attached (because there’s a better habit to get into):
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | sort -nu | head
If sort has this -u switch, then what’s the point of having a separate uniq utility? I’m so glad you asked!
What if the question didn’t just ask how many unique IP addresses there are in the log? What if it asked which unique IP address appears most frequently? You’d need a way to not only sort out the unique IP addresses but to count them, too! The sort utility can’t help us count, but the uniq utility can, and it renders sort’s -u switch redundant at best and an impediment at worst, so we can just forget that -u exists.
sort | uniq > sort -u, every time.
Before we talk about uniq, though, let’s talk about how not to use it. We’ll begin by making an important mistake!

Compare the results when you run these two commands:
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | sort | uniq -c | sort -nr | head
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | uniq -c | sort -nr | head
Output of the first command:
470 91.76.5.173
312 120.76.159.230
211 109.61.159.39
208 222.223.143.107
193 117.71.59.240
186 46.246.89.153
186 218.161.75.126
182 58.20.20.162
182 223.247.241.34
166 59.188.221.110
Output of the second command:
470 91.76.5.173
211 109.61.159.39
186 46.246.89.153
154 223.74.68.198
104 61.133.251.37
104 59.188.221.110
104 39.148.150.235
104 27.2.185.208
104 223.246.181.99
104 222.74.74.74
The numbers in the first column are how many times uniq -c counted the presence of the indicated IP addresses. Both commands above used sort and uniq, so why was their output different?
- Hint #1: Notice that the first command has an extra sort in it.
- Hint #2: Swap out the head command for the wc -l command to count the number of lines being produced as output:
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | sort | uniq -c | sort -nr | wc -l
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | uniq -c | sort -nr | wc -l
The first command suggests there are 322 unique IP addresses. The second command suggests there are 1,495.
Which command has it right?
The way uniq works, it only removes adjacent duplicate entries! When multiple entries right next to each other are the same, uniq condenses them down to a single entry. The -c switch then counts how many times that entry originally occurred in that position.
We have to sort the list first, before running uniq, to make sure that all matching entries are adjacent to one another. So the first command above has it right, and the second command has it wrong.
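The adjacency rule is easy to see with a toy list:

```shell
printf 'a\nb\na\n' | uniq
# prints a, b, a: the two a's are not adjacent, so nothing is removed
printf 'a\nb\na\n' | sort | uniq
# prints a, b: sorting first makes the duplicates adjacent
```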
After isolating the IP addresses, this is the best habit to get into for sorting and removing duplicates from a list—always a sort before a uniq (instead of a sort -u), even if that means you’ll need one more sort later, because it leaves valuable options like counting open:
cat leaping.log | grep CONNECT | cut -d '"' -f 2 | sort | uniq
STEP 3 – COUNT THE NUMBER OF UNIQUE IP ADDRESSES
Time to return to the original question: How many unique IP addresses tried to connect to the server?
Now that we’ve isolated the IP addresses, sorted them, and removed the duplicate IPs, we have a list of unique IP addresses. All that’s left is to count the number of lines by using the wc -l utility.
This command is arguably the simplest way we could have arrived at the solution:
cat leaping.log | grep CONNECT | cut -d '"' -f 2 | sort | uniq | wc -l
But now you also know what this means, and its handy elements will have much broader applications well beyond this one log:
cat leaping.log | grep -Eo "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | sort | uniq -c | sort -nr | wc -l
Leaping #4

What is the name of the user who failed to connect to the server the 3rd most times?
Recommended Tools
(Your Method May Vary)
| Utility | Switch | Description |
|---|---|---|
| cat | | outputs file content, e.g. cat leaping.log |
| \| | | “pipe” – takes the output of the command on the left and uses it as input for the command on the right |
| grep | | conducts a case-sensitive search |
| | -i | makes the search case-insensitive |
| cut | | extracts fields (columns that you specify) from lines |
| | -d | specifies the use of a delimiter other than the default tab; it can only be one character |
| | -f | denotes which field(s) to display, indicated by their numerical order of appearance |
| sort | | alphabetizes input, e.g. a, b, c, 1, 101, 2 |
| | -n | sorts input numerically instead of alphabetically (so instead of 1, 101, 2 you get 1, 2, 101) |
| | -r | sorts in reverse order (handy for showing largest numbers first) |
| uniq | | deduplicates – detects repeated lines if they are adjacent and removes the duplicates |
| | -c | prefixes each line with the number of adjacent occurrences, so you know how many were found in that position before the duplicates were removed |
| head | | displays the first 10 lines of input; use head -[number] to change the default number |
Approach
After all the edifying detours we took on the way here, you have everything you need to answer this last question! You’ve got this!!
Follow these steps to construct your command:
- What does a failed attempt to connect look like?
Hint: “fail” feels lucky!
- How can you isolate the user name from those log entries?
Hint: Refer to the Log Format Table, and try the cut utility.
- How can you sort, remove duplicates, and count the number of times each user name appears?
Hint: Review Leaping #3, Step 2.
- How can you sort the number of those appearances from greatest to least numerically?
Hint: The sort utility has switches for this!
- What’s the third item down?
Highlight this box to check your method:
cat leaping.log | grep -i fail | cut -d " " -f 8 | sort | uniq -c | sort -rn | head -3
What’s Next?

Your feedback matters! Please share your thoughts below.