Linux Hands-On Lab: Mastering Text Files, Networking & Archiving Commands

šŸ•’ Estimated Lab Time: 90–120 minutes

šŸ’» Difficulty Level: Beginner-Friendly

šŸ–„ļø Requirements: A Linux VM (CentOS/Ubuntu), terminal access via SSH or console


Introduction

Knowing Linux commands on paper is one thing. Being able to actually use them under pressure — to find a broken config file at midnight, archive logs before a server migration, or diagnose why a network connection is failing — is the real skill that gets you hired and keeps production systems running.

This hands-on lab guide takes your study notes and turns them into practical exercises you can run right now on your Linux VM. Every command includes the exact syntax, expected output, and a real-world context for why it matters.

What You’ll Practice:

  • Navigating the file system and viewing file contents
  • Checking system info, users, and resources
  • Working with text: sorting, filtering, counting, and extracting
  • Networking: checking configs, testing connections, downloading files
  • Archiving and compressing files with zip, tar, and unzip

šŸ’” Pro Tip: Before starting, take a snapshot of your VM. This gives you a clean rollback point if anything goes wrong during the exercises.


Lab Setup: Prepare Your Environment

Before running any commands, set up a dedicated workspace so your lab files stay organized.

# Step 1: Open your terminal and verify who you are
whoami

Expected output:

admin

# Step 2: Confirm your current location
pwd

Expected output:

/home/admin

# Step 3: Create a dedicated lab directory
mkdir ~/linux-lab
cd ~/linux-lab
pwd

Expected output:

/home/admin/linux-lab

# Step 4: Create sample files for the lab
touch file1.txt file2.txt file3.txt notes.txt config.bak
​
# Step 5: Verify files were created
ls -l

Expected output:

-rw-rw-r-- 1 admin admin 0 Feb 22 18:00 config.bak
-rw-rw-r-- 1 admin admin 0 Feb 22 18:00 file1.txt
-rw-rw-r-- 1 admin admin 0 Feb 22 18:00 file2.txt
-rw-rw-r-- 1 admin admin 0 Feb 22 18:00 file3.txt
-rw-rw-r-- 1 admin admin 0 Feb 22 18:00 notes.txt

Your lab workspace is ready. Let’s begin.


Module 1: System Information Commands

These commands give you an instant snapshot of who you are, what machine you’re on, and how the system is performing — essential for the first 60 seconds of any troubleshooting session.


1.1 whoami — Identify the Current User

What it does: Prints the username of the currently logged-in user.

whoami

Expected output:

admin

Why it matters: Before running any privileged command, always confirm who you are. Running destructive commands as root by mistake is one of the most common beginner disasters.
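A sketch of how that check shows up in scripts (a common defensive pattern, not part of this lab's files): `id -u` prints the numeric UID, and 0 always means root.

```shell
# Pre-flight check before privileged work: confirm who the shell thinks you are.
# id and whoami are standard POSIX utilities, so this runs on any Linux system.
if [ "$(id -u)" -eq 0 ]; then
  echo "Caution: you are running as root; double-check every command"
else
  echo "Running as regular user: $(whoami)"
fi
```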


1.2 id — View Full User and Group Identity

What it does: Displays the user ID (UID), group ID (GID), and all group memberships.

id

Expected output:

uid=1000(admin) gid=1000(admin) groups=1000(admin),10(wheel)

Breaking down the output:

  • uid=1000(admin) → Your numeric user ID and username
  • gid=1000(admin) → Your primary group ID
  • groups=... → All groups you belong to (wheel = sudo access on CentOS/RHEL)

Real-world use: When a user reports ā€œPermission denied,ā€ the first thing you check is id — are they in the right group to access that resource?
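As a sketch of that triage, the check below tests membership in the wheel group shown in the output above; substitute whichever group guards the resource in question.

```shell
# Permission triage: is the current user in the group that owns the resource?
# "wheel" is the CentOS/RHEL sudo group used in this lab; adjust as needed.
if id -nG | grep -qw wheel; then
  echo "in wheel: sudo should work"
else
  echo "not in wheel: expect 'Permission denied' for sudo"
fi
```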


1.3 uname — Operating System Information

What it does: Prints information about the system kernel and OS.

# Basic OS name
uname
​
# Full system information (most useful version)
uname -a

Expected output:

Linux
Linux linuxbox 5.14.0-503.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 10 14:00:00 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

Breaking down uname -a:

Field            Example Value     Meaning
Kernel name      Linux             OS type
Hostname         linuxbox          Computer name
Kernel version   5.14.0-503.el9    Running kernel
Architecture     x86_64            CPU type (64-bit)

Flags to know:

uname -r    # Kernel version only → 5.14.0-503.el9.x86_64
uname -n    # Hostname only → linuxbox
uname -m    # Machine hardware → x86_64

1.4 df — Disk Space Usage

What it does: Displays how much disk space is used and available on all mounted filesystems.

# Human-readable output (use this always)
df -h

Expected output:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
/dev/sda2        60G  5.1G   55G   9% /
/dev/sda1       960M  261M  700M  28% /boot
tmpfs           1.9G     0  1.9G   0% /dev/shm

Critical columns explained:

  • Size → Total partition size
  • Used → Space consumed
  • Avail → Space remaining
  • Use% → Percentage used — if this hits 95%+, take action immediately

āš ļø Warning: A full disk (Use% = 100%) causes applications to crash, log files to stop writing, and databases to corrupt. Monitor disk space daily in production.


1.5 ps — View Running Processes

What it does: Shows a snapshot of currently running processes.

# Show all processes for current user
ps
​
# Show ALL system processes (most useful)
ps aux

Expected output (ps aux):

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1 0.0 0.1 171884 13312 ?       Ss   09:00   0:01 /usr/lib/systemd/systemd
root       632 0.0 0.0 18276 6656 ?       Ss   09:00   0:00 sshd: /usr/sbin/sshd
admin     2341 0.0 0.0   8896 5760 pts/0   Ss   18:00   0:00 -bash

Column breakdown:

  • PID → Process ID (unique number for each process)
  • %CPU → CPU usage percentage
  • %MEM → Memory usage percentage
  • COMMAND → The program running

Commonly combined with grep:

# Find if Apache is running
ps aux | grep httpd
​
# Find a specific user's processes
ps aux | grep admin

1.6 top — Live System Monitor

What it does: Displays real-time CPU, memory, and process information — Linux’s built-in Task Manager.

top

Expected output (live, updating every 3 seconds):

top - 18:15:22 up 3 days,  2:41,  1 user,  load average: 0.01, 0.01, 0.00
Tasks: 156 total,   1 running, 155 sleeping,   0 stopped,   0 zombie
%Cpu(s): 0.7 us, 0.3 sy, 0.0 ni, 99.0 id, 0.0 wa
MiB Mem :   3784.0 total,   2900.0 free,   500.0 used,   384.0 buff/cache
MiB Swap:   2048.0 total,   2048.0 free,     0.0 used.   3100.0 avail Mem
​
PID USER     PR NI   VIRT   RES   SHR S %CPU %MEM     TIME+ COMMAND
1234 admin     20   0   65532   5472   4096 S   0.0   0.1   0:00.03 bash

Essential keyboard shortcuts inside top:

Key   Action
q     Quit top
k     Kill a process (type PID when prompted)
M     Sort by memory usage
P     Sort by CPU usage
h     Help screen

šŸ’” For a more visual alternative, install htop: sudo yum install htop -y (CentOS) or sudo apt install htop -y (Ubuntu).


1.7 echo — Print Text or Variables

What it does: Prints a string of text or the value of a variable to the terminal.

# Print plain text
echo "Hello, Linux World!"
​
# Print a variable value
echo $HOME
​
# Print current user
echo $USER
​
# Write text into a file (> overwrites, >> appends)
echo "This is my first line" > file1.txt
echo "This is the second line" >> file1.txt

Expected output:

Hello, Linux World!
/home/admin
admin

Verify the file was written:

cat file1.txt
This is my first line
This is the second line

1.8 date — Display and Format Date/Time

What it does: Prints the current date and time with flexible formatting options.

# Default output
date

Expected output:

Sun Feb 22 18:00:00 IST 2026

# Useful format options
date +"%Y-%m-%d"           # 2026-02-22
date +"%H:%M:%S"           # 18:00:00
date +"%Y-%m-%d_%H:%M:%S"  # 2026-02-22_18:00:00

Real-world use — timestamp your backups:

# Create a backup with today's date in the filename
cp /etc/ssh/sshd_config /root/sshd_config.backup.$(date +%Y-%m-%d)
ls /root/
# Output: sshd_config.backup.2026-02-22

1.9 man — The Built-in Manual

What it does: Opens the manual page for any Linux command — your most important learning tool.

# Open the manual for any command
man ls
man grep
man tar

Navigation inside man:

Key        Action
Space      Next page
b          Previous page
/keyword   Search for keyword
n          Next search match
q          Quit

Quick reference shortcut:

# Get a one-line description of any command
whatis ls
# Output: ls (1) - list directory contents
​
whatis grep
# Output: grep (1) - print lines that match patterns

šŸ’” Career Tip: Never memorize every flag. Use man constantly. Senior engineers use man daily — it’s not a sign of weakness, it’s professional practice.


Module 2: File System Navigation and Management

2.1 ls — List Directory Contents

What it does: Lists files and directories.

# Basic list
ls
​
# Detailed list (long format)
ls -l
​
# Show hidden files too
ls -la
​
# Human-readable file sizes
ls -lh
​
# Sort by newest file at bottom (most useful for logs)
ls -ltr

Expected output (ls -lh):

total 8.0K
-rw-rw-r-- 1 admin admin   0 Feb 22 18:00 config.bak
-rw-rw-r-- 1 admin admin   37 Feb 22 18:01 file1.txt
-rw-rw-r-- 1 admin admin   0 Feb 22 18:00 file2.txt
-rw-rw-r-- 1 admin admin   0 Feb 22 18:00 file3.txt
-rw-rw-r-- 1 admin admin   0 Feb 22 18:00 notes.txt

2.2 cd — Change Directory

What it does: Navigates between directories.

# Go to home directory (fastest way)
cd ~
pwd     # /home/admin
​
# Go to root of filesystem
cd /
pwd     # /
​
# Navigate to absolute path
cd /var/log
pwd     # /var/log
​
# Go back to previous directory
cd -
pwd     # /home/admin (back where you were)
​
# Go up one level
cd ..
​
# Go up two levels
cd ../..

Practice exercise:

cd /etc
pwd             # /etc
ls | head -10   # View first 10 config files
cd -            # Return to previous directory
pwd             # Back to /home/admin/linux-lab

2.3 find — Locate Files in Your System

What it does: Searches the filesystem for files matching specific criteria.

# Find a file by name in current directory
find . -name "file1.txt"
​
# Find all .txt files in home directory
find ~ -name "*.txt"
​
# Find files modified in last 24 hours
find ~ -mtime -1
​
# Find files larger than 1MB
find / -size +1M -type f 2>/dev/null

Expected output:

./file1.txt

Real-world use:

# Find all Apache config files
sudo find /etc -name "*.conf" | grep httpd
​
# Find log files older than 7 days (cleanup candidates)
sudo find /var/log -name "*.log" -mtime +7

2.4 touch and mkdir — Create Files and Directories

# Create empty files
touch report.txt
​
# Create multiple files at once
touch log{1..5}.txt
ls
# log1.txt log2.txt log3.txt log4.txt log5.txt
​
# Create a directory
mkdir projects
​
# Create nested directories (parent + child in one command)
mkdir -p projects/webapp/config
ls -R projects/

Expected output:

projects/:
webapp
​
projects/webapp:
config

2.5 cp, mv, rm — Copy, Move, Delete

# Copy a file
cp file1.txt file1-backup.txt
ls *.txt
​
# Copy a directory (requires -R flag)
cp -R projects/ projects-backup/
​
# Move (rename) a file
mv notes.txt notes-renamed.txt
ls
​
# Delete a file
rm file2.txt
​
# Delete a directory and all its contents
rm -rf projects-backup/
​
# Verify
ls

āš ļø Critical Warning: rm -rf is permanent — there is no Recycle Bin in Linux. Always double-check with ls before running rm. This is one of the most dangerous commands for beginners.


Module 3: Viewing and Analyzing File Contents

First, let’s add meaningful content to work with:

# Populate files with sample data
cat > file1.txt << 'EOF'
banana
apple
cherry
date
elderberry
apple
fig
grape
banana
cherry
EOF
​
cat > file2.txt << 'EOF'
Name,Department,Salary
Alice,Engineering,85000
Bob,Marketing,72000
Carol,Engineering,91000
Dave,HR,65000
Eve,Marketing,78000
EOF
​
cat > file3.txt << 'EOF'
ERROR: Connection failed at 08:00
INFO: Server started at 09:00
WARNING: High memory usage at 10:30
ERROR: Disk space low at 11:15
INFO: Backup completed at 12:00
ERROR: Connection failed at 13:00
EOF

3.1 cat — Display Full File Contents

What it does: Concatenates and displays file contents.

cat file1.txt

Expected output:

banana
apple
cherry
date
elderberry
apple
fig
grape
banana
cherry

Combine multiple files:

cat file1.txt file3.txt

Append one file to another:

cat file2.txt >> notes-renamed.txt
cat notes-renamed.txt

3.2 head and tail — View Start or End of Files

What it does: head shows the first N lines; tail shows the last N lines.

# First 3 lines
head -3 file1.txt

banana
apple
cherry

# Last 3 lines
tail -3 file1.txt

grape
banana
cherry

# Real-world killer feature: watch logs in real time
tail -f /var/log/messages
# (Press Ctrl+C to stop)

šŸ’” Career Skill: tail -f is how every sysadmin monitors live log files during deployments. Learn this one well.


3.3 more — Page Through Long Files

What it does: Displays file content one screen at a time.

more /etc/passwd

Navigation:

Key     Action
Space   Next page
Enter   Next line
q       Quit

3.4 wc — Word, Line, and Character Count

What it does: Counts lines, words, and characters in a file.

wc file1.txt

Expected output:

 10  10  66 file1.txt
# (lines) (words) (characters) (filename)

# Count only lines
wc -l file1.txt     # 10 file1.txt
​
# Count only words
wc -w file1.txt     # 10 file1.txt
​
# Count files in a directory
ls /etc | wc -l     # Total number of items in /etc

Real-world use:

# How many failed login attempts today?
grep "Failed password" /var/log/secure | wc -l

3.5 sort — Sort File Contents

What it does: Sorts lines in a file alphabetically or numerically.

# Alphabetical sort
sort file1.txt

Expected output:

apple
apple
banana
banana
cherry
cherry
date
elderberry
fig
grape

# Reverse sort (Z to A)
sort -r file1.txt
​
# Sort numerically (for numbers)
sort -n file1.txt
​
# Sort numerically by the 3rd column (Salary) of the CSV
sort -t',' -k3 -n file2.txt

3.6 uniq — Remove Duplicate Lines

What it does: Filters out consecutive duplicate lines (always sort first!).

# Sort first, then remove duplicates
sort file1.txt | uniq

Expected output:

apple
banana
cherry
date
elderberry
fig
grape

# Show only duplicates with count
sort file1.txt | uniq -c | sort -rn

Expected output:

      2 banana
      2 cherry
      2 apple
      1 grape
      1 fig
      1 elderberry
      1 date

3.7 grep — Search for Text Patterns

What it does: Finds and prints lines that match a pattern — the most powerful text tool in Linux.

# Find all ERROR lines
grep "ERROR" file3.txt

Expected output:

ERROR: Connection failed at 08:00
ERROR: Disk space low at 11:15
ERROR: Connection failed at 13:00

# Case-insensitive search
grep -i "error" file3.txt
​
# Count matches
grep -c "ERROR" file3.txt      # 3
​
# Show line numbers
grep -n "ERROR" file3.txt
​
# Search recursively across all files
grep -r "ERROR" ~/linux-lab/
​
# Invert match (show lines WITHOUT the pattern)
grep -v "ERROR" file3.txt

Real-world scenarios:

# Find failed SSH logins
sudo grep "Failed password" /var/log/secure
​
# Find all users with /bin/bash shell
grep "/bin/bash" /etc/passwd
​
# Check if Apache is in the process list
ps aux | grep httpd

3.8 cut — Extract Columns from Text

What it does: Extracts specific fields or character ranges from each line.

# Extract the 1st column from the CSV (comma delimiter)
cut -d',' -f1 file2.txt

Expected output:

Name
Alice
Bob
Carol
Dave
Eve

# Extract 2nd and 3rd columns
cut -d',' -f2,3 file2.txt
​
# Extract characters 1-5 from each line
cut -c1-5 file1.txt

3.9 paste — Merge Lines from Multiple Files

What it does: Combines lines from two files side by side.

# Create two aligned files first
echo -e "Alice\nBob\nCarol" > names.txt
echo -e "Engineering\nMarketing\nHR" > departments.txt
​
# Merge them
paste names.txt departments.txt

Expected output:

Alice   Engineering
Bob     Marketing
Carol   HR

# Use custom delimiter
paste -d',' names.txt departments.txt
Alice,Engineering
Bob,Marketing
Carol,HR

Module 4: Networking Commands

4.1 hostname — View Your Machine’s Network Name

What it does: Displays or temporarily sets the system’s hostname.

hostname

Expected output:

centos-lab01

# Show full hostname with domain
hostname -f
​
# Show the machine's IP address
hostname -I

4.2 ip — View Network Interface Configuration

What it does: Shows IP addresses, network interfaces, and routing information.

# View all network interfaces and IP addresses
ip addr show

Expected output (abbreviated):

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
  inet 127.0.0.1/8 scope host lo
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
  inet 192.168.116.128/24 brd 192.168.116.255 scope global dynamic ens33

# Compact version (IPs only)
ip addr show | grep "inet "
​
# View routing table
ip route show

Note: 192.168.116.128 is your VM’s IP address — this is what you’ll use for SSH connections.
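When a script needs just the address, you can filter the ip output with awk. The echoed line below stands in for real `ip addr` output so the sketch is self-contained.

```shell
# Extract the bare IPv4 address from an "inet" line (sample input shown).
# $2 is the "address/prefix" field; sub() strips the "/24" prefix length.
echo "    inet 192.168.116.128/24 brd 192.168.116.255 scope global ens33" |
  awk '/inet / {sub(/\/.*/, "", $2); print $2}'
# prints: 192.168.116.128
```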


4.3 ping — Test Network Connectivity

What it does: Sends ICMP packets to test if a host is reachable.

# Test internet connectivity (press Ctrl+C to stop)
ping google.com
​
# Send exactly 4 packets then stop automatically
ping -c 4 google.com

Expected output:

PING google.com (142.250.185.46) 56(84) bytes of data.
64 bytes from ord37s36-in-f14.1e100.net: icmp_seq=1 ttl=117 time=11.4 ms
64 bytes from ord37s36-in-f14.1e100.net: icmp_seq=2 ttl=117 time=11.1 ms
64 bytes from ord37s36-in-f14.1e100.net: icmp_seq=3 ttl=117 time=11.3 ms
64 bytes from ord37s36-in-f14.1e100.net: icmp_seq=4 ttl=117 time=11.2 ms
​
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 11.1/11.2/11.4/0.1 ms

Interpreting results:

  • 0% packet loss → Network connection is healthy āœ…
  • 100% packet loss → Host unreachable or firewall blocking āŒ
  • High time values (>200ms) → Network latency issue āš ļø
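Because ping exits with status 0 only when it receives replies, the interpretation above can be automated. Loopback (127.0.0.1) is used here so the example works even without internet access.

```shell
# Script-friendly reachability test: branch on ping's exit status.
if ping -c 1 127.0.0.1 >/dev/null 2>&1; then
  echo "host reachable"
else
  echo "host unreachable"
fi
```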

4.4 curl — Transfer Data from URLs

What it does: Fetches content from URLs — can download files, test APIs, or display web page source.

# Display content from a URL
curl https://www.example.com
​
# Download a file (save with original filename)
curl -O https://releases.ubuntu.com/22.04/SHA256SUMS
​
# Download and save with custom filename
curl -o myfile.txt https://www.example.com
​
# Follow redirects
curl -L https://example.com
​
# Show HTTP response headers
curl -I https://www.google.com

Expected output (curl -I):

HTTP/2 200
content-type: text/html; charset=ISO-8859-1
date: Sun, 22 Feb 2026 12:30:00 GMT
server: gws
x-xss-protection: 0

Real-world use — test if a web server is responding:

curl -I http://localhost
# 200 = working, 403 = forbidden, 503 = service unavailable
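For scripts, curl can emit only the status code via `-w '%{http_code}'`. A sketch (the URL is an example target; point it at your own service):

```shell
# Capture just the HTTP status code: -s silences progress, -o discards the body.
code=$(curl -s -o /dev/null -w '%{http_code}' https://www.example.com)
case "$code" in
  200) echo "server healthy" ;;
  000) echo "connection failed" ;;
  *)   echo "server returned HTTP $code" ;;
esac
```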

4.5 wget — Download Files from the Web

What it does: Downloads files from URLs and saves them to disk.

# Download a file
wget https://releases.ubuntu.com/22.04/SHA256SUMS
​
# Download quietly (no progress output)
wget -q https://releases.ubuntu.com/22.04/SHA256SUMS
​
# Save with a different filename
wget -O checksums.txt https://releases.ubuntu.com/22.04/SHA256SUMS
​
# Verify the download
ls -lh checksums.txt

curl vs wget — When to use which:

Feature                   curl             wget
Download files            āœ… Yes           āœ… Yes
Test REST APIs            āœ… Best choice   āŒ Not ideal
Recursive download        āŒ No            āœ… Yes
Resume broken downloads   With -C - flag   With -c flag
View HTTP headers         āœ… Yes (-I)      āŒ No

Module 5: Archiving and Compression

Why does this matter? Compression saves storage space and speeds up data transfers. Every sysadmin archives logs, backs up configs, and moves data daily using these tools.

First, let’s create more content to archive:

cd ~/linux-lab
mkdir archive-practice
cd archive-practice
​
# Create sample files with content
for i in {1..5}; do
  echo "This is log file $i from server $(hostname)" > log$i.txt
  echo "Contains $(date) timestamp" >> log$i.txt
done
​
# Create a config directory
mkdir configs
echo "ServerName localhost" > configs/httpd.conf
echo "Port=5432" > configs/database.conf
echo "HOST=192.168.1.1" > configs/network.conf
​
ls -lR

5.1 zip — Compress Files into a ZIP Archive

What it does: Compresses files and folders into .zip format.

# Zip multiple files
zip logs-archive.zip log1.txt log2.txt log3.txt
​
# Verify it was created
ls -lh *.zip

Expected output:

  adding: log1.txt (deflated 8%)
  adding: log2.txt (deflated 8%)
  adding: log3.txt (deflated 8%)

# Zip an entire directory (-r = recursive)
zip -r configs-backup.zip configs/
​
# Verify contents without extracting
unzip -l configs-backup.zip

Expected output:

Archive:  configs-backup.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
       29  2026-02-22 18:15   configs/httpd.conf
       11  2026-02-22 18:15   configs/database.conf
       20  2026-02-22 18:15   configs/network.conf
---------                     -------
       60                     3 files

5.2 unzip — Extract ZIP Archives

What it does: Unpacks and decompresses .zip archives.

# Create a test directory and extract there
mkdir extracted-zip
unzip configs-backup.zip -d extracted-zip/
​
# Verify
ls extracted-zip/

Expected output:

Archive:  configs-backup.zip
inflating: extracted-zip/configs/httpd.conf
inflating: extracted-zip/configs/database.conf
inflating: extracted-zip/configs/network.conf

5.3 tar — Archive with tar (No Compression)

What it does: Groups multiple files and directories into a single .tar file (tarball) without compression.

Core tar flags reference:

Flag   Meaning
-c     Create a new archive
-x     Extract from archive
-t     List contents of archive
-f     Specify archive filename
-v     Verbose (show files being processed)
-z     Use gzip compression

# Create a tar archive (no compression)
tar -cvf logs-archive.tar log1.txt log2.txt log3.txt log4.txt log5.txt

Expected output:

log1.txt
log2.txt
log3.txt
log4.txt
log5.txt

# List contents without extracting
tar -tvf logs-archive.tar

Expected output:

-rw-rw-r-- admin/admin     62 2026-02-22 18:15 log1.txt
-rw-rw-r-- admin/admin     62 2026-02-22 18:15 log2.txt
-rw-rw-r-- admin/admin     62 2026-02-22 18:15 log3.txt
-rw-rw-r-- admin/admin     62 2026-02-22 18:15 log4.txt
-rw-rw-r-- admin/admin     62 2026-02-22 18:15 log5.txt

5.4 tar -czf — Create Compressed Archive (tar.gz)

What it does: Archives AND compresses into .tar.gz format — the most common archive format in Linux.

# Archive entire directory with gzip compression
tar -czf configs-backup.tar.gz configs/
​
# Verify compression savings
ls -lh configs-backup.tar.gz configs-backup.zip

Expected output:

-rw-rw-r-- 1 admin admin  312 Feb 22 18:20 configs-backup.tar.gz
-rw-rw-r-- 1 admin admin 498 Feb 22 18:15 configs-backup.zip

šŸ’” .tar.gz is often smaller than .zip for the same data — gzip compression is typically more efficient for text files.

# List contents of tar.gz without extracting
tar -tvf configs-backup.tar.gz

5.5 tar -xzf — Extract a tar.gz Archive

What it does: Decompresses and unpacks a .tar.gz archive.

# Create extraction directory
mkdir extracted-tar
cd extracted-tar
​
# Extract the archive here
tar -xzf ../configs-backup.tar.gz
​
# Verify
ls -lR

Expected output:

.:
configs
​
./configs:
database.conf httpd.conf network.conf

# Extract to a specific directory (without cd)
tar -xzf configs-backup.tar.gz -C /tmp/
​
# Extract just ONE specific file from archive
tar -xzf configs-backup.tar.gz configs/httpd.conf

5.6 Real-World Archiving Scenario: Back Up Your Web Server Config

This is the exact workflow a sysadmin uses before making changes to a production server:

# Step 1: Navigate to the lab directory
cd ~/linux-lab/archive-practice
​
# Step 2: Create a timestamped backup archive
tar -czf "etc-backup-$(date +%Y-%m-%d).tar.gz" configs/
​
# Step 3: Verify the backup was created with correct timestamp
ls -lh *.tar.gz
# etc-backup-2026-02-22.tar.gz
​
# Step 4: Confirm archive integrity (list contents)
tar -tvf "etc-backup-$(date +%Y-%m-%d).tar.gz"
​
# Step 5: Simulate disaster (delete the configs directory)
rm -rf configs/
ls
# configs/ is gone!
​
# Step 6: Restore from backup
tar -xzf "etc-backup-$(date +%Y-%m-%d).tar.gz"
ls -l configs/
# All files restored!

Congratulations — you just performed a professional backup and restore workflow. šŸŽ‰


Module 6: Bringing It All Together — Mini Capstone Project

Now combine everything you’ve learned into one realistic sysadmin scenario.

Scenario: You’re a junior Linux admin. You’ve been asked to:

  1. Check the system’s health
  2. Analyze some log files
  3. Filter out only critical errors
  4. Archive the logs for storage
# ─── STEP 1: System Health Check ───
echo "=== System Health Report: $(date) ===" > health-report.txt
echo "" >> health-report.txt
​
echo "--- Hostname ---" >> health-report.txt
hostname >> health-report.txt
​
echo "--- Disk Usage ---" >> health-report.txt
df -h >> health-report.txt
​
echo "--- Memory ---" >> health-report.txt
free -h >> health-report.txt
​
echo "--- Logged-in Users ---" >> health-report.txt
who >> health-report.txt
​
cat health-report.txt
# ─── STEP 2: Analyze the Log File ───
cd ~/linux-lab
​
# View full log
cat file3.txt
​
# Count total entries
wc -l file3.txt
​
# Count ERROR entries
grep -c "ERROR" file3.txt
​
# Sort by time
sort file3.txt
# ─── STEP 3: Filter Critical Errors ───
​
# Extract only ERROR lines
grep "ERROR" file3.txt > error-report.txt
​
# Extract just the timestamps (grep -o prints only the text that matches;
# cut -f5 would break on lines like "Disk space low at 11:15" where the
# timestamp is not the 5th field)
grep "ERROR" file3.txt | grep -o '[0-9][0-9]:[0-9][0-9]' > error-times.txt
​
cat error-report.txt
cat error-times.txt
# ─── STEP 4: Archive Everything ───
​
# Create final archive with timestamp
tar -czf "lab-report-$(date +%Y-%m-%d).tar.gz" \
  health-report.txt \
  error-report.txt \
  error-times.txt \
  file3.txt
​
# Verify archive
ls -lh *.tar.gz
tar -tvf "lab-report-$(date +%Y-%m-%d).tar.gz"
​
echo ""
echo "āœ… Lab Complete! Archive created successfully."

Lab Completion Checklist

Use this to verify you’ve successfully completed every module:

MODULE 1 - System Information
āœ… whoami   → Shows your current username
āœ… id       → Shows UID, GID, group memberships
āœ… uname -a → Shows full system information
āœ… df -h    → Shows disk usage in readable format
āœ… ps aux   → Shows all running processes
āœ… top      → Opens live system monitor (quit with q)
āœ… echo     → Printed text and wrote to a file
āœ… date     → Shows current date/time with formatting
āœ… man ls   → Opened the manual for the ls command

MODULE 2 - File System Navigation
āœ… ls -lh   → Listed files with sizes
āœ… cd       → Navigated between directories
āœ… find     → Located files by name and pattern
āœ… touch    → Created new files
āœ… mkdir -p → Created nested directories
āœ… cp -R    → Copied files and directories
āœ… mv       → Moved/renamed files
āœ… rm -rf   → Deleted files and directories

MODULE 3 - File Content Analysis
āœ… cat      → Displayed file content
āœ… head -3  → Showed first 3 lines
āœ… tail -3  → Showed last 3 lines
āœ… wc -l    → Counted lines
āœ… sort     → Sorted file contents
āœ… uniq -c  → Removed duplicates and counted occurrences
āœ… grep     → Filtered lines matching a pattern
āœ… cut      → Extracted specific columns
āœ… paste    → Merged two files side by side

MODULE 4 - Networking
āœ… hostname → Showed machine name
āœ… ip addr  → Showed network interfaces and IPs
āœ… ping -c4 → Tested network connectivity
āœ… curl     → Fetched URL content
āœ… wget     → Downloaded a file

MODULE 5 - Archiving
āœ… zip -r   → Compressed a directory to .zip
āœ… unzip    → Extracted a .zip archive
āœ… tar -czf → Created compressed .tar.gz archive
āœ… tar -tvf → Listed archive contents
āœ… tar -xzf → Extracted .tar.gz archive

MODULE 6 - Capstone
āœ… Combined commands into a real sysadmin workflow
āœ… Generated a health report
āœ… Filtered and analyzed log data
āœ… Created a timestamped archive of reports

Quick Reference Cheat Sheet

SYSTEM INFO            FILE OPERATIONS        TEXT ANALYSIS
whoami                 ls -lh                 cat file.txt
id                     cd /path               head -3 file.txt
uname -a               pwd                    tail -f file.txt
df -h                  find ~ -name "*.txt"   wc -l file.txt
ps aux                 touch file.txt         sort file.txt
top                    mkdir -p dir/sub       sort | uniq -c
echo "text"            cp -R src/ dst/        grep "pattern" file
date +"%Y-%m-%d"       mv old new             cut -d',' -f1 file
man command            rm -rf dir/            paste file1 file2

NETWORKING             ARCHIVING
hostname               zip -r out.zip dir/
ip addr show           unzip out.zip
ping -c4 google.com    tar -czf out.tar.gz dir/
curl -I url            tar -tvf out.tar.gz
wget -O file url       tar -xzf out.tar.gz -C /dest/

Key Takeaways

āœ… System commands (whoami, df, top, ps) give you instant situational awareness on any Linux machine — memorize these first.

āœ… Text processing tools (grep, cut, sort, uniq) are the foundation of log analysis and automation — combining them with pipes | multiplies their power dramatically.
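For instance, one pipeline can filter, extract, sort, and count in a single line. The sample log is created in /tmp so the sketch runs anywhere.

```shell
# Four tools, one pipe chain: grep filters, cut extracts, sort groups, uniq counts.
printf 'ERROR: Connection failed at 08:00\nINFO: Server started at 09:00\nERROR: Connection failed at 08:00\n' > /tmp/pipeline-demo.log
grep "ERROR" /tmp/pipeline-demo.log | cut -d' ' -f5 | sort | uniq -c
# prints the duplicated timestamp with its count: "2 08:00" (count is left-padded)
```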

āœ… Networking commands (ping, curl, wget, ip) are your first tools when diagnosing connectivity problems in cloud or on-premise environments.

āœ… Archiving (tar -czf, zip) is a daily task for every sysadmin — log rotation, config backups, deployment packages, and data transfers all rely on it.

āœ… Always use man for any command you’re unsure about — it’s faster than Googling and works offline.


Next Steps in Your Learning Journey

Now that you’ve completed this lab, here’s what to tackle next:

Week 1: Practice these commands daily by exploring your VM’s /etc and /var/log directories

Week 2: Learn file permissions (chmod, chown) and user management (useradd, passwd)

Week 3: Begin bash scripting — automate the health report from Module 6 as a scheduled cron job
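As a preview of that cron step, a crontab entry looks like the sketch below; the script path is an example, and you would add the line with `crontab -e`.

```shell
# Example crontab line: run a health-report script daily at 07:00.
# Fields: minute hour day-of-month month day-of-week command
0 7 * * * /home/admin/linux-lab/health-report.sh >> /home/admin/linux-lab/health.log 2>&1
```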

Month 2: Set up a LAMP stack (Linux + Apache + MySQL + PHP) using the commands you’ve learned here

šŸ“Œ Share Your Lab: Did you complete the capstone project? Post a screenshot of your final archive on LinkedIn or drop your questions in the comments below. Every command you practice today brings you one step closer to your first Linux admin role.

Arbaz

I’m a dedicated IT support and cloud engineering enthusiast with 3+ years of experience, passionate about solving problems, continuous learning, and creating innovative tech solutions.