Linux System Administration: Permissions, Partitions, and Security

This resource is a guide to Linux system administration, covering essential topics for the Linux+ certification. It thoroughly explains system architecture, including the file system hierarchy and key directories, as well as boot processes. The text also discusses partition management, detailing file system types, mount points, and commands for manipulating disk partitions. User and group management, file permissions, and special permissions are all explained. Finally, the document explains monitoring system performance and managing processes via command line tools.

Linux Study Guide

Quiz

Instructions: Answer each question in 2-3 sentences.

  1. What is the role of Linus Torvalds in the Linux operating system, and what title was given to him?
  2. Explain the difference between the Linux kernel and a Linux distribution (distro). Provide two examples of popular Linux distros.
  3. Name three common uses for Linux operating systems, beyond desktop computing.
  4. Describe the purpose of the /bin and /sbin directories. What is the key distinction between them regarding user access?
  5. What is the purpose of /dev/null and /dev/zero?
  6. Briefly explain the function of journaling in file systems like EXT4 and XFS.
  7. What are the key differences between the FAT32 and NTFS file systems? What are the common use cases of each file system?
  8. Explain the roles of the kernel, the shell, and the user space in the Linux system architecture.
  9. What is the purpose of the /etc/fstab file?
  10. Explain the difference between a hard link and a symbolic link (soft link) in Linux.

Quiz Answer Key

  1. Linus Torvalds is the creator of the Linux kernel and maintains ultimate authority over its development. He is known as the “benevolent dictator of Planet Linux,” as his approval is needed for incorporating code into the kernel.
  2. The Linux kernel is the core of the operating system, while a Linux distribution (distro) is a complete OS that bundles the kernel with other software, utilities, and a desktop environment. Ubuntu and Kali Linux are two examples of popular Linux distributions.
  3. Linux operating systems are used in servers (web, database), embedded systems (IoT devices, routers), and cloud computing platforms (AWS, Google Cloud).
  4. The /bin directory contains essential binary executables accessible to all users, while the /sbin directory holds system binaries used for administration, which typically require root privileges.
  5. /dev/null is a “black hole” that discards any data written to it, often used to suppress output. /dev/zero produces an infinite stream of null characters, useful for initializing storage or testing memory.
  6. Journaling is a feature that records file system changes in a journal before they are applied. This improves data integrity and recoverability in case of a system crash or power failure.
  7. FAT32 is an older file system with a 4 GB maximum file size, primarily used for USB drives because of its broad compatibility. NTFS is the modern file system used by Windows, offering security features, compression, and support for large files, though other operating systems typically need additional drivers for full access.
  8. The kernel manages system resources and communication between hardware and software. The shell is a command-line interpreter allowing users to interact with the kernel. The user space is where user-level applications execute, isolated from the kernel for stability and security.
  9. The /etc/fstab file contains a list of file systems that should be automatically mounted at boot time, along with their mount points and options.
  10. A hard link is an additional directory entry that points to the same underlying data (inode) as the original file, so the data remains until every link to it is deleted. A symbolic link (soft link) is a separate file that stores the path to another file, acting as a shortcut.

Essay Questions

  1. Discuss the advantages and disadvantages of using Linux in an enterprise environment compared to other operating systems. Consider factors like cost, stability, security, and the availability of support.
  2. Explain the importance of file permissions in Linux. How do traditional file permissions (user, group, other) and special permissions (SUID, SGID, sticky bit) contribute to system security and access control?
  3. Compare and contrast the systemd and SysVinit initialization systems. What are the key benefits of systemd over SysVinit, and why has it become the standard in modern Linux distributions?
  4. Describe the process of partitioning a hard drive in Linux. What are the differences between primary, extended, and logical partitions, and how are they used to organize a file system?
  5. Explain how shell scripting can be used to automate system administration tasks in Linux. Provide examples of common scripting tasks and discuss the advantages of using scripts over manual commands.

Glossary of Key Terms

  • Binaries: Executable files containing compiled code.
  • BIOS (Basic Input/Output System): Firmware used to perform hardware initialization during the booting process on older systems.
  • Bootloader: Software that loads the operating system kernel.
  • CLI (Command Line Interface): A text-based interface for interacting with the operating system.
  • Daemon: A background process that runs without direct user interaction.
  • Dev (/dev): Directory containing device files, providing a virtual interface to hardware.
  • Distro (Distribution): A complete Linux operating system that includes the kernel and additional software.
  • EXT4 (Fourth Extended Filesystem): A journaling file system commonly used in Linux.
  • FHS (Filesystem Hierarchy Standard): A standard that defines the directory structure and contents in Linux.
  • File System: A method of organizing and storing files on a storage device.
  • GPT (GUID Partition Table): A partitioning scheme that supports larger disk sizes and more partitions than MBR.
  • GNU: A free, open-source, Unix-like operating system project; its utilities combined with the Linux kernel form GNU/Linux.
  • GUI (Graphical User Interface): A visual interface for interacting with the operating system.
  • Kernel: The core of the Linux operating system.
  • Linux: An open-source operating system kernel.
  • Mount Point: A directory where a file system is attached to the directory tree.
  • MBR (Master Boot Record): A traditional partitioning scheme limited to four primary partitions per disk.
  • NAS (Network Attached Storage): A storage server attached to a network that lets multiple users and clients access the same data.
  • NTFS (New Technology File System): The file system used by modern Windows operating systems.
  • Open Source: Software with source code that is freely available and can be modified and distributed.
  • Partition: A section of a hard drive or other storage device.
  • Process: An instance of a program that is running in memory.
  • Root: The top-level directory in the Linux file system, represented by /.
  • Sbin (/sbin): System binaries; executables used for system administration, typically requiring root privileges.
  • Shell: A command-line interpreter that allows users to interact with the kernel.
  • SUID (Set User ID): A special permission that allows a program to be executed with the privileges of its owner.
  • Swap Space: Disk space used as virtual memory when RAM is full.
  • Symbolic Link (Soft Link): A file that stores the path to another file acting as a shortcut.
  • Systemd: A system and service manager for Linux.
  • UEFI (Unified Extensible Firmware Interface): A modern firmware interface used to initialize hardware during the booting process.
  • User Space: The environment where user-level applications execute, isolated from the kernel.
  • XFS: A high-performance journaling file system commonly used in enterprise environments.

Linux Fundamentals and System Administration Training

Briefing Document: Linux Fundamentals and Administration

Overview:

This document summarizes a comprehensive training on Linux fundamentals and system administration. The training covers a wide range of topics, from the history of Linux and its various distributions to file system management, process management, user authentication, and scripting. The primary focus is on equipping users with the knowledge and practical skills necessary to effectively navigate, manage, and secure Linux systems. The training utilizes a lecture format coupled with practical command-line exercises. Access to supplemental materials, including a detailed Google Document and an extensive slide presentation, is offered as part of a membership program.

Main Themes & Key Ideas:

  1. Linux History and Distributions:
  • Linux was created as a free and open-source alternative to Minix/Unix, spearheaded by Linus Torvalds and Richard Stallman. “the goals for this were to make it a free, open-source alternative to Minix, which was based on Unix, and that’s the guy, Linus Torvalds; he’s still around and he’s still the father of Linux.”
  • Linux Distributions (Distros) bundle the Linux kernel with other software, each tailored for specific purposes.
  • Popular Distros include:
  • Ubuntu: User-friendly, for general users and desktop environments. “Ubuntu, which is the most commonly used; it’s popular for general users, beginners, and people who want to use Linux in the desktop environment.”
  • CentOS/Red Hat Enterprise Linux (RHEL): Stable, supported, for enterprise environments. “CentOS, AKA Red Hat Enterprise Linux or RHEL, and these things are used in enterprise environments, so they’re mainly for stability and support.”
  • Debian: Servers and advanced users, often command-line focused. “Debian is known for servers and advanced users, mostly because of the fact that you’re not going to get a graphical user interface.”
  • Fedora: Cutting-edge, for developers. “Fedora, which is great for developers; it’s more cutting edge and it has a lot of innovations and utilities that are pre-installed on it.”
  • Kali Linux: Cyber security and ethical hacking. “Kali Linux, and it’s for cyber security and ethical hackers, which is us, everybody that comes on this channel.”
  • Linux has common uses in servers, embedded systems/IoT devices, software development, cyber security, cloud, and data centers. “servers, embedded systems and IoT devices; IoT devices are Internet of Things.”
  2. File System Hierarchy and Navigation:
  • The File System Hierarchy Standard (FHS) defines a consistent directory structure across Linux distributions. Key directories include / (root), /bin (binaries), /sbin (system binaries), /etc (configuration files), /home (user directories), /var (variable data), /tmp (temporary files), and /dev (device files). “the home of everything is the root, and it’s represented by this singular forward slash.”
  • The /bin directory contains essential binary executables, accessible to all users. “the bin, or the binaries directory, contains essential binary executables that are needed during the boot process or in single-user mode.”
  • The /sbin directory contains binary executables for system administration, requiring root privileges. “the sbin is known as the system binaries folder, and these are the binary executables that are used for system administration; it typically requires root privileges to execute these.”
  • The /dev directory provides a virtual interface to physical devices. “the dev folder [provides] important device abstraction; by treating devices as files, Linux provides a consistent interface for interacting with various hardware devices.”
  • Common commands for navigation: pwd (print working directory), ls (list files), cd (change directory).
  • Commands for file manipulation: cp (copy), mv (move/rename), rm (remove), mkdir (make directory), rmdir (remove directory).
  3. File Systems:
  • ext4: A widely used file system balancing performance and reliability. “ext4, which is balanced performance [and] reliability, and is known to be compatible with all different versions of the OS.”
  • XFS: Designed for performance and scalability, popular in enterprise environments and large databases. “the XFS file system, it’s for performance and scalability; it’s a very popular choice in enterprise environments.”
  • NTFS: Used in modern Windows operating systems, requires consideration for cross-platform compatibility. “NTFS, the New Technology File System, is actually a file system that’s been used in modern Windows operating systems; it’s reliable, it’s secure, performs well.”
  • FAT32: Commonly used for USB flash drives and external hard drives (max 2 TB partitions), compatible with various OSes. “FAT32 for the most part is being used by one individual person, for like a home computer or something like that, or maybe a USB drive or an external hard drive that is maxed out at 2 terabytes.”
  • Formatting tools: mkfs (make file system) can format partitions with different file systems; because formatting wipes existing data, back it up first. “mkfs is used to format partitions with different file systems.”
  4. System Architecture: Kernel, Shell, User Space
  • Kernel: The core component, acting as the bridge between hardware and software. It manages system resources. “the kernel, it’s the core component of a Linux-based operating system; it’s basically the bridge between the hardware and the software layers.”
  • Shell: A command-line interpreter that allows users to interact with the kernel. “the shell is a command-line interpreter that facilitates communication with the kernel.”
  • User Space: The environment where user-level applications execute, separated from the kernel for stability and security. “user space is the environment where the user-level applications actually execute.”
  5. Boot Process:
  • BIOS/UEFI: Initializes hardware and passes control to the bootloader.
  • Bootloader (e.g., GRUB): Loads the operating system kernel.
  • Init System (e.g., systemd, SysVinit): Starts system services and processes. “init is actually the traditional initialization system that was used in Linux distros to start system services and processes during boot.”
  • systemd is the current init system on most distributions; it is faster, more flexible, and more feature-rich than SysVinit.
  6. Package Management and Installation:
  • Selecting a Linux distribution depends on the task at hand: there are desktop-oriented, server-oriented, and security-oriented distributions, among others.
  • The process of flashing an ISO image to a USB drive using tools like Etcher is covered.
  • The presenter demonstrates how to identify the device name assigned to the USB drive before flashing the ISO image.
  7. Partitions and Mounting:
  • Understanding the concept of primary, extended, and logical partitions. Partitioning divides a disk into isolated containers that hold file systems and files.
  • Mounting file systems: Attaching a file system to a directory to make it accessible. “when you mount something, you’re attaching the file system to a directory so that it’s accessible within the larger directory tree.”
  • /etc/fstab: Configuration file defining file systems to be automatically mounted at boot. “/etc/fstab, a configuration file that defines all the file systems that should be automatically mounted when you start your system.”
  8. Swap Space:
  • Swap space acts as virtual memory when physical RAM is exhausted, preventing out-of-memory errors. “swap [is] a buffer to prevent out-of-memory errors, meaning that the system can’t run because the physical memory has been maxed out.”
  • Creating swap partitions or swap files is covered.
  9. Process and Service Management:
  • The relationship between processes, daemons, and services.
  • Daemons are background processes.
  • Services group daemons to provide specific functionality. “services [are] the higher-level concept, meaning that they group one or more daemons together, and that provides a specific functionality.”
  • Managing services using systemctl (enable, disable, start, stop, status).
  10. User Authentication and Permissions:
  • /etc/passwd: Stores user account information (usernames, user IDs, etc.). “the /etc/passwd file stores all of the usernames, so the user accounts themselves; there are no password hashes or any plain text [passwords] or anything like that inside of this file.”
  • /etc/shadow: Stores password hashes (sensitive file, access must be restricted).
  • File permissions (read, write, execute) for owner, group, and others, abbreviated r, w, and x. “permissions are designed [so that] they go by read, write, and execute, and they have abbreviations for them: r would be for read, w would be for write, and then x would be to execute.”
  • Special permissions: SUID, SGID, sticky bit. “sticky bit on a directory only the file owner or directory owner can delete or modify the files within it regardless of the group or other right permissions or anything else”
  • sudo and user authentication.
  11. Shell Scripting:
  • Shell scripts are text files containing a series of commands. “shell scripting is an extension of the shell’s interactions and commands that essentially allows you to create a document [and] put a bunch of commands inside of it.”
  • Used for automation and repetitive tasks.
  • Key concepts: Shebang line (#!/bin/bash), comments (#), variables, conditional statements (if, else), loops (for, while).

Quotes of Importance:

  • “Torvalds posted the source code for free on the web, inviting other programmers to improve it, making Linux a collaborative project and the foundation for open-source software; and to this day it is still open source, meaning you can get access to it for free and you can even make modifications to it.”
  • “without that guy’s permission you cannot make any incorporations to the Linux kernel, or at least not publicly shared; you could probably make the modifications yourself, but you won’t be able to make it [into] one of the distros that are available to everybody else.”
  • “every time that you request a video to be loaded from YouTube you’re requesting a packet of data to be delivered to you and that’s done through your network”

Recommendations:

  • Reinforce learning with practical exercises and command-line practice.
  • Explore different Linux distributions to understand their specific strengths and use cases.
  • Prioritize understanding file permissions and security concepts to maintain a secure Linux environment.
  • Utilize scripting to automate common tasks and improve efficiency.

This briefing document provides a comprehensive overview of the Linux training material. By focusing on the main themes and key ideas, it enables individuals to quickly grasp the essential concepts and apply them effectively in real-world scenarios.

Linux Essentials: Concepts and Architecture

1. What is Linux and who are some key figures in its development?

Linux is a free and open-source operating system kernel. Linus Torvalds created it as an alternative to Minix, drawing inspiration from Unix. Richard Stallman and the Free Software Foundation contributed the GNU utilities to the Linux kernel, creating GNU/Linux, the modern version of Linux. Torvalds remains the ultimate authority on the Linux kernel, often referred to as the “benevolent dictator” of Planet Linux.

2. What are Linux distributions (distros) and what are some popular examples?

Linux distributions, or distros, are different versions of the Linux OS that bundle the Linux kernel with other software. Popular examples include Ubuntu (user-friendly, for general users), CentOS/Red Hat Enterprise Linux (stable, for enterprise environments), Debian (for servers and advanced users), Fedora (for developers, cutting-edge), and Kali Linux (for cyber security and ethical hacking).

3. What are some common uses of Linux?

Linux is used in a variety of environments, including servers, embedded systems/IoT devices, software development, cyber security, cloud computing (AWS, Google Cloud), and data centers. Its versatility and open-source nature make it suitable for a wide range of applications.

4. What are some important directories in the Linux file system hierarchy?

Some key directories include:

  • / (root): The top-level directory, the home of everything.
  • /bin (binaries): Essential binary executables needed during the boot process or in single-user mode, accessible to all users.
  • /sbin (system binaries): Binary executables for system administration, requiring root privileges.
  • /dev (devices): Represents device files that provide an interface to hardware devices.
  • /home: Contains personal directories for individual users.

5. What is the /dev directory and why is it important?

The /dev directory contains device files, which provide an interface for interacting with hardware devices as if they were files. This allows for consistent interaction with various hardware through commands. When you connect a USB drive, for example, a file or folder corresponding to that device appears in /dev, allowing command-line interaction.

6. How do I determine file system type?

To determine the file system type, the command lsblk -f will display the file system type along with the UUID and label for each block device.
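
As a quick sketch, a few commands that report file system types (the device name sda1 and the /home mount point are illustrative and will differ per system):

    # List block devices with file system type, label, UUID, and mount point
    lsblk -f

    # blkid prints the same attributes for a specific device
    sudo blkid /dev/sda1

    # For an already-mounted file system, df -T shows the type per mount point
    df -T /home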

7. What is shell scripting and why is it useful?

Shell scripting involves creating text files containing a series of commands that can be executed sequentially. It’s a powerful tool for automation, allowing users to automate repetitive tasks. Shell scripts start with a shebang line (#!) indicating the interpreter to use. The rest of the script contains commands, often with comments (lines starting with #) explaining the code. Control flow structures (if/else, while loops) and variables are essential components of shell scripts.
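
As a minimal sketch of these elements, the hypothetical script below (backup.sh is an illustrative name and the paths are arbitrary) combines a shebang, comments, variables, a conditional, and a loop:

    #!/bin/bash
    # backup.sh - copy each .conf file in a source directory to a backup directory

    SRC="/etc"                 # variable holding the source directory
    DEST="$HOME/conf-backup"   # variable holding the destination directory

    # Conditional: create the destination directory if it does not exist
    if [ ! -d "$DEST" ]; then
        mkdir -p "$DEST"
    fi

    # Loop: copy every .conf file found directly under $SRC
    for file in "$SRC"/*.conf; do
        cp "$file" "$DEST/"
    done

    echo "Backup of $SRC complete: $(ls "$DEST" | wc -l) files copied."

After saving the file, it can be made executable with chmod +x backup.sh and run as ./backup.sh, or run directly with bash backup.sh.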

8. What are the three key components of the Linux system architecture?

The Linux system architecture consists of three main components:

  • Kernel: The core component that acts as the bridge between hardware and software, managing system resources.
  • Shell: A command-line interpreter that allows users to interact with the kernel and execute commands.
  • User Space: The environment where user-level applications execute, separate from the kernel for stability and security.

Linux System Administration Essentials

Linux administration involves several key aspects, including system architecture, installation, package management, file management, user and group management, process and service management, and job scheduling.

System Architecture and Boot Process

  • Understanding the Linux system architecture is crucial, including the kernel, shell, user space, basic input/output system (BIOS), Unified Extensible Firmware Interface (UEFI), GRand Unified Bootloader (GRUB), and init system.
  • The file system hierarchy standard (FHS) defines the directory structure, ensuring consistency across distributions. Key directories include /, /bin, /sbin, /etc, /home, /var, and /tmp.
  • The kernel is the core, bridging hardware and software by managing resources.
  • The shell is a command-line interpreter for interacting with the kernel.
  • The boot process involves BIOS/UEFI initializing hardware, the bootloader loading the OS, and the init system starting system services.

Installation and Package Management

  • Installing Linux involves choosing a distribution based on specific needs, such as Ubuntu, CentOS, Debian, Fedora, or Kali Linux.
  • The installation process includes downloading an ISO image, creating bootable media, and configuring partitions.
  • Package managers are used to install, update, remove, and troubleshoot software packages.
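
The exact commands depend on the distribution's package manager. As a sketch, the basic operations on Debian/Ubuntu-family systems (APT) and Red Hat-family systems (DNF) look like this (nginx is just an example package):

    # Debian/Ubuntu (APT)
    sudo apt update              # refresh the package index
    sudo apt install nginx       # install a package
    sudo apt upgrade             # upgrade installed packages
    sudo apt remove nginx        # remove a package

    # Red Hat/Fedora (DNF)
    sudo dnf check-update        # check for available updates
    sudo dnf install nginx       # install a package
    sudo dnf remove nginx        # remove a package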

File Management

  • File management includes creating partitions and understanding primary, extended, and logical partitions.
  • File systems like ext4 and XFS are also important, as well as understanding mount points.
  • Commands such as mkfs (make file system) are needed for formatting partitions.

User and Group Management

  • User management involves creating, modifying, and deleting user accounts. Commands such as useradd, usermod, and userdel are used for these tasks.
  • Group management involves creating and managing groups to organize users. Commands such as groupadd and groupdel are used to manage groups.

Process and Service Management

  • Process management involves understanding daemons, services, and process management commands.
  • Important commands include ps for viewing processes and systemctl for managing services.
  • Job scheduling can be achieved using cron jobs for recurring tasks and the at command for one-time tasks.
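
A few representative commands, as a sketch (the nginx service and the backup.sh script path are illustrative, and the at utility may need to be installed separately):

    # View running processes
    ps aux | grep nginx

    # Check a service with systemd
    sudo systemctl status nginx

    # Recurring job: run a backup script every day at 02:30 (added via crontab -e)
    # 30 2 * * * /usr/local/bin/backup.sh

    # One-time job: run a command at 22:00 today using at
    echo "/usr/local/bin/backup.sh" | at 22:00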

Linux File System Administration: Types, FHS, and Management

File systems are a critical component of Linux administration, involving how data is stored, accessed, and managed. Key aspects include file system types, the file system hierarchy standard (FHS), mount points, and various management tools.

File System Hierarchy Standard (FHS)

  • The FHS defines the directory structure in Linux, ensuring consistency across different distributions.
  • Key directories include:
  • / (root): Top-level directory for the entire system. All other directories are extensions of the root.
  • /bin: Contains essential user commands.
  • /sbin: System binaries, typically for administration, requiring root privileges.
  • /etc: Houses configuration files for services and applications. It is known as the control center.
  • /home: Houses user directories.
  • /var: Stores variable data such as system logs.
  • /tmp: Houses temporary data, often wiped on reboot.

File System Types

  • ext4: The fourth extended file system is a journaling file system and the default in many Linux distributions. It supports large files and is reliable.
  • XFS: A high-performance journaling file system often used in enterprise environments for its scalability and reliability. It is optimized for sequential read and write operations.
  • NTFS: (New Technology File System) Used in Windows, it’s included because of the need for interoperability between Linux and Windows systems.
  • FAT32: Known for its simplicity and broad compatibility; though older, it is still used due to its cross-platform compatibility. It has a limited file size of 4GB and a partition size limit of 2TB.

Mount Points

  • Mounting attaches a file system to a directory, making it accessible within the larger directory tree. The mount command is used for this purpose.
  • Automatic mounting is configured in the /etc/fstab file, which defines file systems to be automatically mounted at boot. Each line in /etc/fstab represents a file system and its mount options.
  • Unmounting a file system is done using the umount command, preventing data loss or corruption.
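
As a sketch, mounting a partition manually and then making it persistent might look like the following (the device name /dev/sdb1 and the mount point /mnt/data are illustrative):

    # Create a mount point and attach the file system to it
    sudo mkdir -p /mnt/data
    sudo mount /dev/sdb1 /mnt/data

    # Detach it again when finished
    sudo umount /mnt/data

    # Example /etc/fstab line for automatic mounting at boot:
    # <device>    <mount point>  <type>  <options>   <dump> <pass>
    # /dev/sdb1   /mnt/data      ext4    defaults    0      2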

Partitioning

  • Partitions are logical divisions of a physical storage device, allowing the operating system to manage data in isolated areas.
  • Common primary partitions include the root partition, boot partition, home partition, and swap partition. Disks using the traditional MBR scheme are limited to a maximum of four primary partitions.
  • Extended partitions can contain multiple logical partitions.

Tools and Commands for File System Management

  • mkfs (make file system): Used to format a partition with a specified file system type.
  • lsblk: Lists block devices (disks and partitions) in a tree-like format.
  • fdisk: A command-line utility for creating, modifying, and deleting partitions.
  • fsck (file system check): A utility for checking and repairing file system consistency.
  • parted: A versatile command-line tool supporting both MBR and GPT partition schemes, ideal for resizing and modifying partitions.
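
A typical workflow combining these tools might look like the following sketch (/dev/sdb and /dev/sdb1 are placeholders; formatting destroys existing data, so back up first):

    # Inspect existing disks and partitions
    lsblk

    # Create or modify partitions interactively on the disk
    sudo fdisk /dev/sdb

    # Format the new partition with an ext4 file system
    sudo mkfs.ext4 /dev/sdb1

    # Check the file system for consistency (run while it is unmounted)
    sudo fsck /dev/sdb1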

Swap Space

  • Swap space is virtual memory on the hard drive, used when physical RAM is exhausted. It can be a partition or a file.
  • Commands like mkswap (make swap) initialize a partition for use as swap, while swapon and swapoff activate and deactivate swap spaces, respectively.
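
As a sketch, creating and enabling a 2 GB swap file (the /swapfile path and the size are arbitrary choices):

    # Allocate a 2 GB file and restrict its permissions
    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile

    # Initialize it as swap and activate it
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # Verify, then deactivate when no longer needed
    swapon --show
    sudo swapoff /swapfile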

Linux Permissions Management: A Concise Guide

Permissions management in Linux is a fundamental aspect of system administration, focused on controlling access to files and directories. It ensures that only authorized users can read, write, or execute files, maintaining system security and data integrity. The key components of permissions management include permission models, commands for modifying permissions, and special permissions.

Permission Models

  • Levels of Access: There are three levels of access: owner, group, and others.
  • Owner: Typically the user who created the file or directory, possessing the highest level of control.
  • Group: A collection of users who are assigned specific permissions.
  • Others: All users who are neither the owner nor members of the group.
  • Types of Permissions: Permissions are categorized into read, write, and execute.
  • Read (r): Allows viewing the contents of a file or listing the contents of a directory.
  • Write (w): Permits modifying or deleting a file or directory.
  • Execute (x): Enables running a program or script, or accessing a directory.

File Permission Representation

  • When viewing file permissions, the output typically looks like this: -rwxr-xr--.
  • The first character indicates the file type: - for a regular file, d for a directory.
  • The next three characters represent the owner’s permissions.
  • The following three characters represent the group’s permissions.
  • The last three characters represent the permissions for others.
  • Example: -rwxr-xr-- indicates a regular file. The owner has read, write, and execute permissions; the group has read and execute permissions; and others have only read permissions.

Commands for Modifying Permissions

  • chmod (change mode): Used to change the permissions of a file or directory.
  • Symbolic Method: Uses symbols to add or remove permissions. For example, chmod u+x file adds execute permission to the owner of the file.
  • Numeric (Octal) Method: Uses numeric values to set permissions. Each permission has a value: read=4, write=2, execute=1. The sum of these values represents the permissions. For example, chmod 755 file gives the owner read, write, and execute permissions (4+2+1=7), and the group and others read and execute permissions (4+1=5).
  • chown (change owner): Used to change the owner of a file or directory.
  • chgrp (change group): Used to change the group associated with a file or directory.
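
A few examples of both methods, as a sketch (script.sh, alice, and developers are illustrative names):

    # Symbolic method: add execute for the owner, remove write for others
    chmod u+x script.sh
    chmod o-w script.sh

    # Numeric (octal) method: rwx for the owner, r-x for group and others (7,5,5)
    chmod 755 script.sh

    # Change the owner, the group, or both
    sudo chown alice script.sh
    sudo chgrp developers script.sh
    sudo chown alice:developers script.sh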

Special Permissions

  • SUID (Set User ID): When set on an executable file, it allows the file to be executed with the privileges of the owner, not the user running it. It is represented by an “s” in the owner’s execute permission slot.
  • SGID (Set Group ID): Similar to SUID, but it applies to the group. When set on a directory, any files created within that directory inherit the group ownership of the directory.
  • Sticky Bit: When set on a directory, only a file’s owner, the directory’s owner, or root can delete or rename files within it, regardless of group or other write permissions on the directory.
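
These bits can be set with chmod, either symbolically or with a fourth leading octal digit (SUID=4, SGID=2, sticky=1). The file and directory names below are illustrative:

    # SUID on an executable (symbolic and octal forms)
    sudo chmod u+s /usr/local/bin/myprog
    sudo chmod 4755 /usr/local/bin/myprog

    # SGID on a shared directory so new files inherit its group
    sudo chmod g+s /srv/shared
    sudo chmod 2775 /srv/shared

    # Sticky bit on a world-writable directory (as used on /tmp)
    sudo chmod +t /srv/dropbox
    sudo chmod 1777 /srv/dropbox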

User Authentication

  • /etc/passwd: Stores user account information, including usernames, but no password hashes.
  • /etc/shadow: Stores encrypted password hashes and other security information. Should only be readable by the root user.

Managing Sudo Permissions

  • Granting users the ability to execute commands with root privileges via sudo is a critical aspect of system administration.
  • This involves adding users to the sudo group and configuring the /etc/sudoers file to define command restrictions.
  • The visudo command is used to safely edit the /etc/sudoers file, checking for syntax errors before saving.
  • Within the /etc/sudoers file, you can specify which commands a user can run with sudo. For example: username ALL=(ALL:ALL) /path/to/command. This setup ensures users have the necessary permissions to perform administrative tasks while maintaining security.
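
A minimal sketch, assuming a Debian/Ubuntu-style sudo group (RHEL-family systems use wheel instead) and an illustrative user and command:

    # Add the user to the sudo group
    sudo usermod -aG sudo alice

    # Safely edit /etc/sudoers (visudo checks syntax before saving)
    sudo visudo

    # Example sudoers entry: allow alice to restart one service without a password
    # alice ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx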

Linux User Management: A System Administration Guide

User management in Linux is a critical aspect of system administration. It involves creating, modifying, and deleting user accounts to control access to the system and its resources. Effective user management ensures system security and integrity by granting appropriate permissions and privileges to different users.

Key aspects of user management include:

  • Creating Users: User accounts can be created using the useradd or adduser command, depending on the Linux distribution.
  • The basic syntax is sudo useradd username or sudo adduser username.
  • Additional options can be used to specify the home directory, login shell, and supplementary groups for the user.
  • For example, sudo useradd -m -d /data/users/alice -s /bin/bash -G developers,admins alice creates a user named “alice”, assigns the home directory to /data/users/alice, sets the login shell to /bin/bash, and adds the user to the “developers” and “admins” groups.
  • Setting Passwords: After creating a user, a password must be set using the passwd command.
  • The syntax is sudo passwd username.
  • It is common practice to expire the initial password, forcing the user to set a new password upon their first login. This can be done with sudo passwd -e username (or chage -d 0 username) immediately after the account is created.
  • Modifying Users: Existing user accounts can be modified using the usermod command.
  • This command can change the username, lock or unlock an account, and modify group memberships.
  • For example, sudo usermod -l newusername oldusername changes the username from “oldusername” to “newusername”.
  • The -L option locks an account, and the -U option unlocks it.
  • Deleting Users: User accounts can be deleted using the userdel command.
  • The syntax is sudo userdel username.
  • The -r option removes the user’s home directory and all its contents. For example: sudo userdel -r username.
  • Groups:
  • Groups are managed using commands such as groupadd to create groups and groupdel to delete them.
  • A primary group is the main group associated with a user. When a user creates a file, the group ownership of that file is set to the user’s primary group.
  • Supplementary groups are additional groups that a user is a member of, granting them access to resources associated with those groups.
  • A user can be added to a supplementary group using the usermod command with the -aG option. For example: sudo usermod -aG groupname username.
  • File Permissions and Ownership:
  • Every file and directory in Linux has associated permissions that determine who can read, write, or execute the file.
  • Permissions are defined for the owner, the group, and others.
  • The chmod command is used to modify permissions, chown to change the owner, and chgrp to change the group.
  • Special Permissions:
  • Special permissions like SUID, SGID, and the sticky bit can be set to modify how files are executed or accessed.
  • SUID allows a file to be executed with the privileges of the owner.
  • SGID, when set on a directory, causes new files and subdirectories to inherit the group ownership of the parent directory.
  • The sticky bit, when set on a directory, restricts deletion and renaming of files within it to the file’s owner, the directory’s owner, and the root user.
  • User Authentication Files:
  • The /etc/passwd file stores basic user account information, such as usernames, user IDs, group IDs, home directories, and login shells. However, it does not store password hashes.
  • The /etc/shadow file stores encrypted password hashes and other password-related information. It should be readable only by the root user.
  • Managing Sudo Permissions:
  • As covered in the permissions guide above, this involves adding users to the sudo group and defining command restrictions in the /etc/sudoers file, which should be edited safely with the visudo command.
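
The commands above can be combined into a short provisioning sketch (the user, group, and shell names are illustrative):

    # Create a group, then create the user with a home directory, shell, and supplementary group
    sudo groupadd developers
    sudo useradd -m -s /bin/bash -G developers alice

    # Set an initial password and expire it so it must be changed at first login
    sudo passwd alice
    sudo passwd -e alice

    # Later: add to another group (assumes the admins group already exists)
    sudo usermod -aG admins alice

    # Remove the account and its home directory when no longer needed
    sudo userdel -r alice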

Linux System Boot Process: BIOS, Bootloader, and Initialization

The system boot process in Linux involves several stages, starting from the hardware initialization to the loading of the operating system. Key components and processes include BIOS/UEFI, bootloaders, and initialization systems.

  • BIOS/UEFI:
  • BIOS (Basic Input/Output System) is a firmware used in older systems to initialize hardware and pass control to the bootloader.
  • BIOS initializes the hardware and conducts a Power-On Self-Test (POST) to check hardware.
  • It issues beeps to indicate test outcomes and initializes essential hardware like the keyboard, mouse, and disk drives.
  • UEFI (Unified Extensible Firmware Interface) is a modern interface replacing BIOS, offering a more flexible, faster, and secure boot process.
  • UEFI can boot systems faster and supports a user-friendly graphical interface.
  • It enhances security with features like secure boot, protecting the system from malicious software attacks, and supports larger disk drives.
  • UEFI also allows booting from network resources, facilitating remote system deployment and management.
  • Both BIOS and UEFI perform POST, execute the bootloader, and initiate the operating system boot.
  • Bootloader (GRUB):
  • The bootloader acts as an intermediary between the BIOS/UEFI and the operating system.
  • GRUB (Grand Unified Bootloader) is commonly used in Linux distributions.
  • It takes control of the system after BIOS/UEFI and completes the boot process.
  • GRUB loads the kernel and initial RAM disk into memory and transfers control to the kernel.
  • It presents a boot menu, allowing users to choose an operating system in dual-boot or multi-boot systems.
  • GRUB supports multiple OSs, customization of the boot menu, secure boot options, and advanced features like chain loading and network booting.
  • Initialization Systems (SysVinit, systemd):
  • After the bootloader, the system uses an initialization system to start system services and processes.
  • SysVinit is a traditional init system using a sequence of scripts to bring up the system, but it can be complex to configure and is relatively slow due to its sequential processing.
  • It runs scripts located in /etc/init.d to start and stop services.
  • SysVinit uses run levels, each representing a specific system state.
  • systemd is a modern init system that is faster, more flexible, and more feature-rich than SysVinit.
  • It manages the lifecycle of system services, ensures services start in the correct order based on dependencies, and logs system events.
  • systemd starts services in parallel, reducing boot time, and manages dependencies automatically.
  • It provides a unified framework for managing system services and includes features like socket activation, journaling, timers, scheduling, and device management.
  • systemd uses boot targets, which are groups of services that should be started or stopped together, allowing efficient management of system behavior under different circumstances.
  • Run Levels and Boot Targets:
  • Run levels (used in SysVinit) represent specific states of the system, with the system transitioning through them during the boot process.
  • Common run levels include halt (0), single-user mode (1), multi-user mode without NFS (2), multi-user mode without GUI (3), full multi-user mode with GUI (5), and reboot (6).
  • Boot targets (used in systemd) replace run levels and represent groups of services to be started or stopped together.
  • Common targets include multi-user.target (default multi-user mode), graphical.target (graphical services), rescue.target (system recovery), and emergency.target (minimal services for system maintenance).
  • Service Management:
  • In systemd, services are managed using systemctl commands.
  • Common commands include sudo systemctl start service, sudo systemctl stop service, sudo systemctl enable service, sudo systemctl disable service, and systemctl status service.
  • In SysVinit, the service command is used with similar options but a slightly different syntax.
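
For comparison, managing the same service under systemd and under SysVinit might look like this (ssh is just an example service name; it may be sshd on some distributions):

    # systemd
    sudo systemctl start ssh
    sudo systemctl enable ssh
    systemctl status ssh

    # SysVinit-style service command (note the reversed argument order)
    sudo service ssh start
    sudo service ssh status
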
Full Linux+ (XK0-005 – 2024) Course Pt. 1 | Linux+ Training

The Original Text

Tesla cars the Google search engine and Gmail the astroe robots from NASA PlayStation and Xbox the trading systems at the New York Stock Exchange as well as many other popular services around the world run on Linux this means that there’s a lot of benefit to you learning Linux and the base certification for Linux is the Comas Linux plus even if you don’t plan on getting your Linux plus certification this video will help you to develop your Linux skill set and being that this channel is all about cyber security and hacking learning Linux will make you a much more capable security pro and a force to be reckoned with that being said here’s the outline of what we’ll be covering in this tutorial this training series is sponsored by hackaholic Anonymous to get the supporting materials for this series like the 900 page slideshow the 200 Page notes document and all of the pre-made shell scripts consider joining the agent tier of hack holic Anonymous you’ll also get monthly python automation exclusive content and direct access to me via Discord join hack alic Anonymous today are a couple of quick notes the format of the training series is going to be half lecture and half practical commands that we actually run inside of Linux on the command line um as you can see over here we have a bunch of content so it goes up to chapter 11 as far as the lecture portion itself is concerned and we’ll be covering essentially everything that you see inside of this notes document this uh Google doc as well as these 800 something slides that I have here for you in the Google Slides presentation so that’s going to be the part about of the lecture of this particular training series there is going to be a section on chapter 3 where we actually go through the installation of Linux and it’s something that I want to go through with you so you actually have a lab environment just in case you don’t have Linux on your computer and uh you don’t have Mac OS and Mac OS can be kind of similar to Linux but I want you to actually practice inside of a Linux environment so we’re going to go through the installation portion of it and that’ll be similar to a practical uh example that I’ll give you so that we can just go Click by click so you can see how to download an ISO file and uh go ahead and install it for yourself and essentially set yourself up to run all of the commands that we’re going to end up running when we actually do get to the Practical portion of this but for the most part it’s going to be broken down into a lecture and then at the very end of it when we get to chapter 12 we’re going to get to all of the commands that we have and then for chapter 12 there’s individual sections as well that has all of the uh the individual commands that we’re going to run that will just break down into multiple sections under chapter 12 so there’s actually 12 subsections inside of chapter 12 that goes everything from getting system information all the way to getting scripting and learning about how to create shell scripts so that’s going to be the outline that we have or the format of the outline that we have over here the second note that I want to give give you is that you can actually get this document this Google document with all of its notes and all of the commands and everything that’s for the lecture as well as this Google Slides presentation and it’s 800 something slides you can get access to both of these things by being a member of our hackaholic synonymous membership community so if you’re actually an agent tier member or 
above you will get access to this as a part of your membership and in my opinion it’s actually very very valuable um the video is obviously available to you for free um most likely it’s going to be broken down to two videos because when we get actually get to the Practical section I think I have to break it down into its own series of videos or at least its own separate video because uh Google Maxes us or Google YouTube Maxes us out at the uh 12-hour Mark so just in case we go over 12 hours for this training series I’m going to break it down into two uh separate videos but you will get access to those videos absolutely for free if you wanted to get the docs uh notes as well as the slides it would be part of the hack Anonymous membership and just to give you an idea of what that looks like as far as the the price comparison of what you can expect if you want to com Tia to try to get their Linux plus uh education just for the exam voucher itself it’s $369 but everything else so for example if you wanted to do the the labs and their education uh videos as well as the notes that they would give you and everything else that would be 1165 and then you have these various bundles as well so you’re looking at at least several hundred uh if you wanted to get it through Linux plus uh from CompTIA or really anything that would be comparable to this particular education uh what I’m giving you in these notes as well as the slideshow is based on the exact outline that they have in the CompTIA Linux plus education so that you can be ensured that you’re getting everything that you need for your examination but you literally get it at a fraction of the price probably less than on10th of the price so that’s really the entire plug that I have over here is that if you actually wanted to get these notes as well as the slides you can get it by being a member of the hackaholic anonymous membership community and the link to that is below in the description now that being said let’s actually jump into this outline so you know what to expect okay so first and foremost the timestamps that are going to be attached for this presentation are going to be in the description below as well as the very first comment that will be pinned to the top of the comment section and I’m going to try to give you as many time stamps as possible without making it ridiculous and being too specific so um the time Stamps will be brok broken down and then this is the the outline of this uh course so number one obviously we’re going to go into the intro to Linux and then Linux plus certification which means you’ll get the overview of the history of Linux the distributions that are available and the Linux plus certification itself what the benefits would be for you and more importantly the structure of the Linux plus examination which means the categories that are going to be covered and the format of the exam and any prerequisites that would be required to make sure that you do very well on this training series as well as if you decide to go take the certification exam so that you can do well on the examination uh the second chapter second section is going to be all of the system architecture and the brute process property so this is very similar to what happens if you got an education in comp plus but of course we’re not going to go into the nitty-gritty the super in-depth stuff it’s going to be all the things that you would need to understand Linux as well as the kernel and the shell and the user space and the basic input output system and 
ufv and Grub and the init system so on and so forth so you’re going to get a good understanding of what these things are and how they relate to Linux then we’re going to go into the actual installation of Linux and you’ll learn how to install Linux on a USB drive so you can have a live boot as well as a full installation if you wanted to install it on a computer to have as a secate second operating system or if you just want to start installing Linux on a variety of machines if you got hired to be a Linux administrator that’s essentially what they might ask you to do is to install Linux on actual computer so that the computer runs on Linux so you’ll learn that entire uh installation process and then we’re going to go through the partitions and the file systems so you know how you can create partitions and what primary partitions are and what extensions are so on and so forth we’ll go through the package managers and of course updating removing and troubleshooting packages then we’re going to go through the basics of the command line now this is just going to be again a part of the lecture itself all of the commands that you’re going to learn in every other section that we talk about will be covered inside of the the Practical portion of this training series and we’re going to go in depth and you’re going to try so many different versions of the commands and you’ll get a really really good training on it and a good understanding of it so I don’t want you to think that just because we’re going to skim them and you know introduce you to the commands for these various sections that we’re not going to actually go and practice them when we get to the Practical section so this is going to be the basics of the command line as well as the text editors and manipulating files and creating basic uh basic shell scripts and what the the various portions or the various elements of a shell script are then we’re going to go into user and group management how to create and manage users and groups the file permissions and ownerships and access control list and things like that special permissions and of course authentication and pseudo permissions that will come to that specific section and then we have the file management and file system so this is actually going deeper into the file systems we had the introduction to file systems when we were over here but over here we’re actually going to go in depth for the file systems and how to mount and unmount a file system how to actually create a file system using uh Fisk or the M mkfs the make file system commands and how to create partitions configuring in those things uh of course and managing the swap spaces as well because that’s actually a separate partition as well so you you’ll get all the understanding that you need for file management and the file systems and how to create file systems and partitions so on and so forth uh we’re going to go into process and service management so so this is also very important as a Linux administrator all of this stuff is actually really important as a linuxis admin so um we’re going to go into the process and service management portion uh understanding what a Damon is what services are uh Process Management commands service management commands and of course scheduling jobs and then we’re going to go into networking so this is not going to be a replacement of the network plus training that you can get but we will go into enough that you will be dangerous so you’ll learn enough about networking that you will know exactly 
how to navigate the network infrastructure and how to set up the IP addressing and the DNS and dynamic host configuration protocols and how to actually uh create the network manager CLI or how to use the network manager CLI and how to troubleshoot uh network issues and of course managing the firewalls with the basic firewall which is the uncomplicated firewall as well as the IP tables so we’ll go into a good amount of stuff so that you you know enough to be dangerous right but we’re not it’s this is not going to subst intitute the network plus education cuz that’s a whole other Beast by itself and then we have security and access management so this is another very important element uh this is a channel about cyber security and hacking so security is a very very big deal to me so this is going to be something that I’m very interested in sharing with you and file system security C trud file permissions Access Control lists network security user authentication methods and configuring secure shell data encryption and secure file transfer and a lot of other things that would fall under the category of security and access management and then we’re going to go through troubleshooting and system maintenance and these are all the different elements so analyzing interpreting log files dis usage analysis backup and restoration strategies and of course system performance monitoring and then there’s virtualization and Cloud Concepts now this was not listed as an actual uh category under Linux plus but I think this is very important for you to learn because most likely if you’re going to get a job as a Dev ops manager or somebody who will go into Dev Ops or somebody who’s going to become a linuxis admin you are most likely going to deal with a cloud environment because companies these days aren’t really setting up physical servers anymore they’re most likely going to go and buy a virtual server with uh let’s say aw or with Google cloud or something like that and you’re going to need to learn how to uh essentially just navigate those things it’s not really complicated to understand and the command line and everything that’s associated with this isn’t really complicated but I just want you to understand and I want you to know how these things work and you know what is a virtual machine and what is a container for example and how do you use Virtual box or Docker these various tools that are associated with Cloud Concepts and cloud computing so it’s not uh required for Linux plus but I think you will be more powerful as a Linux administrator and if in case it does show up because by the time that you watch this video they’ve added it to the most updated version of the Linux examination in case it does show up you’ll know what you’re talking about and you’ll be ready for that as well and then finally we’re actually going to get into the practice command portion which is a massive portion all by itself and of course exam preparation tips so there will be the key commands for the Linux plus exam there’s going to be exam preparation resources like practice labs in addition to everything we’re going to talk about mock exams study guides I’ll give you a bunch of mock questions that you will need to kind of review just to prepare yourself for the examination and of course uh the exam day tips and how to just prepare yourself for that day and the final wrapup that we’ll go through so this is going to be a comprehensive training course I really wanted to be very useful for you whether or not you take the examination 
so whether or not you decide to become certified should not matter because by the end of this thing you should be so functional and you should be so competent with Linux that even if you’re not certified you can do all of the requirements of any Linux sis admin job and then you can confidently put that on your resume and even on your portfolio and say look at all of the stuff all of these shell scripts that I created and all of the various things that I can do inside of Linux so that you can just prove that you’re competent and you’re actually proficient in Linux so I’m really excited for you I’m glad that you’re here I hope you go through the entire thing but in case you want to skip around and find individual sections that would be relevant to you the time stamps are all going to be below and that’s about it so I feel like I was talking a mile a minute minute cuz I’m trying to get through this whole thing so quickly to be able to save on time you probably don’t need to speed up this presentation cuz I’m going to try to talk really really fast to just just try to get as much content uh crammed into this video as humanly possible so uh I’m excited for you I hope you’re excited too let’s jump into the very first section okay here we go so introduction to Linux and the Linux plus certification uh overview of the history dros and of course the common uses so I was created in 19 1991 by lonus tals who developed Linux as a hobby while studying at the University of Helsinki I don’t know what you do as a hobby but this guy created an entire operating system um the goals for this were to make it free make it open source alternative to Minix uh which was based on Unix and that’s the guy lonus tals he’s still around and he’s still the the father of Linux um inspired by Unix which was created by Maurice Bach uh version 02 of Linux kernel was released in 91 and then 1.0 was released in ’94 toal uh tals posted the source code for free on the web inviting other programs to improve it making Linux a collaborative project and the foundation for open- Source software and to this day it is still open source meaning you can get access to it for free and you can even make modifications to it so long as your modifications get approved by the father of Linux uh richel stalman uh who’s an American and the the free software Foundation created the gnu which was the open source Unix like OS and then they added the gnu utilities to the Linux kernel to create the gnu Linux or the modern version of Linux that you see uh when you interact with Linux or uh when any of these companies that we mentioned at the beginning of the video when they create it or they use it uh that’s the version of Linux that they’re using so it’s the most updated version of Linux and that’s the guy Richard stalman he definitely that’s like the best picture that I found of him all of the other pictures are not flattering at all and he looks like a mountain man pretty much in every single one of these pictures and it’s interesting to see that like this is one of the guys that’s like the a tech uh nerd that was like the one of the founding uh people of Linux so it’s actually it’s really interesting to see the different types of people that you see inside of the tech world so yeah this is Richard stalman so then what happened was Linux became a complete Unix clone now it’s used everywhere as you saw uh torval Remains the ultimate Authority on what new code is incorporated into the Linux kernel and AKA he is known as the benevolent dictator of 
The picture on this slide is the best one I could find to embody that. Without his sign-off, new code doesn't go into the mainline kernel that the distros ship; you can still modify your own copy, and the license even lets you distribute modified versions, but merging changes into the official kernel requires approval. That's the history of Linux. Now for the popular Linux distributions and the purposes they serve. Distributions, or distros, are different versions of the Linux OS that bundle the Linux kernel with other software: the kernel is the base, and the various additions on top of it create the different distros. Ubuntu is the most commonly used; it's popular with general users, beginners, and people who want Linux as a desktop environment, and it's known for user-friendliness: it has a GUI, point-and-click, and a lot of useful things pre-installed. It was also my first introduction to Linux; the first machine I ever looked at was running Ubuntu. CentOS and Red Hat Enterprise Linux (RHEL) are used in enterprise environments; they're built for stability and support, with a dedicated support team the company can contact, and they hold up well in large environments. Debian is known for servers and advanced users, mostly because you typically don't get a graphical interface (some installs do have one, but for the most part it's a black screen full of command-line output), and it's very big in the server world. Fedora is great for developers; it's more cutting-edge, with a lot of innovations and utilities pre-installed, aimed at people going into coding and software development. And finally, last but not least, my favorite: Kali Linux, with that absolutely amazing logo. It's built for cybersecurity and ethical hackers, which is us, everybody on this channel. Kali is based on Debian and comes preconfigured with a huge set of cybersecurity, ethical hacking, and pen-testing tools. I'm biased, obviously; I love all of Linux, but this one is definitely my favorite. Those are the most popular distros; there are others you'll see when we get to the download links, and you can always do a quick search to see what else is out there.
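If you're ever unsure which distro a given machine is running, you can check straight from the terminal. A minimal sketch; lsb_release only exists if that package happens to be installed:

    cat /etc/os-release    # distro name and version on most modern distros
    lsb_release -a         # similar info, if the lsb-release package is installed
    uname -r               # the kernel version, which every distro reports the same way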
As far as the common uses of Linux go, we already touched on them through the distros. You have servers, and you have embedded systems and IoT devices; IoT means Internet of Things, so that could be a smart appliance like a fridge or a microwave, a router, a car like a Tesla, a robot vacuum, and many other things that count as embedded systems or IoT devices. Then there's software development and, obviously, cybersecurity, and then cloud and data centers: AWS and Google Cloud run on Linux, and DevOps and cloud architecture are centered around it. About the only major cloud platform not built around Linux is Microsoft Azure, and even Azure integrates with Linux and lets you run it. For the most part Linux is the primary OS behind an enormous number of entities, platforms, software, equipment, and systems, so by studying this you're adding a skill set that I think will stay relevant for decades to come, assuming the human race is still around. Now, the benefits of this certification for IT professionals: even if you never sit the exam and just get really good at Linux, you can put the things you can do on your resume and portfolio, and the exercises we run will give you output and scripts that demonstrate that skill set. There's industry recognition and career advancement: Linux+ is a globally recognized certification that's valued in IT roles such as system administrator, DevOps, cybersecurity, and cloud computing. It's a gateway, a foot in the door, for professionals who want to specialize in Linux-based systems, and it validates versatile skills: system configuration, troubleshooting, security. Employers appreciate people who hold certifications or who at least understand the fundamentals and practical use of Linux, and in my opinion the practical side is what matters most. I know plenty of people without degrees or certifications who are very capable and are getting hired into big roles, mainly because if you claim a skill on your resume, the application process will include some kind of assessment that tests it; if you actually know what you're doing and can navigate these environments, employers often don't care whether you hold the certificate. And of course there's competitive advancement and salary potential: someone who knows Linux will command a higher rate than someone who doesn't, and there are some very nice salaries out there for people who know how to work with Linux.
The Linux+ certification is just the seal of approval confirming that you actually know it. There's a great line: what do you call somebody who got a D in med school? Doctor. In the same way, holding a Linux+ certificate doesn't mean someone scored high on the exam; it means they reached at least the minimum passing score. Plenty of self-taught people without the cert know more about Linux environments, fundamentals, and advanced scripting than people who hold it, so keep that in mind. Here's the structure of the Linux+ exam. The current exam format is XK0-005 as of the time of this recording, November 2024. It consists of approximately 90 questions, including multiple-choice and performance-based questions, and candidates have 90 minutes to complete the examination. The passing score varies a little by version, but it's around 720 on a 900-point scale, roughly 80 percent, so this isn't something you can scrape by on with a C; 720 is the minimum score you're aiming for. The main domains covered are: system management, how to install, configure, and manage Linux systems; security, implementing security best practices and understanding Linux security requirements; scripting and automation, shell scripting to automate tasks and manage resources (which, whether or not you take the exam, is a powerful thing to learn, for your work and in general); networking and storage management, configuring and troubleshooting the network, the storage, and shared resources; and performance and monitoring, watching system performance and implementing logging. Troubleshooting and system administration fall under these domains as well, and we're going to go through all of it so that you build a very solid foundation and then move into the intermediate and advanced material. The prerequisites: there are none, strictly speaking, although prior experience with Linux is very useful. We'll cover the introductory and intermediate material anyway; there's also a series of shorter videos on this channel covering Linux fundamentals and basics, so if you want to dip your toe in the water first, those are worth checking out, and CompTIA A+ or Network+ is also super useful. But there are no formal requirements, mostly because we'll be touching on those fundamentals as we go anyway.
Still, the better introduced you are to this material and the more familiar you are with it, the easier the rest will be; the information will get ingrained and reaffirmed, and you'll come away with a really strong understanding of everything that's going on. All right, let's get into system architecture and the boot process, starting with the directory structure and the file system hierarchy of Linux. One term you're going to hear constantly through this tutorial is the FHS, the Filesystem Hierarchy Standard, and we'll refer to it frequently. The FHS defines the Linux directory structure and what each folder is commonly used for. The key directories you see on screen are the big ones in any Linux installation, and for the most part they're always there after an install: the root directory /, /bin for binaries, /sbin for system binaries, /etc which is typically used for configuration files, /home which houses the users, /var for variable, dynamic data such as logs, /usr which holds user programs and data, /opt for optional software, /dev which holds the device files (the virtual interfaces to your hardware), and /tmp for temporary data that typically gets wiped on reboot. The root directory is the top-level directory, literally the root of the entire system: every other directory in the FHS lives inside it and branches out hierarchically beneath it, subdirectories under subdirectories, getting more and more specific as you go down. The root directory is owned by the root user, who has complete control over it and, by extension, the entire machine. Its structure and contents are crucial for system stability and security, meaning you don't want to mess with it unless you really know what you're doing; delete or modify the wrong thing by accident and you may end up reinstalling Linux. Installed software packages also place their files in specific directories under the root, and sysadmins regularly work under the root directory to configure and maintain the system. It's the home of everything, and it's represented by a single forward slash. The /bin directory, for binaries, contains essential binary executables that are needed during the boot process or in single-user mode. It's generally accessible to all users, and it houses the basic commands: ls lists files and directories, cat concatenates and displays a file's contents in your terminal, cp copies files and directories, mv moves or renames them, and rm removes them. Those are just a few; there are dozens of binaries inside the /bin folder.
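Here's a minimal sketch of those essential binaries in action; the file names are just throwaway examples for the demo:

    ls /                      # list the top-level FHS directories we just covered
    ls -l /bin/ls             # ls itself is just a binary sitting in /bin
    echo "hello" > demo.txt   # create a small test file
    cat demo.txt              # print its contents to the terminal
    cp demo.txt copy.txt      # copy it
    mv copy.txt renamed.txt   # move/rename the copy
    rm demo.txt renamed.txt   # remove both files

On many current distros /bin is actually a symbolic link to /usr/bin, but the commands behave the same either way.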
The /sbin directory, the system binaries folder, holds binary executables used for system administration. Executing them typically requires root privileges, so they're not accessible to everybody, and since they make system-level changes it's very important that you know what you're doing. For most of this course you'll be on your own installation of Linux at home rather than in an enterprise environment, so you will be the root user with access to the system binaries, but you still don't want to mess with them: fdisk for creating partitions, fsck for checking file systems, and init, reboot, and shutdown are all very important binaries that you shouldn't casually open or manipulate. For the most part, unless your intrusion detection system, antivirus, or other security tooling raises an alert related to them, you won't be interacting with the system binaries directly; you just need to know that /sbin houses the system-administration executables and that it's effectively restricted to the root user. So here are the differences between the two. Access: the binaries in /bin are available to all users, while /sbin is only accessible to root. Purpose: /bin holds essential user commands like ls and cp, while /sbin holds system-administration tools. Execution timing: the /bin commands are available from the early boot process and single-user mode, while the /sbin tools are for system maintenance and administration and are generally run manually rather than automatically; creating partitions, for example, happens when the root user deliberately runs the partitioning tool, or during installation when you assign your partitions, and apart from that those binaries simply sit unused. Those are the main differences between binaries and system binaries.
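To see that split for yourself, here's a quick sketch; on distros with a merged /usr these paths may live under /usr/bin and /usr/sbin, but the behavior is the same, and listing partitions is read-only, so it's safe to try:

    ls -l /sbin/fdisk /sbin/fsck   # system-administration binaries live in /sbin
    fdisk -l                       # as a regular user this mostly prints permission errors
    sudo fdisk -l                  # with root privileges it lists your disks and partitions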
Next is /etc, the configuration directory (not a binaries folder). It houses configuration files for many different services and applications, including the things you install manually; it's known as the control center of the system, and its contents vary depending on the Linux distribution and the software installed, covering both what comes preconfigured and what you add later. Some of the notable entries: /etc/passwd holds the user account information, although instead of password hashes it just shows an x in that field; /etc/shadow holds the actual password hashes; and those two files are very important in cybersecurity and ethical hacking. /etc/group has the group information, /etc/hosts maps host names to IP addresses, /etc/hostname is the system's host name, /etc/resolv.conf holds the DNS resolver configuration, the network interface configuration lives under /etc as well (you'll see what that means when we run a couple of networking commands), and /etc/sysctl.conf holds the kernel parameters. You generally won't be editing these directly. Maybe at some point you'll touch the DNS resolver or interface configuration, or, in a pen-testing exercise, you might manually add a user to /etc/passwd or /etc/shadow, but day to day you interact with these files through individual commands that add or remove entries for you; it's very rare to open the files themselves, and you'll usually go through one of the binaries we talked about earlier. Service configurations follow the same pattern: install the Apache web server and you get an Apache directory under /etc, nginx gets /etc/nginx, and MySQL and PostgreSQL each get their own configuration directories there too. So /etc houses configuration for the system services as well as anything installed on the machine: MySQL doesn't necessarily come preinstalled, but install it and its configuration directory and files land under /etc. The package managers keep their configuration there as well: apt typically comes with Ubuntu and Debian, yum with CentOS and Red Hat, and both store their resources under /etc. We'll get deeper into package managers when we reach that slide.
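A quick, harmless look at a few of those files, just reading them; the apt directory at the end is a Debian/Ubuntu-style assumption:

    cat /etc/hostname          # the system's host name
    head -3 /etc/passwd        # account records in the form name:x:UID:GID:comment:home:shell
    sudo head -3 /etc/shadow   # the password hashes, readable by root only
    cat /etc/resolv.conf       # DNS resolver configuration
    ls /etc/apt 2>/dev/null    # package manager configuration on Debian/Ubuntu-style systems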
As a final note, why is /etc important? System behavior: these configuration files directly influence how your system behaves. Security: as we discussed, /etc/passwd and /etc/shadow in particular are really important; misconfigured files pose a security risk, and the data inside them, if mishandled or accessed by someone who shouldn't have it, poses a massive security risk. Customization: you can customize the system by editing these files, either through the appropriate tools or, for some of them, by opening them directly. I wouldn't do that against the important system files, though. For software you installed yourself, opening its config in a text editor is fine, but for the system-related configuration files I'd stick to the dedicated system binaries unless I knew exactly what I was doing, because those tools give you error messages and syntax feedback, so you get a lot more help from the terminal than you do editing the file by hand. And troubleshooting: many system problems can be resolved by modifying configuration files, and, vice versa, a lot of system problems are caused by incorrectly modifying configuration files, so take notes and consider yourself warned. There's a whole slide of cautions: always back up configuration files before changing them; be mindful of file permissions, since incorrect permissions can lead to system instability or to a break-in; make sure the configuration uses correct syntax to avoid errors; and test changes in a controlled environment before deploying them to a production system. That last point applies mostly to someone hired as the system administrator inside a company: before you roll a change out to the 500 or 5,000 employees, you test it on one machine in a controlled environment, make sure there are no glitches, errors, or security vulnerabilities, and only once it's approved do you deploy it to the rest of the company, to the entire production environment.
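As a concrete example of that back-up-first habit, here's a sketch of a careful edit to the SSH server configuration. It assumes openssh-server is installed; the service is named ssh on Debian/Ubuntu-style systems and sshd on RHEL-style ones:

    sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak   # keep a copy before touching anything
    sudo nano /etc/ssh/sshd_config                          # make the change
    sudo sshd -t                                            # syntax check; no output means the file parses cleanly
    sudo systemctl reload ssh                               # apply it (use sshd on RHEL-style systems)
    # if something breaks, put the backup straight back:
    # sudo cp /etc/ssh/sshd_config.bak /etc/ssh/sshd_config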
The home directory is where user accounts are stored: if you have user1, /home/user1 is that user's directory, and inside it is the information relevant to user1, typically accessible only by user1 or by someone who has user1's password. Home is the primary location for a user's files, documents, settings, and configuration. The key subdirectories are ones you've probably seen before, very much like a Windows environment where each user has their own Documents, Downloads, Music, Pictures, Public, and Videos folders. All of them are private to that user except Public, which is used to share files and is typically accessible to the other users on the system. Ownership works the same way: everything inside a home directory is owned by that user, and by default only that user and the root user have full access to it, which ensures privacy and security. A user can customize their home directory and store whatever they want, as long as they don't install or do anything that would break the system and don't exceed their storage limits; if they do hit a limit, they get notified and have to delete some things to free up space. By default, home directories are protected from unauthorized access by anyone who doesn't have root permissions, isn't on the sudoers list, or doesn't have that user's username and password. The notable notes here: back these directories up regularly so that if the system crashes you can recover everything. File permissions, which will come up constantly through the rest of these videos, fall under security here too; the idea is simple, people who are supposed to access something can, and people who aren't supposed to can't, and as you administer a Linux machine or environment you need to keep that in mind. We'll go through the individual file-permission commands and how to set permissions later. Storage limits, as mentioned, exist so the overall system doesn't get maxed out; each user gets one and is expected to keep it in mind. And keep your home directory organized if you can. I know people with genuinely scary desktops, a pile of files and zero organization, who insist they know where everything is; fine, but if I'm your manager, I'm going to tell you to learn to organize your stuff.
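Here's a small sketch of checking ownership, usage, and limits on home directories; the quota command only reports anything if disk quotas are actually configured on the system:

    ls -ld /home/*         # each home directory is owned by its user
    ls -ld ~               # your own home directory and its permissions
    du -sh ~               # how much space you're currently using
    quota -s 2>/dev/null   # your storage limits, if quotas are set up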
So that's basically it for home. Next is /var, for variable data; I always want to pronounce it strangely, but whatever. The /var directory stores variable data: system logs, which are constantly updated with new activity, temporary files, mail spools, essentially dynamic data that changes frequently, as the name implies. The key directories inside it: /var/log houses the system and application logs, and it's extremely important; it's one of the locations we refer to constantly when doing incident response in cybersecurity, or when troubleshooting after a crash or unexpected behavior. /var/mail stores incoming mail for the users. /var/spool contains the queues for various services, including print jobs, mail, and news. /var/lib holds state information for services and applications. And /var/tmp stores temporary files created by applications; don't confuse it with the top-level /tmp directory, which we'll cover in a few slides. Why does /var matter? Health monitoring: the log files give you the data you need to backtrack and reverse-engineer what happened, for system performance and for any issues or attacks. Security analysis: same idea. Service operations: the spool directories are crucial to services like printing and mail, so if a printer problem isn't the hardware itself or the cabling, this is where you look to work out what's going on with the service. Application state: /var/lib stores the information applications need to maintain their state, whether they're enabled, disabled, or glitching. Important considerations: clean up /var/tmp regularly so you don't overload your storage, and rotate your log files, because logs can get massive if they're configured to keep data for a long time; I don't care what the system is, if it's used regularly and you haven't set it up to rotate, wipe, or back up and archive old log data, the log directory can fill up very quickly.
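A minimal sketch of poking around /var/log; the exact log file names vary by distro (Debian/Ubuntu use syslog and auth.log, RHEL-style systems use messages and secure), and journalctl covers systemd machines:

    ls /var/log                        # see which logs this system keeps
    sudo du -sh /var/log               # how much space logging is consuming
    sudo tail -n 20 /var/log/syslog    # recent system messages (path varies by distro)
    sudo journalctl -n 20              # the same idea on systemd-based machines
    head /etc/logrotate.conf           # log rotation is usually handled by logrotate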
Permissions on these files and directories are also crucial for protecting system security. If somebody can get into your log directory, they can open the authentication log and erase their tracks. They break into your system, plant a rootkit, a trojan, a worm, something with a C2 connection or a shell so they can come back whenever they please, and then they delete the evidence of it from your logs. That is very, very bad and should not be possible, which is why permissions on those logs need to stay at the root and sudo level, so only administrators can touch them, and those administrators' passwords need to be complex enough that they can't be broken with a password-cracking tool. As I keep saying, permissions will come up frequently: certain log files and directories should not be accessible to anyone other than the root user and the administrators. And back up whatever is necessary; not everything needs backing up, but the important things do, log files and configuration files among them. Next we have the /usr directory, which holds user programs and libraries: program files, library data, and documentation typically live here. Inside it you have /usr/bin, the user binaries; /usr/sbin, the system binaries; /usr/lib for shared libraries; /usr/local for locally installed software, often installed outside the package manager we'll be talking about; /usr/share for shared data files like documentation, icons, and configuration; and /usr/src for the source code of various system utilities. The same split applies here: the binaries under /usr/bin are accessible to all users, while /usr/sbin is typically for system administration and requires root privileges. One note: the root user has its own home directory at /root rather than under /home, but /usr itself serves every user on the system rather than any one of them. Why does this matter? Program execution: /usr/bin and /usr/sbin are essential for running programs. Shared libraries are used by multiple programs, which reduces disk space and improves performance; something installed once and shared through the library directory is available to every user instead of being duplicated per user. Software packages installed for the users end up in subdirectories of /usr, and system documentation lives under the shared doc directory.
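A quick sketch of exploring /usr; hier(7) is the manual page that documents this whole layout, if the man pages are installed:

    which ls cp                # everyday commands resolve to /usr/bin (or /bin)
    ls /usr/local/bin          # locally installed software that bypassed the package manager
    ls /usr/share/doc | head   # per-package documentation directories
    man 7 hier                 # the manual page describing the directory hierarchy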
That shared doc directory, /usr/share/doc, contains documentation for system utilities and installed software: actual manuals and instructions for how to use a particular tool. Key points for /usr: it's often mounted read-only to prevent accidental modification by anyone who shouldn't be touching it or who lacks root privileges. Package managers like apt and rpm manage the installation and removal of software, and in doing so they modify files within /usr; so instead of going in and manually removing files or folders, you use the package manager and its uninstall command, which removes the software's packages along with all the dependencies that came with it. You'll rarely reach into /usr and remove things directly, which is exactly why ordinary users shouldn't have permission to modify its contents; changes are made through the system's own binaries. Some files under /usr are accessible to all users, others require root privileges to modify, and that again falls under file permissions and user access.
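To watch the package manager doing that bookkeeping, here's a Debian/Ubuntu-flavored sketch using dpkg and apt; on RHEL-style systems the equivalents are rpm -qf, rpm -ql, and yum or dnf list installed:

    dpkg -S /bin/ls                            # which package owns this file
    dpkg -L coreutils | head                   # every file that package placed on the system, largely under /usr
    apt list --installed 2>/dev/null | head    # a quick look at what the package manager is tracking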
These are common themes, and the reason I keep repeating them is that I want them embedded in your brain as we go through this material; once you see the repeated patterns, you start to understand how everything connects, because most of this is common sense. You just need to know where things are housed and what the rules are for accessing them and granting permissions. Next we have /opt, for optional software. It stores additional software packages that are not part of the base system, optional installations from third-party sources. Take OBS, for example, the software I'm using to record this presentation: it came from a third party, so its package contents end up under /opt. There's a screen-recording tool that came preinstalled with my machine, but that's not the one I'm using, so its contents wouldn't be in this directory; /opt is specifically for software from third-party sources. And let me correct something I said a moment ago: it's not the configuration files; as mentioned earlier, configuration files live in /etc. /opt holds the additional software packages themselves, the actual files the software uses, while the configuration for that software sits under /etc. Why use /opt? It keeps optional software separate from the base system, which makes it easier to manage and remove; it prevents conflicts between different software packages, which is why we isolate them; and it's flexible, allowing easy installation and removal. You technically can come in here and delete things manually, but even for software that ended up in /opt I'd strongly recommend removing it with the same package manager you installed it with. Especially in a Linux environment without a GUI, like a server, there's no installer file to double-click and no installation wizard with a Next button; on the command-line interface you typically install things with the package manager, which means you typically remove them with the package manager too. So the main thing to know is the purpose: /opt holds third-party software, kept separate from the base system and the system binaries, which makes it easier to manage, remove, and isolate, so that if something goes wrong you can backtrack to exactly where the problem came from. The typical structure looks like this: installing something into /opt usually creates a directory named after the software, for example /opt/mysql for MySQL, and that directory houses the software's binaries, libraries, configuration files, and data. Most likely there will also be an /etc/mysql directory holding just the configuration files, so keep the distinction in mind: /opt/mysql has everything, including config files, while /etc/mysql has only the config files. Key points: the ownership and permissions of files and directories within /opt vary depending on the software and on whether a user or root installed it; and some packages are installed through the system package manager while others require manual installation, which in my experience usually comes up in a GUI environment, or occasionally through some other command-line tool that isn't the package manager itself.
That may sometimes be the case, but for the most part I've only ever used the package manager to install packages and their dependencies, unless I was in a GUI environment extracting an archive and clicking through an installer. Configuration for software installed in /opt is often located within the software's own directory. And cleanup: when you remove software installed in /opt, it's important to remove all the files and directories associated with it. Rather than going inside the directory and uninstalling pieces one by one, save and back up any data you need, then wipe the software's entire directory; for /opt/mysql, you'd delete the whole mysql directory to make sure everything inside it is gone too. Next, the /dev directory houses the device files: a virtual file system that represents hardware devices as files, which lets the OS interact with hardware using standard file operations. For example, /dev/sda and /dev/sdb represent hard disk drives; the sd prefix comes from the SCSI disk driver, which is also used for SATA and USB disks, and the trailing letter counts the disks, a for the first, b for the second, c for the third, and so on. There are device files for the CD-ROM and for USB drives, so external drives as well. /dev/null discards anything written to it. When we run commands that we know will produce a flood of error messages, we point those errors at /dev/null, which simply swallows them, because they're useless noise, usually a wall of "permission denied" messages. The find command is the classic case and honestly the only thing I use /dev/null for: a system-wide find produces a hundred "permission denied" lines that bury the two results I actually care about, so I redirect the errors to /dev/null and see only the matches that apply to me.
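Here's that trick in practice; the search pattern is just an example:

    find / -name "*.conf" | head              # alongside the matches, the screen fills with "Permission denied" errors
    find / -name "*.conf" 2>/dev/null | head  # error output (stream 2) goes to /dev/null, leaving only the real matches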
Then there's /dev/zero, a special file that produces an endless stream of zeros. I'd never actually used it myself, so I paused the presentation and looked it up, and here's what I found: in Linux, /dev/zero is a special file in the /dev directory that acts as a source of infinite null characters (ASCII value zero). Whenever you read from it you receive a continuous stream of zeros, which makes it useful for initializing storage with blank values. The key points: its function is simply to provide zeros whenever it's read. The use cases are disk initialization, creating a completely blank disk by writing data from /dev/zero onto it; memory allocation checks, seeing whether a given amount of memory can be allocated by attempting to read that many bytes from /dev/zero; and generating blank files, by redirecting output from /dev/zero into a new file. So it's basically a bottomless supply of zeros you can pull from to wipe a disk, test allocation, or create a blank file. I didn't even know something like this existed, very nerdy, very interesting, but now we know. So why is the /dev folder important? Device abstraction: by treating devices as files, Linux provides a consistent interface for interacting with hardware. Plug in a USB drive and a device file appears for it; mount that and you can work with the drive's contents. This matters most in a command-line-only environment, because in a GUI you just click the drive in the file browser, whereas on the command line the device file and its mounted file system are how you reach it. Device drivers, the software components that control hardware, are what create the device files under /dev; the driver itself is a piece of system software (the thing the printer vendor tells you to download from their website when you set up a new printer), and the device file it exposes lives in /dev. And user interaction: you access and control devices through these device files, often using commands like dd or hdparm. That's why /dev matters: it's the virtual interface to a physical device, whether that's a USB drive, a printer, or the hard disk in the computer.
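And here's a small, safe sketch of /dev/zero together with dd; it only creates a 10 MiB file in the current directory, nothing destructive:

    dd if=/dev/zero of=blank.img bs=1M count=10   # a 10 MiB file made entirely of zero bytes
    ls -lh blank.img                              # confirm the size
    od -An -tx1 blank.img | head -2               # peek at the bytes: all zeros
    rm blank.img                                  # clean up
    # wiping a whole disk would look like: dd if=/dev/zero of=/dev/sdX bs=1M
    # that is destructive, so treat /dev/sdX as purely hypothetical and never run it casually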
Key points for /dev: device files are often created dynamically, when a device is plugged in or when the system boots, which is why a new entry appears the moment you insert a USB drive. Access to device files may be restricted to root or to specific users depending on the device. And the correct device driver must be installed and loaded for a device to be usable through /dev: plain USB storage usually works without extra drivers, but printers usually need one; as a rule, hardware that does something in the physical world tends to need its own driver, and the device file it exposes lives under /dev. Then we have the /tmp folder, the temporary file system. It stores temporary files created by applications and system processes, and those files are typically deleted when the application or process finishes or when the system is restarted. /tmp is writable and executable by everyone, which makes it a very popular target for hackers. Most of the pen-testing exercises I've done go like this: build a shell or some malicious script outside the target machine, transfer it over with wget or a simple Python web server, and drop it into /tmp, because even when I have no access to anything else we've discussed, not the root directory, not home, nothing, there is almost always access to /tmp; once my binary or payload is in /tmp, I can execute it from there. Those permissions usually come preconfigured (you can change them, of course), largely because /tmp gets wiped on reboot anyway; nothing in it is permanent, which is exactly what makes it such an attractive target. Legitimately, it's a convenient place for applications to store temporary files without cluttering home directories or other permanent locations: system processes use it during package installations and system updates, user applications like web browsers and text editors use it too, and the files are wiped when the software closes or the system restarts, which makes it one of the most volatile directories on the machine. Some key points: many systems automatically clean /tmp on reboot or at regular intervals. Be aware of the security risks, because temp files can contain sensitive information and the directory is easy to reach. And if necessary you can clean it up manually with rm, or rm -rf, where -r means recursive and -f means force.
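You can see those permissions for yourself; the trailing t is the sticky bit, which is what stops users from deleting each other's temp files even though everyone can write there:

    ls -ld /tmp       # drwxrwxrwt: world-writable plus the sticky bit
    ls -ld /var/tmp   # same permissions, but its contents are kept across reboots for longer
    mktemp            # the safe way for a script to create a unique temporary file in /tmp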
Just be cautious not to delete important files: before any automatic cleanup or wipe, make sure there's nothing in there you should have backed up first, before you digitally napalm the whole thing. Here are some other important directories: /boot stores the boot loader files; /dev and /lib we've already talked about; /mnt provides mount points, for example for removable media; /proc is a virtual file system providing information about system processes; /srv contains data for services provided by the system; and /sys is a virtual file system with information about the hardware, the CPU, the GPU, and so on. The directories we covered in depth are the ones that will come up most frequently, and you don't need the nitty-gritty on these others, but you should be aware that they exist and that they're all part of the Filesystem Hierarchy Standard. Okay, now on to file system types and mount points. The common file system types are ext4, XFS, NTFS, and FAT32. ext4, the fourth extended file system, was designed as the successor to ext3 and ext2. It's a journaling file system for Linux, which means it records file system changes in a journal, so that if the system crashes or the power fails it can recover and pick up where it left off. It supports large files, up to 16 terabytes each, which makes it suitable for large media files and databases, and it supports large file systems, up to one exabyte, accommodating massive storage needs. It brings improved performance, especially on large file systems, and it's designed to be extensible, allowing for future enhancements and features; the bottom of the slide says exactly that, in case it's hard to read in the video. Those are the key features of the ext4 file system.
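To check what your own machine is using, a quick sketch:

    df -T /      # the file system type of the root partition (often ext4 or xfs)
    lsblk -f     # every block device with its file system, label, and UUID
    findmnt /    # where and how the root file system is mounted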
Then there's the XFS file system. XFS also uses journaling and is built for high performance, scalability, and reliability; it's used for large file systems and high-performance workloads, typically in enterprise environments. It's optimized for sequential read and write operations, which makes it ideal for file servers and databases, and it can handle files up to 8 exabytes, which is enormous. Like ext4, it uses journaling to ensure data integrity in case of crashes. It scales to huge numbers of files and directories, which is great for a central file server that many remote users access, and it makes it easy to grow the storage and add more users. It's also flexible: it supports online resizing (growing a mounted file system) and real-time defragmentation, so you can make certain changes without taking access away from anyone. It's typically used for large databases and data centers with many users. The common use cases: high-performance servers, web servers, database servers, and file servers, one central machine that serves data many users pull from; NAS devices, network-attached storage appliances that often use XFS to store large amounts of data; and virtual machines, where XFS can serve as the file system for VM disk images. A quick note: a network-attached storage device is essentially a fancy name for a storage server, a physical box that stores a lot of data, is reachable over a network connection, and commonly uses the XFS file system to house that data so other people can get to it.
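For reference, creating and mounting an XFS file system looks roughly like this. A sketch only: it assumes the xfsprogs package is installed, /dev/sdb1 is a purely hypothetical spare partition, and mkfs erases whatever is on it, so don't point this at a real disk casually.

    sudo mkfs.xfs /dev/sdb1        # format the (hypothetical) spare partition as XFS; destroys its contents
    sudo mkdir -p /mnt/data
    sudo mount /dev/sdb1 /mnt/data
    df -hT /mnt/data               # confirm the xfs type and size
    sudo xfs_growfs /mnt/data      # later, grow it online after enlarging the partition (XFS can grow but not shrink)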
The common uses of NTFS: it is the default file system for Windows NT, 2000, XP, Vista, 7, 8, 10, and 11, so all of the more recent Windows versions. Many external hard drives ship formatted as NTFS, especially those designed for Windows systems; if you want such a drive to work on both Windows and macOS (or Linux), you generally need to reformat it. USB drives can also be formatted with NTFS to store large files and folders, or reformatted to be usable across platforms.

Then we have FAT32, the File Allocation Table 32 file system. It dates from the early days of personal computing and is not as feature-rich as NTFS or ext4, but it remains popular because of its simplicity and broad compatibility, and it is still used in a lot of environments. Its key features: it is relatively simple, which makes it easy to implement; it is compatible with a wide range of operating systems, including Windows, macOS, and various Linux distributions; and it supports full read and write access, so you can create, modify, and delete files. Its limits are a maximum file size of 4 GB and a maximum partition size of 2 TB. Two terabytes is still fairly large for a single user, but you are unlikely to use FAT32 in an enterprise environment with many users and a lot of data; for the most part FAT32 is used by one person on a home computer, a USB drive, or an external hard drive of up to 2 TB. Common use cases are USB flash drives, memory cards for digital cameras and similar devices, and smaller external hard drives. That is what I do with most of my external drives: since none of them exceed 2 TB, I format them as FAT32 so I can use them on my Windows machine as well as my macOS and Linux machines.
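Two hedged sketches of handling cross-platform drives from the Linux side. The device name /dev/sdb1 is a placeholder, the ntfs-3g driver may need to be installed (the package name shown is the Debian/Ubuntu one), and mkfs.vfat is destructive, so double-check the device before running it.

    sudo apt install ntfs-3g                        # NTFS read/write support on Debian/Ubuntu-family distros (assumption)
    sudo mkdir -p /mnt/windows
    sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows    # mount a Windows-formatted NTFS partition
    ls /mnt/windows
    sudo umount /mnt/windows

    sudo mkfs.vfat -F 32 -n SHARED /dev/sdb1        # reformat a drive as FAT32 so Windows, macOS, and Linux can all use it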
All right, now we can go into the system architecture: the kernel, the shell, and the user space.

The kernel is the core component of a Linux-based operating system. It is the bridge between the hardware and the software layers, managing system resources and facilitating communication. When a program makes a request to the system (a system call), for example your text editor asking the CPU to perform some work, the kernel is the middle ground that translates and carries that request from the software to the actual hardware (the CPU, the GPU, the motherboard, and so on) and returns the result.

The key roles of the kernel are resource management plus several specific kinds of management: memory management, process management, device management, and file system management. It allocates and deallocates memory to processes as needed (here we mean RAM, the processing memory, not storage); it creates, schedules, and terminates processes; it controls access to hardware devices such as disk drives, network cards, and printers; and it manages the file system, providing access to files and directories.

Memory management specifically means allocating memory to processes as needed, dividing physical memory into virtual memory segments. It also means page fault handling: a page is a fixed-size chunk of memory, and if a process tries to access a memory page that is not currently in physical memory, the kernel triggers a page fault and loads the missing page from disk. Finally there is swapping: when physical memory is scarce, say on an older machine with 4 GB of RAM or less, the kernel swaps inactive pages out to disk (the swap space, which we will talk about a lot later) to free up memory for active processes, so the system does not crash and your active processes still get the memory they need. A few commands for observing this from the command line are sketched below.
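As a quick illustration of watching the kernel's memory management and swap space from user space (all three are standard Linux utilities):

    free -h            # physical RAM and swap usage in human-readable units
    swapon --show      # the active swap areas the kernel pages out to
    vmstat 1 5         # memory, swap-in/out, and CPU activity, sampled every second five times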
Then we have process management. Process creation and termination is one of the big pieces: the kernel creates new processes, assigns each a unique process identifier (PID), and terminates them when they are no longer needed; when you close a program, its process is terminated and its PID is released. Process scheduling means the kernel decides which processes get CPU time and when, allocating CPU time efficiently among everything that needs to run, whether it was started at boot or on demand. Context switching is the kernel switching between processes, saving the state of one and restoring the state of the next: if you go from Google Chrome to your text editor and then back, you pick up exactly where you left off because the kernel saved and restored each process's state. Interprocess communication (IPC) is also facilitated by the kernel, letting processes share information and synchronize their activities; copy and paste between programs, or two integrated processes that rely on each other, are everyday examples. Whether a process runs in the foreground (something you can see) or in the background, the conversations between processes, and between your software and the physical hardware, all go through the kernel.

Then there is device management. The kernel loads and unloads device drivers, the software components that interact with specific hardware, so programs can communicate with physical devices. It handles input/output operations: a mouse click or a keystroke is input, what appears on your screen in response is output, and the kernel transfers that data between devices and memory. The kernel also responds to interrupts generated by hardware devices such as disk drives and network cards, handling any interruption in those operations itself. A few commands for poking at processes and drivers are sketched below.
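For illustration, some standard commands for seeing the processes and drivers the kernel is managing. The PID 1234 in the kill example is hypothetical; use a real PID from ps or top.

    ps aux | head        # running processes with their PIDs, owners, and resource usage
    top -b -n 1 | head   # one batch-mode snapshot of the busiest processes
    kill -15 1234        # send SIGTERM to PID 1234 (a placeholder), asking it to exit gracefully
    lsmod | head         # kernel modules (device drivers) currently loaded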
Next is the hardware abstraction layer, also known as the HAL. This is a consistent interface that lets software interact with hardware while hiding the complexities of different architectures, and it is another role handled by the kernel. Hardware platforms vary: 4 GB of RAM behaves differently from 32 GB, and a CPU or GPU from one vendor differs from another vendor's. The abstraction allows the software you are using to run on these different hardware platforms without requiring significant modification; your program does not need to be rewritten for a machine with less RAM (it may just run slower), because the kernel handles that translation for you.

Then we have system calls, which we have already touched on. When a user-level program interacts with the kernel and requests a service, that request is a system call. You ask your calculator to perform a calculation; the kernel passes that request to the CPU, the CPU processes it, and the result comes back to the calculator as output for you. System calls enable programs to perform tasks like creating files, reading and writing data, making network connections, doing calculations, playing a video, and so on. A sketch of watching a program's system calls is shown below.
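The source does not cover it, but strace is a handy way to actually see the system calls a program makes. It usually has to be installed separately; the commands traced here (ls and cat) are ordinary utilities, so this is safe to try.

    strace -c ls /tmp                                     # run ls and print a summary count of the system calls it made
    strace -e trace=openat,read,write cat /etc/hostname   # watch only the file-related calls as cat reads one file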
Then there is security, which matters a lot to the cybersecurity and hacking community. The kernel enforces security mechanisms that protect the system from unauthorized access and malicious activity, including user authentication, access control lists, and network security features. User authentication means the kernel verifies user identities and grants access to resources based on their privileges: when somebody logs in, the system first verifies that the credentials are correct, and from then on the kernel quietly enforces what that user is allowed to touch based on the permissions and privileges assigned to them. Access control means the kernel enforces access control mechanisms protecting system files and directories: if your user does not have permission to access a certain file, it is the kernel that returns "access denied." Network security features such as firewall rules and packet filtering protect the system from network attacks, for example by blocking connection attempts from hosts that have been denied at the firewall. The kernel processes all of those requests, the system calls.

Now we have the shell. The shell is the most basic form of your interaction with the kernel: a text-based command-line interpreter that lets you interact with the computer system. You have probably seen one in any movie or show about hacking, the black screen full of white text. When you want to run a program, access something in the file system, or manage system resources, the shell is your way to interact with the computer. You are not technically talking to the kernel directly, but when you type something into the shell, the kernel takes that input, makes the necessary calls, and the system does what you asked. There are many other ways to reach the kernel (your calculator, your text editor), but on a Linux server that only has a command line, the shell is how you work.

The shell works like this: you provide input by typing a command, the command gets parsed, meaning it is broken down into its individual pieces or tokens, and then it gets executed. Take sudo nano filename, which has three pieces. sudo says you want root-level privileges; sudo prompts for your password to verify that you are allowed, and either lets you continue or tells you that you do not have access. Then nano, a text editor, is executed, and the third piece, the file name, is the argument it opens. The file's contents appearing on screen are the output, as annotated below.
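The same example laid out token by token (notes.txt is a hypothetical file name):

    sudo nano notes.txt
    # token 1: sudo      -> run the rest of the line with root privileges (sudo prompts for your password)
    # token 2: nano      -> the program to execute, in this case a text editor
    # token 3: notes.txt -> the argument handed to nano, the file to open
    which nano           # how the shell would locate the program named by the second token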
Shell scripting is an extension of those shell interactions: you put a series of commands into one document, and when you run that one shell script, all of the commands inside it run without you having to type each one manually. This is a very powerful concept in automation. If you have a repetitive task, like reviewing a log file daily to make sure nothing suspicious is in it, you can write a script that checks that log file every day for certain strings; if it finds them it alerts you that something needs attention, and if not it reports that everything is fine. That is a simple but powerful use of a shell script, and you can automate any number of repetitive tasks the same way. Shell scripts are just text files containing a sequence of commands to be executed by the shell.

Why use shell scripts? First, efficiency: instead of doing the task yourself every day, you schedule it, telling the system to run this script every day at 9:00 a.m. or every time the machine starts. Second, consistency: humans mistype commands or forget steps; a script, written once and written well, performs the task exactly the same way every time. Third, flexibility: if you want the same task done against different files or different users, you modify the script and it repeats the same series of actions for each one. Fourth, reusability: you can run the same script over and over, on any version of Linux, and it never complains or degrades.

Some system administration use cases: backing up system files, monitoring system resources, installing and configuring software, automating user account management, parsing log files and extracting data, converting file formats, deploying web applications and running tests against them, compiling and minifying code, plus general chores like renaming files in bulk, moving and copying files, and scheduling tasks. We will get into the details when we actually start creating shell scripts, so you do not need to memorize all of this right now; a quick scheduling sketch follows below.
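As a hedged sketch of the scheduling idea, cron is the classic tool for running a script on a timetable. The script path is hypothetical, and the @reboot shortcut is supported by most, but not all, cron implementations.

    crontab -e     # edit your personal schedule of recurring jobs, then add lines like:

    # minute hour day-of-month month day-of-week   command
    0 9 * * *   /home/hank/scripts/check-log.sh    # run the log check every day at 9:00 a.m.
    @reboot     /home/hank/scripts/check-log.sh    # or run it every time the system starts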
When you create a shell file, for example file.sh, the very first line needs to be the shebang line, which gives the path to the interpreter that will run the rest of the file: bash, zsh, or whatever shell you use. That line tells the computer which program interprets the commands in the document, so it knows how to run the script. Notice that the shebang starts with a hash (#) followed by an exclamation mark (!) and then the path to the shell. That is different from a plain comment: a line that starts with a single # is a comment, not processed at all, there for anyone who opens the file and wants to understand what the code does. Typically the line right under the shebang is a comment explaining the purpose of the script, and above each block (or even each line) of code there is a comment describing what it will do. Any line without a leading # is executed, which means that if you do not want the script to run a particular line, you put a # in front of it and that line is neutralized. You will see this a lot in configuration files, where many pre-written settings have a # in front of them; remove the # and that line becomes active.

Then there are control-flow structures: if/else (if this happens do this, else do that other thing, or do nothing), for loops (for each of X, Y, Z do A, B, C), and while loops (while this condition is true keep doing this, and stop when it is not). And there are variables, for example name="Hank", whose values you can use elsewhere in the script, including inside if/else statements. You can combine variables and control flow to build complex scripts, but for the most part that is the basic structure of a shell script, and a minimal sketch pulling these pieces together is shown below.
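A minimal sketch of the daily log-check idea described earlier, assuming a Debian/Ubuntu-style auth log path (adjust for your distro) and that you have permission to read it (you may need sudo):

    #!/bin/bash
    # check-log.sh - flag a suspicious string in a log file (illustrative sketch only)

    LOGFILE="/var/log/auth.log"      # hypothetical path; varies by distribution
    PATTERN="Failed password"        # the string we treat as suspicious

    if grep -q "$PATTERN" "$LOGFILE"; then
        echo "Alert: found \"$PATTERN\" in $LOGFILE - something to look into"
    else
        echo "System check: $LOGFILE looks clean"
    fi

To use it you would save the file, mark it executable with chmod +x check-log.sh, and run it as ./check-log.sh, or schedule it with cron as shown above.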
Then we have the user space. The user space is the environment where user-level applications execute: a space separate from the kernel, the front-facing environment where the user actually interacts with the system. It is an isolated environment where applications run without touching the hardware directly; requests flow from the user space to the kernel and then to the hardware. Technically the shell is part of the user space. This separation is crucial for system stability and security: you do not want the average user, who may not know what they are doing, interacting with the kernel directly, because they will break the system. Instead they get a user space with its own software that talks to the kernel, and when something goes wrong the user space is what displays the error message telling them what to fix.

The key characteristics of the user space: processes are isolated from each other and from the kernel, so one process cannot corrupt another (your video player should not be reaching into your text editor); it contains a wide range of applications, including text editors, web browsers, games, and system utilities; and privileges are limited, preventing user-space programs from accessing hardware directly or modifying system-level settings. You cannot use a basic text editor to modify the kernel, because the kernel checks your privileges and refuses. Each user's processes run with whatever privileges the system administrator granted.

The user space is separate from the kernel, but the two interact constantly: the user space is how the user communicates with the kernel, whether they realize it or not. The kernel provides system calls that let user-space processes request services: file system access (reading and writing files), network communication (sending and receiving network packets; every time you load a YouTube video, you are requesting packets of data over the network), process management (creating, terminating, and managing processes), and memory allocation (requesting and releasing memory). Through system calls, user-space processes reach hardware devices and other resources only via the kernel, which keeps the environment secure and controlled.
When you think about the kernel, the shell, and the user space, you can see how they layer on top of each other: the kernel is the bridge between the user and the hardware, but the user needs a way to talk to the kernel, so they use the tools and software of the user space (including the shell), the kernel talks to the hardware, and that is how you interact with the system.

Now we need to know how a machine boots. There is BIOS, and then its updated replacement, UEFI. BIOS, the Basic Input Output System, is common on older systems; if you are old enough you may remember BIOS messages on early computers during startup. It initializes the hardware (CPU, motherboard, and so on) and then passes control to the boot loader, which loads the rest of the system. To access BIOS setup you typically press a specific key during the boot process, usually Delete, F2, or Escape depending on the machine: as soon as you power on, you start tapping that key until the BIOS menu appears. The setup menu lets you configure system settings such as the boot order, clock settings, and hardware options, including things like hard disk boot priority, quick boot, first/second/third boot device, and password checking; it is navigated strictly with the keyboard, no mouse. Most modern computers use UEFI instead, but the idea is the same.

The key functions of BIOS: the power-on self test (POST), a series of checks that verify the hardware is working, with a series of beeps indicating the outcome (different beep patterns signal different problems); hardware initialization, bringing up essential devices such as the keyboard, mouse, and disk drives; and then transferring control to the boot loader, which loads the operating system.
So the flow goes: BIOS runs POST, initializes the essential hardware (mouse, keyboard, drives, and so on), hands control to the boot loader, which carries out the rest of the boot process, and finally control passes to the operating system itself. BIOS also provides the basic input/output services its name promises: keyboard and mouse input, display output to the monitor.

UEFI, the Unified Extensible Firmware Interface, is the modern interface designed to replace BIOS. It offers a more flexible, faster, and more secure boot process. It does essentially the same things as BIOS (boot order, passwords, boot prioritization), just quicker and with more options; a modern UEFI screen might even include overclocking and tuning controls, which fall more under CompTIA A+ territory than Linux. For Linux, what you need to understand is that BIOS came first, UEFI replaced it, and the purpose of both is to initialize the hardware, confirm the system checks pass, and hand control to the boot loader, which finishes the boot process and ultimately transfers control to the operating system, whether that is Windows, Linux, or macOS.

The key advantages of UEFI over BIOS: faster boot times; a graphical, point-and-click, user-friendly interface instead of keyboard-only navigation; Secure Boot, an enhanced security feature that helps protect the system from malicious software during startup; support for larger disk drives; and network boot, which lets you boot from a network resource, making it easier to deploy and manage systems remotely, something BIOS could not do. The overall sequence is the same as with BIOS: POST, boot loader execution, then the operating system boot. UEFI just does it faster, more securely, and more flexibly, and a quick way to check which one your machine used is sketched below.
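A hedged one-liner for checking, from inside Linux, whether the current boot used UEFI or legacy BIOS. The /sys/firmware/efi directory only exists on UEFI boots; efibootmgr may need to be installed and usually needs root.

    [ -d /sys/firmware/efi ] && echo "booted with UEFI" || echo "booted with legacy BIOS"
    sudo efibootmgr      # on UEFI systems, lists the firmware boot entries and their order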
Then you have the boot loader itself. The boot loader is the middle ground between BIOS/UEFI and the operating system. Once BIOS or UEFI has finished, GRUB, the GRand Unified Bootloader used by most Linux distributions, takes control and finishes the boot process. It is the intermediary between the hardware and the operating system: it ensures the correct operating system is loaded and provides a user-friendly way to choose between boot options. When we go through the installation portion later, you will see the boot menu let us choose between Kali and the existing Windows installation; BIOS/UEFI confirms the hardware is good, then GRUB asks which OS to load. On a dual-boot machine the menu might show Linux Mint and Ubuntu, each with its own advanced options, and load whichever you pick.

GRUB is flexible: it supports multiple operating systems, making it suitable for dual-boot and multi-boot setups; it lets you customize the boot menu, timeout settings, and default boot entry through a configuration file (a sketch of that follows below); it can be configured with secure boot options to protect against malicious boot loaders; and it supports advanced features such as chain loading, network booting, and kernel parameters, which are beyond the scope of this training. What you need to know: BIOS/UEFI checks the hardware and passes control to the boot loader, the boot loader presents a boot menu where you choose the operating system, and once you have selected one, GRUB loads that kernel and an initial RAM disk into memory and transfers control to the loaded kernel, which takes over the rest of the boot process.

Dual boot is exactly what that menu showed. You install an operating system alongside an existing one (say Linux Mint as the primary and Ubuntu as the secondary); the installer modifies GRUB's configuration to include the newly installed OS as a boot option and regenerates the boot menu; at boot time you select the operating system you want; and GRUB loads that OS's kernel and initial RAM disk and hands control over to them.
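The text skips the configuration details, so here is only a hedged sketch of what GRUB customization typically looks like. The values are examples; /etc/default/grub and grub-mkconfig are standard, update-grub is the Debian/Ubuntu wrapper, and the output path can differ (for example /boot/grub2/grub.cfg on Red Hat-family systems).

    # excerpt from /etc/default/grub (example values)
    GRUB_DEFAULT=0     # which menu entry boots if you make no choice
    GRUB_TIMEOUT=5     # how many seconds the menu waits before booting the default

    sudo update-grub                             # regenerate the menu on Debian/Ubuntu
    sudo grub-mkconfig -o /boot/grub/grub.cfg    # the equivalent on distros without update-grub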
Once the selected OS's kernel and initial RAM disk are loaded, they take over from GRUB, the operating system initializes, and you are actually in Linux.

The benefits of dual booting: flexibility, since you can keep multiple operating systems on one machine; experimentation, since you can test new operating systems; and task-specific optimization, since you can pick the OS that matches your goal, whether that is gaming, software development, or system administration. It also helps with data stored on partitions formatted with different file systems: if some data lives only on an NTFS partition, which is a Windows thing, you can boot into Windows to recover it and then switch back to Linux. If you would rather not touch your internal disk or its storage layout at all, you can instead run Linux from a USB drive as a live boot; dual boot is when you install Linux onto the physical machine itself, so Windows and Linux both live on its disk.

After GRUB comes the init system. SysVinit is the traditional initialization system used by Linux distros to start system services and processes during boot; SysVinit (or systemd, which we will cover next) is the step right after GRUB and the boot loader. It is essentially a sequence of scripts, located in /etc/init.d, that run in a specific order to bring up the system and that are responsible for starting and stopping system services. It uses run levels: run level 0 is the halt state, run level 1 is single-user mode, run level 5 is the default full multi-user mode, and so on, chosen according to the configuration. During boot the system transitions through the run levels, starting low and moving gradually higher, executing the corresponding initialization scripts for each level along the way. The limitations of SysVinit are that it is fairly complex to configure and difficult to follow if you do not know the scripting syntax, and it can be relatively slow, especially on systems with many services or limited hardware (little RAM, an old CPU).
It is also limited in parallelism: SysVinit starts services one after another, sequentially, which is inefficient. You generally want things starting at the same time; if each service has to wait for the previous one, and the hardware is old and slow on top of that, the boot can take a long time.

That is why you want systemd, the more current init system. It is more sophisticated: it does what SysVinit did, but faster, more flexibly, and with more features, much as UEFI does what BIOS did but better. During the boot process systemd takes control and starts the essential services; it manages the life cycle of system services, including starting, stopping, and restarting them; it ensures services start in the correct order based on their dependencies; and it logs system events and application messages to a central journal, something SysVinit never provided, so if something goes wrong you can backtrack through the log and find out what happened. It is faster because it starts services in parallel rather than sequentially, which significantly reduces boot time, and it resolves service dependencies automatically, so everything a service needs is started for it without you, or the average user, having to think about it. It provides a unified framework for managing system services: starting, stopping, restarting, enabling, and disabling them. It supports socket activation, meaning services can be started only when they are actually needed, further improving performance (sockets also matter for networking, but that is beyond our scope here). And it provides journaling for crash analysis and log review, timers for precise scheduling of tasks, and device management for disks, network interfaces, and USB devices. systemd is most likely what you will deal with in your actual responsibilities as a Linux administrator; a couple of its journaling and timer commands are sketched below.
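For illustration, the journal and timer features are exposed through journalctl and systemctl. The unit name is an assumption: the SSH service is called ssh on some distributions and sshd on others.

    journalctl -u ssh --since "1 hour ago"   # recent log entries from one service's unit
    journalctl -b -p err                     # everything logged at error priority since the current boot
    systemctl list-timers                    # scheduled systemd timer jobs, with their last and next run times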
You will most likely not have to deal with SysVinit in the real world unless you are working with a genuinely old system. If you do, just remember: everything runs off the run levels and their associated scripts in /etc/init.d, services start sequentially rather than in parallel, dependencies are not handled well, and it is fairly slow. Those are the key points to keep in mind about SysVinit, and systemd takes all of that, fills in the gaps, and adds more, as you just saw.

Now let's look at run levels and boot targets. Run levels belong to SysVinit: each run level represents a specific system state, and the system transitions through them during boot. Run level 0 halts (stops) the system; 1 is single-user mode; 2 is multi-user mode without NFS (network file systems); 3 is multi-user mode without a graphical user interface; 4 is unused; 5 is full multi-user mode with a GUI, which is typically the default; and 6 reboots the system. When the system boots, it starts at a low run level and works its way up, starting the services associated with each level as it goes. The scripts for each run level live under /etc/init.d and define the actions taken when entering or exiting that level. The init process, which manages run levels, can be instructed to switch levels with the telinit command (or by a specific signal): telinit 1 asks for run level 1, switching the system into single-user mode and stopping most services, while telinit 5 asks for run level 5.

systemd replaces run levels with boot targets, a powerful mechanism for managing system state. A target represents a group of services that should be started or stopped together, and by defining and controlling targets you can efficiently manage the system's behavior under different circumstances. A target works through dependency management, activation, and deactivation: systemd automatically determines the dependencies between the services a target needs and starts them in the correct order; when a target is activated, systemd starts all of the services associated with it, including their dependencies; and when a target is deactivated, systemd stops all of the associated services. The run-level and target switching commands are sketched below.
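A quick sketch of the two generations side by side; all of these commands are standard, and switching targets or run levels on a live machine will stop services, so treat them as illustrations rather than something to run casually.

    runlevel                                       # SysVinit: previous and current run level (e.g. "N 5")
    sudo telinit 3                                 # SysVinit: switch to multi-user mode without a GUI
    systemctl get-default                          # systemd: the target the system boots to (e.g. graphical.target)
    sudo systemctl isolate multi-user.target       # systemd: switch targets right now, like changing run level
    sudo systemctl set-default multi-user.target   # systemd: make that target the default at boot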
The common targets are these. multi-user.target is the default multi-user target: it starts the services required for a multi-user environment, including networking, file systems, and basic system services. graphical.target starts graphical services such as the display manager and desktop environment on top of that; if your version of Linux has a GUI, this is the target it boots to. rescue.target starts only the services necessary for system recovery, such as a minimal shell and network access, which you normally only use when troubleshooting. emergency.target is the most minimal target, starting only the critical services required for system maintenance, generally without network access, for when you are working directly on a broken system and trying to figure out what is going on.

If you do need to manage SysVinit scripts, you have to know where they live: /etc/init.d, with each script typically named after the service it controls (apache2, mysql, ssh, and so on). Each script contains the instructions for starting, stopping, and restarting its service, and SysVinit executes them in a specific order so that services come up in the correct sequence and their dependencies are met. For example, if a service requires the Apache 2 server, Apache has to be running before that service can start; try it the other way around and it simply fails.

With systemd you manage services directly, and the pattern is simple and intuitive: sudo systemctl (system control) followed by enable, disable, start, stop, status, restart, or reload, and then the service name. enable makes a service start automatically at boot, while start launches it right now; disable and stop are the reverse. A common pattern is to enable a service and then start it: after enabling, a status check shows it enabled but not yet active, and once you start it, the status shows active (running). If you need to restart a running service you use restart, and if you have modified the service's configuration file and want the changes applied, you can usually just reload it rather than restarting. The full sequence is sketched below.
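The same lifecycle as commands. The unit name apache2 is an assumption (it is httpd on Red Hat-family systems); substitute whatever service you are managing.

    sudo systemctl enable apache2     # start automatically at boot
    sudo systemctl start apache2      # start it right now
    systemctl status apache2          # should report "active (running)"
    sudo systemctl reload apache2     # re-read the configuration without a full restart
    sudo systemctl restart apache2    # stop and start it again
    sudo systemctl stop apache2       # stop it now
    sudo systemctl disable apache2    # no longer start at boot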
Okay, so let's talk about installation and package management, specifically preparing for and installing distributions. Selecting a Linux distribution has a lot to do with what you are trying to accomplish. There are desktop-oriented distros such as Ubuntu, Fedora, and Mint; server-oriented distros such as CentOS, Ubuntu Server, Debian, and Red Hat Enterprise Linux; and security and pen-testing distros such as Kali Linux. Depending on what you want to do, you download the ISO image from the distribution's official site. On the Kali website, for example, you will see installer images, virtual machines, cloud images, mobile and ARM builds, containers, live boot images, and so on. For this portion of the tutorial I am going to use a live-boot ISO image, meaning we will set up something we can boot from a USB drive. You can use one of those small thumb-sized USB sticks, or, as I am doing, an external drive attached by cable; I chose the external drive mainly because I also want to use it as a storage container, and a typical thumb drive does not hold terabytes, while I need room on the same drive to store video recordings for the rest of this tutorial. Essentially, though, the live installation (live boot) can be put on any USB-attached drive. The same applies to the other distributions: you want the ISO image for the distro you have chosen, whether you plan to install it on your computer directly, in a hypervisor or cloud virtualization environment, or as a live image booted from a USB drive. In every one of those cases you get the ISO from the official links, and I will put those
links in the description below so you can access them directly. I highly recommend that you do not get ISOs from torrent sites; download them directly from the official websites for each distribution, so you can be sure the ISO has not been manipulated and you are not downloading something that has been tampered with. That is the big point here: do not download a pre-modified image, even if the person offering it claims the modifications are helpful; learn to make those modifications yourself, because with a torrent you never know whether malware, ransomware, a trojan, or a worm has been baked in. Download from the official sites so you get verified ISO images that are good to go.

As for the installation process itself, here is the step-by-step overview, and then we will go through a live installation of Kali Linux onto an external USB drive so you can see the whole thing. First, prepare your external drive. I always format it, especially a fresh drive, so that it can be used by all of the major operating systems (Windows, macOS, and Linux); if it is formatted for only one of them you will be hindered and limited. Even on a brand-new drive you can use macOS Disk Utility, or the equivalent tool on Windows, to format it so it runs everywhere, and I will show what that looks like in a bit. Also make sure it has enough storage space for Kali Linux; the ISO is not large, but the recommendation is to have at least 4 GB free on the drive to be safe. Second, download the ISO of the distro that matches your system architecture and your needs from one of the official links. Third, create the bootable media. Etcher and Rufus are both free tools for this (I will use Etcher on macOS; Rufus is the usual choice on Windows); they write the ISO onto the USB drive as a live image so that the drive becomes a bootable device, and once you connect it to a machine that allows booting from USB, you can load that Kali Linux image from the drive. A command-line sketch of verifying the ISO and writing it to USB follows below.
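For those who prefer the command line, a hedged sketch of checking a downloaded ISO against the checksum published on the distribution's site and writing it to a USB stick with dd. The ISO file name is a placeholder, /dev/sdX must be replaced with your actual USB device (check with lsblk), and dd will irreversibly overwrite that device.

    sha256sum kali-linux-live-amd64.iso     # compare the output against the SHA256 value published on kali.org
    sudo dd if=kali-linux-live-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync   # write the image to the USB drive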
Technically you're then walking around with what is essentially a computer on a USB drive; that's what it feels like. So you turn the USB drive into bootable media (we'll do that live as well), and then you boot from the USB drive. That part is fairly simple and straightforward: you go through the wizard, and once I boot from the USB drive I'll record the specific options you need to choose. In nearly every case you'll need to get into the BIOS or UEFI settings, the very base settings of the computer, so you can choose whether to boot from the internal disk or from an external device; we'll be booting from the external one. Then there's the Linux installation itself, which is typically handled through Rufus or Etcher, though sometimes it happens on the computer during the boot process. If you're running from a USB drive as a live boot, you don't necessarily do a full Linux installation at all, it just loads from the drive, so that step is optional. The same goes for the setup questions: language, location, keyboard layout, and so on may be handled in Etcher or Rufus, or while booting. Next is the installation target: in Etcher or Rufus you select the external SSD, that is, the USB drive, as the target, and the live version of Linux is written onto it. Then you just go through the motions. You may be prompted to set up a username and password; if you aren't, the default username and password are both kali, and once you're logged in you can change them. Finally you restart the computer, boot into Linux from the external SSD, and the operating system loads; you're inside Kali Linux. That's the step-by-step; now I'm going to go through those steps and show you what they look like.

Okay, this is Disk Utility on my Mac. You can see my primary 2 TB drive, the drive I've named Linux, and a balenaEtcher disk image that's mounted because that's what we'll use to write the Kali Linux ISO. I've already formatted this drive, but I want to show you the process so you can make a drive usable by all of the operating systems. I've clicked on the drive itself and gone to Erase; the name (Linux) is fine, and the format is the big piece here. MS-DOS (FAT), as you can imagine, is the Microsoft option, and Mac OS Extended (Journaled) is macOS-specific. What we want is ExFAT, the cross-OS choice, so the drive can be used on essentially any operating system.
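If you prefer the command line to the Disk Utility GUI, here is a minimal sketch of the same erase-and-format step on macOS (the disk identifier /dev/disk5 and the volume name LINUX are placeholders; confirm the identifier with diskutil list before erasing anything):

    diskutil list                                   # find the identifier of the external drive
    sudo diskutil eraseDisk ExFAT LINUX /dev/disk5  # wipe the drive and format it as ExFAT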
If your formatting options don't exactly match what you see here, a quick Google search, or a conversation with Gemini or GPT, will point you to the format that lets the drive be used and booted from on any kind of machine: macOS, Windows, and so on. Then there are the Security Options. This drive previously had data on it, and I left the slider at the fastest setting, which means the old data could potentially be recovered; it's not the most secure option because it doesn't overwrite the drive. If you take the slider all the way to the other end, it overwrites the drive with multiple passes of zeros and ones, making the old data essentially unrecoverable. That choice is up to you: for a brand-new drive it doesn't really matter, but if you're wiping a drive you previously used, you might prefer the faster option so the data could potentially be recovered later. Once you've selected your options you click Erase, it wipes the drive, and you have what is effectively a fresh drive for the installation process. Afterwards it shows about 1 TB free with roughly 12 MB still in use, which I presume is leftover metadata from the data I erased. That's it: the USB drive is now ready for installation using either Rufus or Etcher.

Now that my drive is ready, I'll choose the ISO that's most applicable to me, and in this case that's the live boot image. With a live boot the host system is left unaltered: your main computer isn't changed in any way, but the live system gets access to the host's hardware, so if you have 16 GB of RAM, the live boot uses that 16 GB of RAM, and it runs the customized Kali kernel. Performance can drop under heavy I/O (input/output), but otherwise the host isn't affected, because the live boot is simply using the main system's hardware. Very quick, very easy, and you get a full Kali (or any Linux) installation; this isn't limited to Kali, you can run the other distributions from a USB drive the same way. So I'll click Live Boot and pick 64-bit, because my machine can handle 64-bit, and from here on I'll be using a Windows machine, which I've also verified is ready. Next to the download you'll see a torrent link and a "sum" value: that's the hash of this particular ISO image. For security purposes, if you don't get your ISO directly from the Kali website, if it came from a torrent site for example, you should run sha256sum against the file and confirm that the value matches the published one exactly.
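As a minimal sketch of that verification step (the filename is only a placeholder for whichever live image you actually downloaded):

    sha256sum kali-live-amd64.iso        # Linux: prints the SHA-256 hash of the file
    shasum -a 256 kali-live-amd64.iso    # macOS equivalent

Compare the printed value character for character against the sum shown on the official download page.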
If the value differs by even a single character, don't use the image: the official SHA-256 hash published on the site is the value Kali has confirmed, so a matching hash is what tells you the image can be trusted. That's the best way I can explain it. The download itself is about 4.3 GB, so I'll click Download, and once it finishes we'll go through Etcher and write it to our USB drive.

Okay, the ISO has downloaded and I have Etcher open. As you can see, it's a very simple interface, nothing complicated. We're going to flash from file, using the ISO we just downloaded; you can also flash from a URL or clone an existing drive, but I'll choose Flash from file, pick the ISO sitting in my Downloads folder (it ends in .iso), and click Open. Then we select the target to write to. In my case that's the Seagate Backup Slim, the 1 TB drive; that's the one I know I want. There's also the Apple SSD and the 2 TB Seagate portable drive, and I don't want to touch those in any way. Because I've renamed these drives and Etcher isn't showing the renamed labels, there's a way to confirm which physical drive has been assigned to disk 4, disk 5, and so on, and we'll do that in the terminal.

I've opened my terminal, and the command is very simple: diskutil list. Press Enter and it prints a lot of information, including all of the partitioned drives, and those partitions carry the names we've assigned to them. Etcher showed /dev/disk4 and /dev/disk5 as the two external drives with substantial storage. The 2 TB one shows the name I gave it, the primary 2 TB drive, not the factory name it shipped with; the other one is the drive I named Linux, the one I'm most interested in, at about 1 TB. The 2 TB drive doesn't have much space available because I've stored so much data on it, so it's not a good candidate; I definitely want the one named Linux, and I can see it lives at /dev/disk5. So I've confirmed what I needed: the drive I want is /dev/disk5, and I can continue the installation in Etcher.
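For reference, this is roughly the identification step on macOS and its Linux counterpart (the device names and labels these commands print will of course be specific to your machine):

    diskutil list                          # macOS: disks, their partitions, names, and sizes
    lsblk -o NAME,SIZE,LABEL,MOUNTPOINT    # Linux: tree of block devices with labels and mount points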
And here it is, just as I figured, since I knew it was the 1 TB drive: the Seagate Backup Slim media. I select it as my target, and once that's done you just click Flash. Etcher warns you that you're about to erase an unusually large drive and asks whether you're sure the selected drive isn't a storage drive; I'll say yes, I'm sure, because I know I can still use it for storage once I launch Kali Linux from my other computer. You'll also be asked for your password, since writing to the drive requires privileged access. I hadn't pressed Enter, so it actually stopped; we'll retry it one more time to make sure it runs correctly, and I'll enter my password at the end to give it the permission it needs, and then it will create the drive for us. Okay, there we go: the password went through, Etcher got permission to access this particular drive, and you can see it working. It goes very quickly because the ISO isn't massive; maybe another 30 seconds and Kali Linux will be written to this external USB drive, and then we can boot it and talk about the rest of the material on partitions, package managers, and so on.

All right. When I reboot my computer and keep pressing F12 (this is on a Windows machine), it takes me to the boot menu. From there the option I want is the boot-from-file screen, which is F9 for me; I choose to boot from a file, select the Kali image, go into EFI, then boot, and choose the 64-bit version. That lands us on the Kali menu you get from any USB drive (or even a CD-ROM, if you still have one), with options to boot live from the USB plus the installer options. For this section I'm going to boot the live USB image with USB persistence. As soon as you pick that option it brings you to the loading screen; it usually takes about 20 to 30 seconds, and then the operating system is running, ready to go, with the utilities installed. That's how quick it is to boot from a USB drive without going through a full installation. You'll see it load here in a couple of seconds. I had to record this part of the screen with my cell phone, which is why the image quality looks like this; once we're doing screen capture inside the Kali machine it will be much clearer. And here we have it: the operating system is loaded and ready to go. We have the file system with all of the system binaries, you can open a terminal and it runs quickly and smoothly, and nearly all of the utilities come pre-installed.
If you want to get to your home directory as the logged-in user, you access it through the file system, and if you go through the menu you get access to all of the utilities and shortcuts; it's a fully functional operating system. What I'm going to do now is restart it, which takes me back to the beginning of the computer's boot process. We press F12 again to reach the boot menu and repeat the same steps as before: open the boot-from-file screen with F9, boot from a file, choose the Kali live image, go into EFI, then boot, and pick the 64-bit version again. This time, when we reach the menu, I'm going to click the install option so we can go through a full install and you can see what the Kali Linux installation wizard is like, including how to select your partitions.

Much like any other operating system booting for the first time, it asks for your preferences: language, country, and the variant of English in my case. I'll keep pressing Next until it gets to the point where I have to choose my partitions, where it asks how I want to split the storage of the computer I'm on. I initially described this as a type 2 hypervisor situation, with Windows as the primary operating system and Kali running on top of it, but I stand corrected: this isn't a hypervisor scenario at all. Once the installation finishes, it's a dual-boot setup, meaning Kali runs directly on the hardware of my computer rather than on top of the Windows OS. Instead of logging into Windows and launching Kali through a hypervisor, whenever we restart and reach the boot menu we can choose to boot Kali Linux instead of Windows. You can see I'm setting up my username and a password for the installation here; all of this is very intuitive and the instructions are easy to follow. Then we get to the part where we select how to create our partitions. If you don't know what you're doing, you can simply follow the recommended, guided option, and it creates the most basic partition layout for your root directory, the users' home directories, and the other primary and extended partitions needed to boot Kali Linux. I'm using the external drive you can see here, which has about a terabyte of storage.
That storage will be used for the partitioning and the file system, and it's what I'll use for the rest of the wizard. It's a guided process: once you accept and press Enter, it starts creating the partitions and then takes you to the boot menu, very much like the live version. You restart the system, select the Kali Linux installation, and it loads Kali looking exactly as it did from the live USB boot, except this time you're running it directly on your hardware. So I'm restarting my computer, and then we'll get back to the presentation.

All right, let's talk about managing partitions, file systems, and disk usage. Here are some important partitioning concepts to keep in mind. The role of partitions in isolating system or user data is to divide the space on a physical storage device: think of partitions as logical divisions of a physical storage device that let the operating system manage data in separate, isolated areas on the same device. They're key to organizing data, and they help with performance, security, and manageability. The visual is really what I want you to wrap your head around; I work well with visuals and I think you'll get value from it too. On the left, say you have two physical storage devices, piece one and piece two, and piece two has been broken into three separate pieces: that's partitioning. The thing being divided can be an actual physical device, or, as we'll see on the next slide, a primary partition that's broken down further, but the idea is exactly what the name implies: you're breaking storage space down into parts. The physical device could be the drive in your server, your laptop, your Kali machine, or an external drive you've plugged in. As you install Linux, whether on a physical drive or as a virtual machine, the installer asks which drive to use for partitioning; you select, say, your external drive as the device to partition, and from there it either guides you through the options or lets you lay things out manually, as we did in the installation portion of this video. That's what partitioning is. As for the advantages of partitioning, they're roughly three-fold.
First, isolation of data: keeping system files separate from user data reduces the risk of either one being corrupted in the event of a system failure. You isolate the two kinds of data from each other so that if something happens to one, it doesn't take the other down with it. Second, performance: separating frequently accessed files, such as the contents of the /var directory, from system-critical files can improve the performance of the machine. Frequently accessed files include logs, dynamic data, email spools, and so on; keeping those apart from system-critical files helps the machine run smoother and faster. Third, security and recovery, which is a big one for this channel since we talk about security all the time, and you'll probably work incident response at some point, where backups and protecting crucial data really matter. Isolating system files in a partition separate from user files makes it easier to manage permissions and restrict access to those system files, and it enables quicker recovery in the event of data corruption or system failure. Those are the key advantages of partitioning.

Now for the common primary partitions. Most Linux systems break down to at most four primary partitions. One of the biggest and most important is the root partition, the main directory under which everything is installed, with the home and /var directories and everything else hanging off of it as extensions of root (keep that word, extension, in mind). The root, home, and /var partitions are the most common; when you go through the installation of any Linux system you'll see them as options, either using one location as the single main partition or breaking it into these three. Swap space is a separate partition for managing memory overflow, which we'll discuss in a bit. In a lot of cases these common partitions are also the primary partitions, so that's what we'll talk about now. Primary partitions are the partitions that can be used directly to boot the operating system; the root partition, the forward slash (/) that represents root, is the primary partition in that sense. The limitation is that the traditional partitioning scheme allows a maximum of four primary partitions per hard drive, so it's common to use primary partitions for the main operating system and important system data, and to put other data on extended and logical partitions. Going back to the list: root is the main directory where everything in the OS is installed; home stores user-specific files, so if you have five users there will be five separate directories under home, each typically named after its user; and then there's the /var partition.
/var is about isolating frequently changing files, like logs, that grow in size over time and eventually need to be rotated or wiped to preserve space on the computer's overall storage. If your computer has, say, one terabyte of storage, /var can take up a lot of it: without proper scheduling of backups and cleanups that move data off the main drive onto another storage device, it can overflow quickly and consume a large chunk of that terabyte. Those are the primary partition types. Taking them in turn: the root partition is, essentially 100% of the time, the primary system partition, the main partition where the Linux operating system is installed. It contains the system files, libraries, binaries, and everything else we covered when we looked at the file system earlier: the libraries, the binaries and system binaries, the temp directory, the logs, and the other extensions of the root partition. Then there's the boot partition, which contains the boot loader, the kernel, and the other files needed to boot the Linux system. It's usually small, around 500 MB to 1 GB, because storing the kernel and bootloader doesn't require much space, but it's still a main partition in the sense that without it the system can't start. The home partition is another of those primary partitions, and these three hold the kinds of data that are stored more or less permanently on the machine. The swap partition, by contrast, deals with memory: it's used as virtual memory when the physical RAM of the computer is maxed out and you need a place to hold data temporarily until you no longer need it. Swap holds data similar to what lives in RAM, volatile, dynamic data that moves in and out during a live session, and typically when the system reboots it is reset and you don't have to worry about it anymore. Note that hibernation does not count as a system reboot: a reboot is when the system is actually restarted, while hibernation is when it goes to sleep and then recovers everything that was in RAM, from the hibernation file and the swap partition, to pick up where it left off. So root, boot, and home are permanent partitions whose stored data survives reboot after reboot, while swap holds volatile data that is typically reset and wiped every time you reboot the computer.
And again, hibernation, the computer going to sleep, does not count as a reboot. So those are our primary partitions; next are the extended partitions. An extended partition is a container for additional logical partitions: it is not itself the storehouse of the data, it holds other logical partitions. It exists to bypass the four-primary-partition limit by dedicating one of the primary partition slots as the extended partition; inside it, the directories that would otherwise live entirely within the root file system, /tmp, /var, and so on, can be carved out as their own logical partitions and mounted at those points. Primary partition slots can, and often do, get used this way, but only one extended partition can be created on a physical drive, and within it you can have numerous logical partitions, essentially as many as you need. Extended partitions are commonly used on systems that need more than four partitions without moving to a newer partitioning scheme like GPT. A lot of traditional partitioning is still in use in the modern world, so you're very likely to run across the legacy scheme, four primaries with one of them acting as the extended partition that holds a set of logical partitions, and you'll need to know it as a working Linux administrator. When you deal with a GPT partitioning scheme instead, things are easier, simply because GPT allows more than four primary partitions and you can compartmentalize your data more readily. Here are some common extended examples. /var (with /var/log, /var/mail, and so on underneath it) can be its own primary partition during installation, but it can also be one of the logical partitions, an extension of root (the leading / is root, /var is the extension) that holds log files, databases, email spools, and any other dynamic content that is frequently used or updated, which helps manage large amounts of data that change often. /tmp, which holds temporary files, is another directory commonly split out from root in the same way. /dev is another branch under root, though it holds device files rather than user data (on modern systems it is typically a virtual file system rather than an on-disk partition). Essentially, you're extending beyond the single root partition, and the directories carved out of it become the logical partitions stored inside the extended partition.
Hopefully that's starting to make sense. Finally, we have the logical partitions themselves: the subdivisions within an extended partition. Within the home area, for example, you can give each user's directory its own logical slice; within the /var area you can have /var/mail, /var/log, and so on as the subdivisions that act as the logical pieces of that extended partition. Going back to the earlier image for another visual: if those two blocks are our primary partitions, and one of them holds the extended pieces, then those pieces could be the /var piece, the /dev piece, the /tmp piece, or the /home piece, whatever you need them to be. Expand one of them, say the /var piece, and inside it you'd have log, mail, and whatever else; that's how it breaks down. The primary gets broken down into extensions, and an extension can be broken down further into multiple logical partitions. It's essentially a way to organize and compartmentalize your data, with the benefits we discussed earlier: better access, better performance on the system, easier handling of security and backup issues, and if one thing crashes it doesn't take everything else with it. So partitioning serves multiple purposes, and it's a fairly simple concept once you get a chance to wrap your head around it: there are four primary partitions; one of those primaries can be turned into an extended partition; and that extended partition becomes the container for the logical partitions inside it. If it helps, you can think of partitions as containers; that's really all partitioning is at the conceptual level. To give you an example of how this looks on the file system side, say you have an extended partition, /dev/sda4, on your disk. The extended partition itself doesn't store any data directly; it's a container, and within it you can have multiple logical partitions that serve various purposes.
For example, within /dev/sda4 we can have /dev/sda5 allocated to /data. This assignment is made either during the partitioning step of the installation or in the configuration file stored in the /etc directory, /etc/fstab, where these partition-to-mount-point mappings live; we'll talk about that in a bit. The logical partition is created inside the extended /dev/sda4 and assigned the mount point /data, meaning that when you access /data in your file system you're actually accessing that specific logical partition, which houses all of the data under it. It may seem a little confusing at first, but once it clicks you'll see what it means, and I'll show you the tree structure so you have that visual of how these pieces connect. It's typically used for storing user data, documents, media files, and so on; that's what the /data folder holds, and it gets mounted onto the logical partition sitting inside the extended partition. We can also have /dev/sda7 inside /dev/sda4 (I meant sda6 and accidentally clicked 7), allocated to /backup: a third logical partition within the same extended partition, assigned the mount point /backup, used specifically for storing backup files and system snapshots, which helps keep backup data organized separately from other system and user data. In the tree visual, this is the extended partition, and within it are multiple logical partitions: one mounted at /data, one mounted at /var, and one mounted at /backup. You don't need to go really deep into this; it's beyond the scope of what's covered in Linux+, but you do need to understand the difference between primary, extended, and logical partitioning so that when you're asked questions or shown examples you can tell the partition types apart. A final note: during installation, the wizard will guide you through partitioning and ask how you want to partition your hard disk. Even if you boot from a USB drive as a live boot, you can choose USB persistence and keep your partitioning options; in that case you'll likely have to edit your partitions manually, creating them with the fdisk or cfdisk command and then mounting each new partition via the /etc/fstab configuration file. (I'm not sure yet whether this part of the lecture will be edited in before or after the installation wizard walkthrough, but either way you will see that example of how the partitioning is actually done, especially in the wizard.)
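To make the /etc/fstab idea concrete, here is a minimal sketch of entries for the hypothetical sda5/sda7 layout described above (the device names, mount points, and file system types are illustrative only):

    # /etc/fstab format: device   mount point   type   options   dump   fsck order
    /dev/sda5    /data      ext4    defaults    0    2
    /dev/sda7    /backup    ext4    defaults    0    2

In practice you would usually identify the partitions by UUID (see the blkid command) rather than by /dev names, so the entries keep working even if device ordering changes.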
In 99% of environments there's going to be a wizard that guides you through the partitioning and does all of this for you. If you want to boot from a USB with persistence you'll need to know how to configure the /etc/fstab file, but more often than not you won't need to worry about it, because if you're booting from a USB you're usually using it for its live capabilities rather than as a storage device, which is a different situation and beyond the scope of this training series. So that's how you make sure your partitions are created and configured during installation, and you'll see that walkthrough either right before or right after this piece.

Swap space is disk space designated to act as overflow for the system RAM. When the physical RAM on your system is exhausted, the system uses the swap space to temporarily hold inactive memory pages, allowing active processes to continue without crashing. Say you have a bunch of applications open and a lot of them aren't actively being used: their memory pages get stored in the swap space, and your physical RAM is dedicated to whatever you're actively using. If both your swap space and your physical RAM overflow, that's when you see the system slow down drastically, and sometimes even crash, because you're overloading the machine beyond what it can handle. More often than not, though, when your physical RAM is maxed out, the system just pushes whatever is temporarily inactive, a minimized window, a document you aren't touching, out to the inactive memory pages, and when you call on them again they come back to the foreground and start using physical RAM again. So swap space is for inactive memory pages, keep that in mind, and the system alternates between what the physical RAM is processing and the inactive pages sitting in the background. By doing that, swap space extends your memory capacity: when memory demand exceeds what's available in physical RAM, swap acts as a buffer that helps prevent system crashes. It takes a lot to get there on modern computers; you typically only see it if you haven't rebooted in ages and have a million tabs and a bunch of applications open. I haven't seen a system crash on my machines in a very long time, because the swap space along with the RAM I've chosen works well; I like buying computers with a lot of physical RAM because I do video editing and gaming and want things to run fast, and the CPU's clock speed (GHz) also contributes to the overall processing power.
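If you want to see how much swap a running system has and how much of it is in use, this is a quick sketch (standard commands; the output obviously varies per machine):

    free -h          # human-readable summary of RAM and swap usage
    swapon --show    # active swap devices or files, with their sizes and priorities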
If you have a strong CPU and plenty of RAM, you're not going to worry about this kind of crash. But if you're dealing with a machine that's mainly running Linux for data processing, acting as a database, say, and for whatever reason it can't handle the workload being thrown at it, then the swap space kicks in and helps alleviate some of the pressure on the physical RAM so there's no crash; it extends the memory capacity of the physical computer. Swap also facilitates hibernation, which we touched on earlier: it's needed for the system to go to sleep. When the system sleeps, a lot of what was running becomes inactive, and those data pages, memory pages, are stored in the background; when the computer wakes up, or comes out of hibernation, all of that is recalled from the swap space so your computer can pick up where it left off. That's really powerful: when you have a laptop and you don't actually shut it down, you just close the lid, that's hibernation, and when you lift the lid back up you're waking it, and it pulls its state back from swap. This applies to Linux, Windows, and macOS alike; they may name it differently, but the concept is the same: an extension of physical RAM capacity that keeps the computer running smoothly and, if it goes to sleep, lets it recall the data and processes that were active so it can resume where it left off.

Now, some general commands to keep in mind; these are important partitioning commands. To create a partition you can use either the fdisk command or the parted command, and we'll cover both in more depth shortly. These are sudo commands, so they require root privileges, or at least a user on the sudoers list. Running sudo fdisk /dev/sda opens that disk so you can create, modify, or delete partitions on it. parted is the other option; the sample command on the slide creates a primary partition with parted, using the ext4 format and spanning 1 MiB to 512 MiB. These are just examples, and we'll go further into fdisk and parted in a few slides. parted also has a graphical counterpart outside the command line that's very intuitive, point-and-click, and easy to use, so if you're not limited to a command-line-only Linux machine and have access to a graphical user interface, more often than not you'll end up using GParted, the graphical version of parted. So those are some sample commands for creating a partition.
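Here is a hedged sketch of those two creation paths (the device /dev/sdb and the sizes are placeholders; double-check the target disk before writing anything, because these operations are destructive):

    # interactive: walks you through creating, modifying, or deleting partitions
    sudo fdisk /dev/sdb

    # non-interactive: label the disk, then create a primary partition from 1MiB to 512MiB
    sudo parted /dev/sdb mklabel msdos
    sudo parted /dev/sdb mkpart primary ext4 1MiB 512MiB

Note that mkpart only records the layout; you still format the new partition afterwards with something like mkfs.ext4.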
Next, some sample commands for viewing your current partitions: lsblk lists the block devices and the partitions associated with them, and fdisk -l lists all of the partitions and their details. Those are ways to see your current partitions, how much storage they use, and what they're assigned to. Then there are sample commands for dealing with swap space: mkswap, swapon, and swapoff. swapon and swapoff are fairly self-explanatory from their names; mkswap makes a swap space at a particular location, initializing a partition (or file) for use as swap, and swapon activates a specific swap area.

All right, now we need to look at file systems from the perspective of partitioning. This is a review of what we covered earlier, but earlier we were talking about file systems in the context of the directory layout, where things are located and how they break down; now we're looking at them from the partitioning perspective. ext4 is the fourth extended file system, the most recent in the ext2/ext3 line, and it's the default in most Linux distributions. It's very reliable, it supports large files, and it provides journaling. It's one of the most widely used file systems in Linux, typically set as the default in distributions like Ubuntu and Debian, and Kali Linux, which is Debian-based, runs ext4 as well. It builds on the previous versions, ext2 and ext3, with enhanced reliability, speed, and support for large files, so ext4 is most likely what you'll encounter on desktop platforms if you're doing Linux system administration for desktops. The key features of ext4 (we've already talked about these, so I won't spend too long on this slide): it provides journaling, which protects data integrity by recording changes before they're applied to the main file system, making recovery easier; it supports large files; it has backward compatibility, meaning it can work with ext2 and ext3 and lets users mount those file systems without reformatting, which is very convenient; and it uses delayed allocation and extents, which improve disk I/O performance by reducing fragmentation and managing storage space better. So, as you'd expect, it should be the better version of ext2 and ext3, with better performance and better management of storage. In short: ext4 is the most commonly used file system, with journaling, large file support, backward compatibility with the earlier ext versions, and delayed allocation and extents for better storage management and better I/O performance (the input being your requests and calls into the system, the output being the system's responses to them). The typical use cases for ext4, as mentioned, are desktop systems, personal laptops, and general-purpose servers; it's the most common and one of the most versatile options, and it's a good choice for people who prioritize data integrity or need backward compatibility with older ext file systems.
So for personal computers, desktop systems, and general-purpose servers, ext4 is great, and as a beginning Linux administrator you're not going to be dealing with massive data centers; you'll be dealing with ext4 file systems more often than not. To create an ext4 file system you run sudo mkfs.ext4 (make file system, ext4 variant) against the target partition, and to check or repair one you run fsck.ext4 (file system check, ext4 variant) against that same partition, ideally while it's unmounted. Those are the ways to create an ext4 file system and to check or repair it.

Then we have XFS, a file system built for performance and scalability and a very popular choice in enterprise environments, which is where you get into data centers. It's a high-performance 64-bit file system designed for speed and scalability, used in enterprise environments, meaning lots of users, or scenarios where high data throughput is critical. It was originally developed by Silicon Graphics and has become a popular choice in Linux distros like CentOS and Red Hat Enterprise Linux, the enterprise choices for Linux. The notable characteristics of XFS are that it's fast and it scales, so you can have a large environment with a lot of users or a lot of machines working with its data; high data throughput means it serves and processes data very quickly compared to ext4. It's designed for big environments. The key features: efficient metadata management, meaning it handles metadata-heavy workloads, frequent file creation, deletion, and renaming. That doesn't seem like a big deal until you have a thousand users creating, renaming, and deleting files, at which point that work needs to happen quickly. Journaling is there too, the same concept you'll keep hearing about; nearly every file system you deal with will journal so that a crash doesn't cost you data. It's scalable to very large file systems, which makes it suitable for systems handling large data sets and high-performance workloads, a good fit when you have a lot of employees, computers, or servers. And it can resize file systems without requiring them to be unmounted, providing flexibility in storage management, which matters in data centers where you can't just unmount a file system and cut everyone off from the data while you work on it.
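Here is a concrete sketch of those ext4 commands (the partition /dev/sdb1 is a placeholder; mkfs destroys whatever is on the target, so triple-check the device first):

    sudo mkfs.ext4 /dev/sdb1    # create an ext4 file system on the partition
    sudo fsck.ext4 /dev/sdb1    # check/repair it; run this while the file system is unmounted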
That's the dynamic allocation piece: you can resize the file system, the allocation for a given machine or data center, while it stays live, so changes can be made in a live environment and your users can keep using the system while you, as the administrator, do what you need to do (I don't want to keep repeating myself; I think you get it). The typical use cases are enterprise servers, data centers, and environments with heavy I/O performance and scalability demands, such as databases, media servers, and scientific computing: big environments that need a lot of processing power, and high-performance applications where large files must be accessed and managed efficiently, like video production or big data analytics. Those are the environments that make use of XFS. The example commands: sudo mkfs.xfs (make file system, XFS variant) against the target partition creates one; sudo xfs_growfs against the mount point resizes it, which requires the file system to be mounted and is supported only for increasing the size, so keep in mind you can't shrink it, only grow it; and xfs_repair, run against the unmounted device, checks and repairs it. Those are the basic commands for using and interacting with XFS.

And of course we have the swap file system. Swap space, as we've already discussed, is a dedicated partition or file used to extend memory. It's unique compared to everything else we've talked about because it isn't a traditional file system: it doesn't store files or long-lived data, it's an extension that supplements the physical memory, our RAM, holding live, transient data. It lets the system avoid out-of-memory problems: memory-overflow management kicks in when the system RAM is fully utilized, and it supports hibernation, all of which we've covered. One concept worth noting here is the swap partition versus the swap file. A swap partition is created as a separate partition, which can offer performance advantages because of its dedicated space on the disk. A swap file, which is what a lot of modern systems use, lives inside an existing partition; it's more flexible and can be resized or removed quickly without dealing with partitioning commands. Previously you had to create a dedicated swap partition with fdisk, parted, or similar tools, whereas with a swap file you can resize it or remove it without mounting, unmounting, or otherwise touching the partition layout. So more modern systems tend to use a swap file, while older, legacy systems tend to have swap partitions.
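A hedged sketch of those XFS commands (the partition /dev/sdc1 and the mount point /data are placeholders):

    sudo mkfs.xfs /dev/sdc1      # create an XFS file system on the partition
    sudo xfs_growfs /data        # grow a mounted XFS file system to fill its underlying partition
    sudo xfs_repair /dev/sdc1    # check/repair; the file system must be unmounted first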
The size recommendation for your swap file or swap partition really comes down to the amount of RAM. If the system doesn't have much RAM, it benefits from more swap space, roughly twice the RAM size (I won't pin down exactly how much counts as "not much"). If you have ample RAM, say more than 8 GB, you don't need much swap: 2 GB can be enough, and in some cases none at all if the memory is genuinely plentiful. It simply depends on the physical RAM. Unless you're dealing with a really old machine with very little RAM, you're unlikely to run across a server that's short on memory, so this may feel a bit redundant, but it's worth keeping in mind: you may inherit a legacy computer, or a company may be growing faster than its hardware, and you'll need swap space or a swap file to keep things from crashing until they can upgrade to better RAM or bigger machines. Here are the example commands. To create and activate a swap partition, you make the swap space first and then turn it on: mkswap creates the swap area and swapon activates it. Creating and activating a swap file takes a bit more: fallocate -l (that flag is a lowercase L, not a one) with a size of 2G and the path to the swap file allocates the space; then you change the permissions on the file to what the system, that is, the root user, requires; then mkswap against the swap file path creates the swap area; and then you enable it with swapon. So: allocate space for the swap file, give it the permissions it needs, make it a swap area, and turn it on so that swap file can be used. To look at swap usage you can run swapon --show or free -h, and those give you the swap usage figures you need.

All right, now let's talk about the various commands that let us do everything we just covered. fdisk is one of the more common ones: a command-line utility used to create, modify, and delete partitions on a disk. It's commonly used for managing the Master Boot Record (MBR), the traditional scheme that allows four primary partitions, but it can also handle GPT, the GUID Partition Table, which allows more primary partitions. So fdisk is the most commonly used command-line utility for creating, modifying, and deleting partitions on a disk. Its key functions include things like the following, and this is in no way meant to be the full list of commands you can run; if you run man fdisk you'll get the complete list.
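Here is the swap-file sequence from that slide as a concrete sketch (the path /swapfile and the 2G size are conventional examples, not requirements); you can then verify it with the swapon --show or free -h commands mentioned above:

    sudo fallocate -l 2G /swapfile   # reserve 2 GB of space for the swap file
    sudo chmod 600 /swapfile         # restrict access to root; swap contents shouldn't be world-readable
    sudo mkswap /swapfile            # initialize the file as swap space
    sudo swapon /swapfile            # activate it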
This is not meant to be all-inclusive, but these are some of the common ones. sudo fdisk -l lists all the available partitions and basic disk information about them. Running fdisk against a disk, sudo fdisk followed by the path to the target disk, opens an interactive mode where you can add, delete, or edit partitions, an interactive way of working instead of running individual commands for each change. After you have made your modifications in interactive mode, you type w and press Enter, and that writes all of the changes you just made to the specific disk you are working with. To create a new partition you use the n option inside fdisk, choose the partition type, assign the size you want (the actual file system, ext4 for example, is created afterward with a mkfs command), and then press w to write the changes. This is an overview; we will do it in detail when we reach the command line portion, and you can also ask Gemini or ChatGPT to walk you through creating a partition and the options for mounting and deleting it, but we will go through those examples as well. A sketch of an fdisk session follows below.
Then we have parted, a more versatile command line tool than fdisk: it supports both the MBR and GPT schemes and is well suited to resizing, copying, and modifying partitions without losing data. You will probably run across both tools; their command structures are fairly similar and they let you do mostly the same things, so you mainly need to know that both exist. Like fdisk, parted has an interactive mode: run sudo parted against the disk and you can create, delete, and resize partitions with commands like mkpart (make part), rm (remove), and resizepart, which are quite intuitive, and that is one of its big strengths. When it comes to the GPT partitioning table and large drives (2 TB is still a very large drive in a world of text documents, configuration files, and logs rather than media), parted is often preferred because it natively supports GPT, which is what you use when a disk needs more than four primary partitions or goes beyond what MBR handles. Smaller disks and file systems can stay on MBR; disks well past 2 TB should be GPT, and parted ends up being the first choice for that.
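A sketch of the fdisk workflow above, assuming a hypothetical target disk /dev/sdb; the single-letter commands in the comment are what you type inside interactive mode:

sudo fdisk -l        # list all disks and partitions with basic information
sudo fdisk /dev/sdb  # open the target disk in interactive mode
# inside fdisk: n = new partition, d = delete, p = print the table,
#               t = change the partition type, w = write changes and exit, q = quit without saving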
An example command to create a new partition with parted: run parted against the disk (the physical device you want the partition stored on), use mkpart, and make a primary partition with the ext4 format and a size of 1000 MB, which works out to roughly a gigabyte. So you are creating an approximately 1 GB primary partition, formatted for ext4, on the specified disk at that location. From there you can run a disk utility or any of several command line tools to see which partitions are actually present on your device, and then pick the correct path as you create new partitions, extensions, logical partitions, and so on.
Then we have GParted, which stands for the GNOME Partition Editor. It is basically the graphical front end for parted and is available on most Linux distributions that actually have a GUI, a graphical user interface; on a command line only system it is not an option, but on the desktop versions, anything with a graphical interface, there is a GParted version of parted available, and you can do everything you would do with parted using an intuitive graphical interface. It is very user friendly: it provides a visual representation of partitions and unallocated space, allows easy resizing, creating, and deleting of partitions, and is ideal for users who prefer graphical over command line management and need a quick way to modify partitions. Since the actual work will not be on the command line, you basically just need to know how to install and launch it: sudo apt install gparted on Debian and Ubuntu (apt is a package manager we will cover shortly), sudo dnf install gparted on Fedora, and then sudo gparted to open the graphical interface. The installation and launch happen on the command line, and the rest is done through the graphical interface. Examples of both tools are sketched below.
Okay, so now we need to list the block devices. We have created the devices, and now we want to display information about them, the disks and the partitions attached to those disks, in a reasonably user friendly way, which is why the output comes in the tree format you have already seen. lsblk lists the block devices and is a great way to get a quick overview of the storage devices on a system. Its key features are the hierarchical display, the tree structure we referred to earlier that shows the relationship between each main device and its partitions, and the essential information: device name, size, type, and mount points. You will not get a lot of detail out of it, but it is a very common tool for seeing what the block devices are, essentially the physical devices and the partitions attached to them, laid out in a nice tree. Some example commands: lsblk by itself displays a simple tree view of all the devices, the -f option includes file system information such as the type and UUID, and -d shows only the main devices, excluding the partitions.
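A sketch of the parted and GParted commands described above; /dev/sdb, the resulting /dev/sdb1, and the sizes are only examples:

sudo parted /dev/sdb mkpart primary ext4 1MiB 1000MB  # create a roughly 1 GB primary partition non-interactively
sudo mkfs.ext4 /dev/sdb1                              # mkpart only creates the partition; format it afterward
sudo parted /dev/sdb                                  # or open interactive mode and use mkpart, rm, resizepart, print

sudo apt install gparted   # install the graphical front end on Debian/Ubuntu
sudo dnf install gparted   # install it on Fedora
sudo gparted               # launch the GUI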
That -d view is handy in a very large environment where you do not want to see every partition, just the main devices. Then we have df, which is disk free: it reports what is used and what is free, showing the free space for each mounted file system along with the space used on it. If you want to see disk space usage specifically per file system, df is the tool. It displays total and available space for each file system; df -h displays sizes in human readable units such as megabytes and gigabytes, which makes the information easier to read, and df -T shows the type of each file system along with the usage statistics. As example commands, df -h lists everything in human readable form, and df -h /home shows the disk usage only for the home directory. For all of these tools you can find the full set of options in the manual pages; we are just going through sample commands, and when we get to the practical section and start using the command line we will go through this in depth, so treat this as a quick overview to build some familiarity before the greater detail later.
du is disk usage, and it is used to check the space consumed by specific files and directories, giving a detailed view of which directories take up the most space, so it works at the file and directory level rather than the file system level. Running du -sh on a specific directory provides a summary of its total size; du by itself shows the usage for each directory and subdirectory; and du with a threshold of 100M displays only the files and directories taking more than 100 megabytes. For example, du -sh on /var/log gives the total size of the /var/log directory, while du -ah /home provides a detailed recursive view of home usage, listing individual files and directories, which can be massive. Do not run that against a very large directory just to read it in the terminal: the output can be enormous, and the terminal limits how much it actually shows you, so even scrolling all the way up you may not see everything. If anything, pipe or redirect the output of the command into a separate file so you can review it outside the terminal. Examples are sketched below.
All right, now we can run through the package managers and the commands associated with them. As an introduction: package managers are tools for managing software on Linux, a tool for managing other tools, helping you install, update, and remove them. They automate the process of installing, updating, and removing software, and they make it relatively easy (or at least easier, once you know the command line) to maintain a consistent and up to date system.
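Before turning to the package managers, a quick sketch of the listing and usage commands above; the paths are only examples:

lsblk                          # tree view of disks and their partitions
lsblk -f                       # include file system type and UUID
lsblk -d                       # show only the main devices, no partitions

df -h                          # free and used space per mounted file system, human readable
df -T                          # include the file system type
df -h /home                    # usage for the file system holding /home

du -sh /var/log                # total size of /var/log
du -h --threshold=100M /var    # only entries larger than 100 MB (GNU du)
du -ah /home > du_report.txt   # huge recursive listing redirected into a file for later review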
They also handle dependencies, meaning the things a piece of software requires in order to run. Each piece of software may need a certain set of libraries and adjacent tools, and the package manager handles those dependencies, making sure any required libraries and tools are installed alongside whatever software you are installing. So when you run sudo apt install such-and-such, it does not just install the tool; it installs all of the dependencies and libraries that tool needs to run properly. That saves you a lot of manual dependency resolution and potential conflicts: you do not have to hunt down all the different dependencies yourself, which gets very tedious for software that requires a lot of separate elements and adjacent tools. You run one command and it fetches all the dependencies and libraries you need, which is very useful.
We have apt, the one I use most because I usually work with Ubuntu and Debian versions of Linux. It stands for the Advanced Package Tool and it is very common. To install something: sudo apt install followed by the package name. To update: sudo apt update and sudo apt upgrade, and you can technically run both at once with the double ampersand. We have not actually covered operators yet in this Linux fundamentals series, but two ampersands mean you are running both commands in sequence: the first runs, then the second, so it updates the list of software and then upgrades anything that needs it. To remove something: sudo apt remove and then the package name. Very intuitive, and most package managers follow a similar pattern. apt is known for its ease of use, robustness, and extensive repository of software packages, and it is the most commonly used because it is associated with Ubuntu and Debian, the popular desktop versions of Linux; as you can tell, the commands themselves are very intuitive and user friendly.
Then we have yum, the Yellowdog Updater, Modified, and dnf, the Dandified version of yum. These are for the Red Hat family, so CentOS and Fedora are the distributions that would use them, and they work much like apt: sudo yum install or sudo dnf install plus the package name, and similarly for update and remove, so you just use yum or dnf instead of apt depending on which version of Linux you are running. Typically, if you try sudo apt install on a system that uses a different manager, the terminal will tell you that apt is not what this system uses, that it uses yum or dnf instead, and it will suggest the corrected command to run, for example sudo yum install and the package name. So it is also very intuitive and very user friendly.
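A sketch of the apt and dnf commands side by side; nginx is just a stand-in package name:

sudo apt update && sudo apt upgrade  # refresh package lists, then upgrade installed packages (Debian/Ubuntu)
sudo apt install nginx               # install a package plus its dependencies
sudo apt remove nginx                # remove it

sudo dnf upgrade                     # refresh and upgrade on Fedora and other Red Hat family systems
sudo dnf install nginx
sudo dnf remove nginx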
yum has been replaced by dnf in newer distros like Fedora; dnf offers improved performance, better dependency resolution, and a more modern design, but essentially they operate on the same distributions of Linux. And finally we have pacman, my favorite name: it is the package manager used by Arch Linux, and its commands look a little different from apt or yum. sudo pacman -S followed by the package name is the installation (that is a capital S, by the way), sudo pacman -Syu runs the software updates, and sudo pacman -R followed by the package name removes the software. It is known for simplicity, speed, and flexibility, which makes it a favorite among Arch Linux users, and it handles binary packages and source packages with ease (as, in my experience, the others do too), but this one is specific to the Arch Linux distribution, so you need to be aware of it. The name Pac-Man here is not the video game; it is the package manager for Arch Linux, and those are its commands for installing, updating, and removing software, sketched below.
Finally for this section we have updating, removing, and troubleshooting packages. The reasons for updating packages and running system upgrades are a few-fold. The first is security: patches need to be applied to packages as vulnerabilities are discovered. That is the whole point of ethical hacking and pentesting, trying to break a package in order to find its vulnerabilities; sometimes vulnerability scanners miss things, and sometimes an announcement goes out saying that a specific package you are using needs to be patched. This is not uncommon, it happens very frequently, and running an update on your package manager should be habitual: maybe daily, every time you log into your Linux system, or at least every time you are about to install or use a specific package, run a quick update first so you have the most recent version with all of its patches. Updated versions also bring new security features and updated dependencies, and those all fall under the security bracket. Then there is stability: bugs get fixed in newer versions of these packages, so keep them updated. A lot of this is common sense; the takeaway is that updating packages and the system you are running needs to be a regular practice, checking for system and package updates regularly so that compatibility, performance, and functionality stay solid, there are no lingering bugs or security issues, and the user experience stays good. And of course, if you are in a regulated environment, you need to be compliant.
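The pacman equivalents, again with a hypothetical package name:

sudo pacman -S nginx   # install a package on Arch Linux
sudo pacman -Syu       # sync the package databases and upgrade the whole system
sudo pacman -R nginx   # remove a package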
If something touches customer data and is not updated, and a known security vulnerability leads to a data leak, the company is exposed to a lawsuit, and you as the Linux administrator are likely to be fired, because this is something so simple to do and so powerful. Keep your packages and your system up to date: it is not complicated, it is a couple of commands, it is very useful, and it keeps you compliant with whatever regulatory environment you may actually be in.
These are the upgrade commands: sudo apt upgrade updates all of the installed packages on a system that runs apt, and sudo dnf upgrade does the same for the packages installed on a dnf system. If you do not want to upgrade packages individually as you use them, just run this routine upgrade with apt, dnf, pacman, or whichever manager you have, and everything on your system and in your environment stays fully upgraded.
Removing and cleaning up packages is probably just as important as running the updates. The main reason is to free up space: there are orphaned packages and unused dependencies that nothing needs anymore, and they sit there taking up storage. So whenever you run an upgrade, also run the autoremove step to clear out whatever has been orphaned. A lot of the time the system will prompt you with an alert, something like "are you sure you want to remove this, it is associated with that," and you can say yes or no, but for the most part it just gets rid of what is orphaned, legacy, grandfathered, or unused. Treat it as part of the same strategy as upgrading and updating everything.
Thankfully, troubleshooting package issues does not usually require granular detective work either; it is a matter of running a short series of commands. Take a locked database: if the package manager is locked on a Debian based system such as Ubuntu, it means another package management process is running or did not terminate correctly. The fix is to remove the lock: sudo rm followed by the path of the lock file that apt uses to lock the package database while it is running (the terminal notification will tell you exactly which lock is being held and why the operation cannot proceed), and once that file is removed the package manager works again. The other common problems are broken dependencies and repository issues, which the next set of commands handles.
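As a sketch of the cleanup and lock-removal steps above: the lock paths shown are the usual ones on Debian based systems, but they are assumptions here, so check the exact path named in your own error message before removing anything:

sudo apt autoremove                   # remove orphaned packages and unused dependencies
sudo dnf autoremove                   # same idea on dnf systems

# if apt reports that it cannot acquire a lock:
sudo rm /var/lib/apt/lists/lock       # lock held while the package lists are being updated
sudo rm /var/lib/dpkg/lock-frontend   # lock held by the apt/dpkg front end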
A broken dependency covers anything that is outdated, not the correct version, missing a piece, not installed properly, or otherwise in conflict. For those you can run the fix-broken command: sudo apt --fix-broken install tells apt to fix broken dependencies by automatically installing or removing whatever packages are necessary. On the Red Hat side, dnf check looks for dependency issues and reports them, and dnf distro-sync synchronizes installed packages to the versions in their repositories to make sure the dependency issues are resolved. Again, this is what is really great about these package managers: you do not have to do the manual hunting of "oh no, I have to go fix this." A lot of it is automated; you run a command and it automatically installs or removes what is needed. You just need to know that you can do this and what the command is, and by the end of this whole thing you will have a dictionary of commands for taking care of a variety of situations, which is extremely useful.
Then we have repository issues. Repositories can be unavailable or misconfigured, causing problems with package management. To resolve this you might need to re-enable or update your repository sources, which live in the sources lists: on Debian based systems, /etc/apt/sources.list and the /etc/apt/sources.list.d directory are the places to look, so view them and check whether a source, a link, or an entry needs updating or is simply wrong. One of the more powerful tricks is to copy the contents, paste them into Gemini or ChatGPT, and ask whether anything looks wrong or missing, and it will tell you. On Red Hat systems, check the repository files in the /etc/yum.repos.d directory to make sure all the sources for your repos are good. However, if you do not want to inspect all of that by hand, you can just run the refresh commands, because they are automated and they make sure everything is in order: apt update for apt, dnf makecache or dnf update for dnf, yum makecache or yum updateinfo for yum, and the pacman equivalent on Arch. Essentially they confirm that the URLs in your repository files are correct and accessible and that everything on your sources list is up to date and current.
So again, you do not have to do a lot of this manually, because these package managers automate much of the process: sources up to date, dependencies up to date, versions current, and any bugs or security vulnerabilities patched, all taken care of by running regular updates and regular removals. Make sure the removals and the updates are done in tandem, so that anything orphaned or out of date gets cleared and everything that needs updating is updated, starting with an updated package list.
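A sketch of the repair and refresh commands above:

sudo apt --fix-broken install  # let apt repair broken dependencies automatically
sudo dnf check                 # report dependency problems on dnf systems
sudo dnf distro-sync           # sync installed packages to the repository versions

sudo apt update                # refresh the apt package lists and verify the sources
sudo dnf makecache             # rebuild the dnf metadata cache
sudo yum makecache             # same for yum
sudo pacman -Sy                # refresh pacman's package databases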
The commands above do all of that for you, but sometimes you might need to restart your shell, or even the system. Usually just running the update command is enough, but restarting the shell can do wonders when the update commands do not seem to take effect, and if that fails, restarting the system guarantees that everything, including your package lists, is updated and current.
All right, now that you have Linux installed, hopefully you took some time to set up an environment where you can actually run these commands: your own version on a USB stick so you can boot it live, a full installation you boot as your operating system, a virtual machine, TryHackMe's Linux based virtual machines, or a dedicated Linux computer. If you have macOS you can run most of these commands as well, and anything missing can usually be installed through a package manager. Either way, I would say do it in a Linux environment if you can: download Kali Linux or a version of Ubuntu, whatever it may be, so you have no problems and can rest assured that every command coming up will run. What we are going to do is an overview, because we are technically still in the lecture portion of this training series: I am going to show you what these commands are, tell you what they do and the purposes they serve, and then in the second portion, the practical exercises, you will get plenty of chances to run all of them in a live environment while you watch me run them, look at the output that comes back, and combine a variety of them. Right now we are covering the basics of the command line and the basics of scripting, so you have an idea of the syntax and general structure of it all; then we will go through the other chapters, and when we come back at chapter 12 we will actually run all of the commands.
The essential command line tools and navigation come first. All of these commands yield some kind of information about the system you are logged into, and by the way, all of them are case sensitive: whoami typed with a capital W is not going to work. whoami gets you the currently logged in username. uname displays detailed information about the Linux machine you are on, including the hardware, the name and version of the Linux installation, and the OS kernel version; there are various options and flags you can attach to uname for different results, but by itself it is about the detailed information of the machine, the hardware, the name, and the OS kernel. Then there is hostname.
hostname gives you the machine's host name, the VPS host name for example, and other related information, and depending on which flag you run with it you get different output about that host: with no option it just prints the host name, the -i flag (lowercase i) checks the server's IP address, lowercase -a prints the host name alias, and uppercase -A gets the system's fully qualified domain name, also known as the FQDN. There is a difference between whoami and hostname: whoami is just the user, whoever you are as the user, and there can be multiple users logged in on the same host, which is the same machine. So whoami is specific to the user that is logged in, uname is specific to the hardware of the machine, its name, and the OS kernel, and hostname is relevant to the host you are logged in on, typically that machine, along with its IP address, the host name alias if there is one, and the fully qualified domain name. Another way to look at the username and the hostname: a username is assigned to a specific user on a computer network, while a hostname is a label assigned to the device on the network, essentially identifying the computer itself rather than the individual user; things like the IP address and the MAC address come under the host name data rather than under the username, which belongs to the individual person. So the username is the login name, the hostname is the name of the computer on the network, and, not to be confused with either, uname just brings back various data points about the computer itself, such as the hardware and the OS version.
All right, on to navigation, meaning navigating the computer itself. The first thing I always run is pwd, which prints the working directory: the directory you are currently logged into or accessing, printed as a path. cd changes the directory, so cd /home/user/Documents changes you into the Documents directory, the Documents folder, and when you run pwd from inside that folder it prints the full path of wherever you are onto your screen; that is the printing of the working directory. Once you are inside a directory you can list its contents with ls. ls by itself just lists whatever is visible, not the hidden files; ls -l prints the detailed view of those files; ls -a shows the hidden files; and you can combine the two flags as ls -la, so instead of -a or -l separately you get the detailed view of everything along with all the hidden files. So: print the working directory, change the directory, and list the contents of any given directory, sketched below.
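A sketch of the information and navigation commands above; the path is only an example:

whoami                   # current logged in user
uname -a                 # kernel, host name, architecture, and related details
hostname                 # the machine's host name
hostname -i              # its IP address
hostname -A              # fully qualified domain name(s)

pwd                      # print the current working directory
cd /home/user/Documents  # change into a directory
ls -la                   # detailed listing, including hidden files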
Then there is interacting with individual files and directories, namely creating, copying, and moving them, and these are the commands for it. touch creates a file: if you want to create any kind of file you run touch and then the name of the file. You can give it a full path if you want it inside a particular directory, but typically you are already inside the directory, say the Documents folder, and you just run touch file.txt and it creates file.txt; if there are options you want to attach you can do that too, and we will get into those in the practical section. touch is designed to create a new empty file in any given directory, either the one you are in or the one you give the full path to. mkdir, make directory, creates one or more directories: mkdir, any options you want, and then the name of directory one, directory two, and so on, which again can be full paths or relative to whatever folder you are in. The difference is that mkdir creates a folder that can house other folders and files, whereas touch creates a file, and a file created by touch is not a container for other files; it is just a file. cp is copying: it can copy a file or a directory, and when it copies a directory it also copies everything inside that folder. You run cp, then the source, the thing you want to copy, and then the destination, the location you want it to go to. If the copy is going to live inside the same folder you are in, you need to give it a different name, so copying file1 within the same directory means the second argument you supply has to be something like file2; or you can say take this file in my current directory, copy it, and put it in this whole other location, and you give the full path as the destination. mv, move, is another one of those tools, but instead of copying it takes the original file, transfers it out of wherever it sits, and puts it in a different location; you can also use it for renaming, so mv with an old name and a new name simply renames the file without copying or duplicating anything. Keep that in mind: if you move something you are not duplicating it, you are transferring the exact file, or you are just renaming it in place. Then there is rm, the removal command.
rm deletes an entire file, and rm -r deletes a directory along with everything inside it. A related command is rmdir, which removes a directory, though rmdir only works on an empty directory, so for a directory that still has contents you use rm -r. Either way, keep in mind that when you remove a directory you are also removing everything housed inside it, so if you need any of that content, make copies or move it somewhere else before you delete the directory. file gets you the file type of whatever name you give it: you run file with either the path to the file or the name of a file in your current folder, and it tells you this is a text file, or a Python file, or a CSV (comma separated values) file, and so on. Believe it or not this is very useful, especially when you get into scripting, because there are certain interactions you cannot have with certain file types; you need to know what type of file you are dealing with, and if it is the type you want to interact with, you then use the series of commands that can work with that file. zip compresses files: it creates a zip archive from one or more files or a directory that you specify. You run zip, choose whatever options you want, then give the name of the zip file you want to create, and then the files or folders to be added; you can list a series of files, and they all get compressed into that one zip archive, that one compressed folder. unzip extracts the contents of a zip archive, a pretty simple concept: choose any options you want and then give it the zip file name. It typically extracts into whatever location it is already in, so make sure you are unzipping where you actually want the contents; a zipped file can hold a lot, and if you unzip in the wrong place you will have to move everything manually with mv. If you want to unzip into a specific directory, move the zip file there first and then unzip it. tar bundles multiple files and directories into an archive, but it does not compress them: you run tar with the archive name and the files or directories, and they get bundled rather than compressed, which makes it good for archiving, for creating archives you can work with more directly than a compressed zip. A sketch of these file commands follows below.
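A sketch of these file and directory commands; every name and path is hypothetical:

touch notes.txt                   # create an empty file
mkdir projects reports            # create two directories
cp notes.txt notes_backup.txt     # copy a file within the current directory
cp -r projects /tmp/              # copy a directory and its contents elsewhere
mv notes_backup.txt reports/      # move a file into another directory
mv notes.txt todo.txt             # rename a file in place
file todo.txt                     # report the file type
zip archive.zip todo.txt reports/notes_backup.txt  # compress files into one zip archive
unzip -o archive.zip              # extract into the current directory (-o overwrites existing copies)
rm todo.txt                       # delete a file
rm -r reports                     # delete a directory and everything inside it
rmdir projects                    # delete a directory only if it is empty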
When you zip something it gets compressed and you would have to unzip it to be able to interact with it; tar just bundles multiple files into one location, known as an archive, and you can interact with its contents without having to decompress or unzip anything. Next are some key operators to keep in mind, and this part is genuinely important because you will use them very frequently. The first is the greater-than symbol, >, which sends the output of any command into a file. For example, echo "file contents" by itself just prints the text onto the screen, but echo "file contents" > newfile takes whatever comes out of that command (or any command, really) and puts it inside the file. Be careful, though: > overwrites anything currently inside the file, so if you want to append, to add data to an existing file, you use the double greater-than, >>, instead, because the single one literally overwrites everything and you lose whatever was previously in the file. More often than not you will run a command that analyzes something (in security we do this a lot, with things like top or tcpdump), and because the output is going to be large, instead of having it displayed on the screen you send it into a file so you can review and manipulate it later. When appending, you add the new output to the file and keep what was there, which matters for scripting: usually you do not want a script to overwrite your output file every time it runs, you want each run to add its contents. So > overwrites and creates the file if needed, and >> appends while keeping everything that was previously inside. Then there is the ampersand, &, placed at the end of a command, especially a big one that takes a long time to process, so that the command is backgrounded; while it runs in the background you can keep using your CLI instead of waiting for it to finish. For example, if we run something large and output its results to a file, we put an ampersand at the end, it does whatever it needs to do, it creates the output file for us, and we continue running other commands while it completes. The double ampersand, &&, is different: it combines multiple commands and runs them in sequence on a single line.
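A sketch of tar and the operators just described; the file names are hypothetical, and tcpdump is only an illustration of a long-running command:

tar -cvf logs.tar /var/log/myapp       # bundle a directory into an uncompressed archive
tar -xvf logs.tar                      # unpack the archive again

echo "first line"  > report.txt        # > creates or overwrites the file
echo "second line" >> report.txt       # >> appends, keeping what was already there
sudo tcpdump -c 100 -w capture.pcap &  # & runs a long command in the background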
For example, touch newfile && echo "new file created" && echo "file contents" >> newfile runs three commands: you create a file, you print a notification onto the screen that the file was created, and you add whatever contents you want to the file you just created. This is very useful when you know what you are doing and want to get a lot done without running a command, pressing Enter, waiting, and repeating: you combine a series of commands, press Enter once, the whole thing runs, and if you put notifications between the steps you know exactly where you are in the process, and it finishes quickly. So the ampersand by itself at the end of a command runs that command in the background, and when you double it up you can string together as many commands as you want; although if you find yourself combining a whole pile of commands, at that point you might as well create a script, which we will get into in a bit. Those are the key operators so far.
Once you have created files, or you have a bunch of files you are trying to view, there are a few different ways to do it. cat, which stands for concatenate, prints the contents of a file straight onto the terminal. If it is a massive file there will be a very large output, and the terminal limits the number of lines it keeps, so you may miss everything at the very top of the file; that usually happens with log files. For a very large file, instead of using cat and displaying everything on the terminal, use less or more to view it page by page: less and then the file name, then the arrow keys on your keyboard to move to the next page and the next. head and tail display the first or last ten lines, and you can choose how many: tail -n 10 displays the last ten lines of the file, tail -n 5 the last five. This is very useful when you know where the data you want lives: if you want the headers of a file you look at the top ten lines, and if you want the most recent data, which is typically at the bottom of any given log file, you take the last ten or five lines and look at it directly instead of scrolling back and forth to find it. Then there is find, which is very useful when you know roughly what you are looking for but not where it is.
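A sketch of command chaining and the file-viewing tools, using a hypothetical log path:

touch newfile && echo "created" && echo "first entry" >> newfile  # three commands chained with &&

cat newfile                  # print the whole file
less /var/log/syslog         # page through a large file (arrows or space to move, q to quit)
head -n 10 /var/log/syslog   # first ten lines
tail -n 5 /var/log/syslog    # last five lines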
find needs a starting path to search from: a bare forward slash represents the root, meaning you want to search every folder under the root, basically the entire machine, or you can give something narrower, for example the path to your Documents directory, and tell it to look only inside Documents. Then, with -name, you tell it to find something by its name and supply the name you are looking for. We are going to do a lot of this, because it is genuinely useful. locate does essentially the same thing and prints the location onto the screen: locate and then a keyword. Sometimes, depending on where you are and what type of machine you are on, locate may not give you results where find does, or find may come up empty where locate succeeds; they essentially do the same job, but find needs a location to search and can also look for a file type, a permission, and so on, and it will display all of those for you, whereas locate just looks the keyword up and tells you where the path is.
Then we have our common text editors: Nano and Vim are the most common ones available on Linux. Nano is very user friendly; Vim is also user friendly in my opinion, just not as straightforward to deal with. For example, nano file.txt opens file.txt, and if there is no file.txt it creates it, which is similar to using the touch command, except now you can write inside the file. If the file is empty you can start writing; if there is already content, opening it with nano lets you interact with that content, scrolling back and forth through even a massive file instead of printing it all onto the display with cat. Once you are done, Ctrl+O saves any changes you have made and Ctrl+X exits; Ctrl+K cuts the current line, and Ctrl+W is the search ("where is") for finding certain text. There are a lot of options; those are just some of the common ones. Vim is the other text editor, and it is said to have a steep learning curve, though in my opinion it does not. It can be opened just to read something, it can be put into insert mode or visual mode, it has fairly deep options, and there are plenty of help manuals for all of it, so you will not be left to the wolves, so to speak. i enters insert mode (so vim, then i to start inserting), and once you are done, :w writes (saves), :q quits, :wq saves and quits, and so on; you can search for any given pattern, and dd deletes a line. These are just common options, nothing you are expected to memorize right now.
Again, we are just going to go through the interactions later; these are the text editors you should be aware of, so that when we reach the practical section we do not have to stop for an overview of what a text editor is and what it does, we just start using the tools, and that will reinforce all of the material we are covering right now.
All right, now we move to another level of dealing with files: manipulating them and searching through them. grep is another tool you will use very frequently, because it is powerful and it searches for things. It stands for global regular expression print; you do not have to remember that, just know that grep searches. You can search for text within files: typically a pattern or a word, or even an actual regular expression, and then you give it the location you want to search within. If you want the search to be case insensitive you use -i, so it matches without worrying about capital or lowercase letters. -r makes it a recursive search inside a directory, so if you point it at a directory rather than a single file it will search through everything in it. -v inverts the match, showing the lines that do not match the pattern; say there is a lot of noise all tied to one event ID and you want to see everything else, you use -v and it shows every line other than that event ID. grep is a very useful tool, and you can also combine commands by piping the output of another command into grep so that it searches that output for you; for example, you can cat a file and then use grep to search through its data. That is a very simple example, but you will do it a lot.
sort arranges contents in a specific order, and you can combine grep with sort: grep for something, and if you know there will be a lot of data, sort it before it is output onto the screen. sort by itself sorts alphabetically from A to Z, ascending order; sort -r reverses it, Z to A, descending; and sort -n sorts numerically, which you can also reverse with -r. It is very flexible, with a lot of different ways to interact with it. Then you can use cut to take certain pieces of the data, certain columns, from a given file or result: grep searches for the data, and instead of keeping every full line of the result you take just the first column or the first two columns by supplying the right options, cutting that data out of the results for yourself. Examples are sketched below.
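A sketch of grep, sort, and cut together; the log file and CSV are hypothetical:

grep -i "error" /var/log/syslog       # case insensitive search in one file
grep -r "timeout" /etc                # recursive search through a directory
grep -v "DEBUG" app.log               # show the lines that do NOT match
cat app.log | grep "ERROR" | sort -r  # pipe one command into another, sorted in reverse order
cut -d',' -f1,3 users.csv             # keep only columns 1 and 3 of a comma separated file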
Then there is sed, the stream editor. A data stream is just another way of looking at the text inside a file, and sometimes the text you are looking at is not plain text the way it appears on the screen; it could be formatted or, better said, encoded in some way, Base64 for example, and you only want to display or change the parts that match something. sed edits the stream for output: a substitution takes the text it finds, say the pattern old, and replaces it with new; the first part is the data to find and the second is what it is replaced with, and it does that on every line where it finds that specific pattern, so wherever old appears on a line it is replaced with new on that line. If you want to actually transform the file itself rather than just the output, you use the -i flag: it still finds what you asked for, but it does in-place editing, changing the contents within the file for you. You may not necessarily want that; often you just want to search and display the results, so use -i cautiously. That is what the stream editor does for us.
If we want to extract field based data, files that have fields with values inside them, typically something like a CSV file where commas separate columns that would become a spreadsheet, that is the kind of data you use awk for. The basic usage takes essentially the first column: it prints the first column of the file, which is what I mean by field based, taking all of the data inside that first column of file.txt and printing it. You can also give it a condition to print on: if the third column is greater than 50, print columns one and three, where column one might be the name of the field or line item and column three the value associated with it, so for every line where the value in column 3 is greater than 50 it prints the name and the value onto your screen. You are effectively giving it conditional statements, which is almost like writing a script; really it is a small script, a conditional statement, and that is the perfect segue into shell scripting.
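Before moving into scripting, a sketch of sed and awk as described above; file.txt and sales.csv are hypothetical:

sed 's/old/new/' file.txt                      # replace the first 'old' on each line, printed to the screen only
sed -i 's/old/new/g' file.txt                  # -i edits the file in place; g replaces every occurrence on a line
awk '{print $1}' file.txt                      # print the first column (field) of every line
awk -F',' '$3 > 50 {print $1, $3}' sales.csv   # comma separated fields: print columns 1 and 3 where column 3 > 50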
Setting up a simple shell script is something we have already touched on, so I will not spend too much time here. First and foremost you create the script: touch script.sh would be a basic command to create it. Then you add a shebang line, which we have already talked about, at the very top: the first line of that script.sh file is the shebang, and it has to be at the top of every single script in order for the script to actually run as a script, because it names the interpreter. Otherwise it is just a file with a bunch of lines of supposed code and no interpreter, so it will not be executed as a script. Once the file has your code in it, you need to make it executable, and that is done by changing permissions: chmod, which we will revisit later when we get into permissions, stands for change mode, and a plus adds a permission. In this case chmod +x script.sh adds x, the executable permission, to script.sh: change mode, add executable permission to the script. If you want to remove the executable permission you use minus x, and it removes that permission from the file; that is basically how adding and removing permissions works, fairly simple.
Now say you want to create some variables inside your script. The first variable in this case is name=LinuxGPT: name is the variable name, and the single equal sign is the assignment operator, assigning this value to it. (Two equal signs is a comparison operator, which we will talk about in a moment, but every time you use a single equal sign you are assigning a value.) So the string LinuxGPT is now assigned to the variable name, and you refer to that variable with a dollar sign: if I ever use echo $name it prints LinuxGPT onto the screen, or wherever it needs to go, because that string has been assigned to that variable. Very simple. You can also substitute a command into a variable by assigning the command's output to it, which you do with a dollar sign and the command inside parentheses. If I just type date on my terminal it gives me the current date and time, but if I want to store the value of that command in a variable, I write the variable name, an equal sign, and then the command wrapped in $( ), and now the output of the command is assigned to that variable. You could do something like name=$(uname) and all the data about the machine gets stored inside that variable. So you can assign text, strings, to a variable, or you can assign the output of another command to it; both are assignments, because you are using a single equal sign and just choosing the type of value that goes in. Once you have your variables, the next step is the conditional statement.
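A minimal sketch of the setup just described; the script name, variable names, and values are only examples:

touch script.sh     # create the script file
chmod +x script.sh  # add the executable permission so it can be run

And script.sh itself might contain:

#!/bin/bash
# hypothetical example of variables and command substitution
name="LinuxGPT"    # assign a string with a single equal sign
today=$(date)      # command substitution: store the output of the date command
kernel=$(uname -r) # store the running kernel release
echo "$name ran on $today (kernel $kernel)"

Running ./script.sh then prints that line with the current values filled in.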
Once you have your variables, you can create a conditional statement if you need one. A conditional statement is very basic: if something, do something; else, do something different; and then you have to add fi at the end. I think of it as short for "finish," but it's really just "if" spelled backward, and that's how scripting works in the shell. Looking at this particular conditional statement, inside the brackets we have our first condition: if the variable equals the value. Notice we're now looking at a double equal sign, so it's comparative: before, a single equal sign assigned a value, but now you're comparing, saying "if the value of this variable equals the value we're looking for." Then you put a semicolon, then then, and you print "the condition has been met" to the screen. Notice the indentation: typically at least two or four spaces for reader friendliness, and I recommend you always use four. So if the variable equals the value we're looking for, print that the condition has been met; if it hasn't, that's what else means, and it prints that the condition has not been met; then finish with fi. That's the full basic structure, and if any of these pieces are missing, it won't work. If the semicolon is missing, the entire conditional statement fails; if the fi at the end is missing, it fails, and usually you'll get a notification that there's an error on line such-and-such, something is missing, or the syntax is wrong, so you can debug your code accordingly. It's very important to understand that everything in this block of code has to be there: if a closing quotation mark is missing you'll get an error, and if that equal sign were single instead of double it would be a logic error, because you'd be reassigning the variable when you meant to compare it. If the dollar sign is missing, it won't work either. Scripting is very specific: once all the details are there it works like a charm, but if they're not, you're going to have a lot of issues. So that's the basic structure of a conditional statement, and these are some of the comparison operators: here we're testing whether the variable equals the value, but you could also test whether it's less than, greater than, less than or equal to (-le), greater than or equal to, equal to, or not equal to the value you want. All of those operators deal with numbers, so if you're going to use them you're comparing integers.
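Here is a hedged sketch of that structure using a hypothetical numeric test; the variable name and values are placeholders:

#!/bin/bash
count=7

# -gt is the numeric "greater than" test; note the semicolon before then
if [ "$count" -gt 5 ]; then
    echo "The condition has been met"
else
    echo "The condition has not been met"
fi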
The operators lower down deal with strings. A double equal sign compares the value of a string, and here our value actually is a string, so we use the double equal sign: if this equals that, do something. An exclamation mark followed by an equal sign means it does not equal. There are a lot of equal signs flying around in this explanation, but the gist is: two equal signs means it does equal the value, and exclamation-equals means it does not. And then this is where a lot of the automation comes in. What's on the screen is very basic, but it is the foundation of automating tasks: for something in a list, do something. In this case it says for item in 1 2 3, echo that item, so it prints 1, 2, and 3, one per line. The loop variable could be almost anything: it could be for i in 1 2 3; do echo $i, or for value in 1 2 3. It's just another variable that gets assigned inside this particular loop, and in many cases it's a single letter or any word you like. The list could be a variable storing a massive list; in this case it's just a simple literal list. You put your semicolon to separate the pieces, then do for the action you want performed, and then done when you're finished. The done has to be there for the block of code to be complete. It's similar to what we do in Python, except Python doesn't have done and doesn't use do, but I don't want to confuse you: this is a for loop, and it can iterate over a list. The list could be a completely separate variable holding a bunch of data, or a variable reading from a file whose contents you then walk through. There's a lot you can do with for loops; this is a very simple one, but for loops are the essence of automation, and writing complex for loops is one of the most powerful things you'll learn when we get into advanced scripting. Inside a for loop you can have an if/else statement, or another for loop, and as long as your indentation is correct everything should run smoothly. Keep in mind that the indentation is very important: a for loop has a header and then a body, and the same idea applies to the if/else statement.
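A sketch of that same loop as it would sit in a script, plus a variant that takes its list from a command; both are purely illustrative:

#!/bin/bash
# Iterate over a simple literal list; "item" is just a variable name
for item in 1 2 3; do
    echo "$item"
done

# The list can also come from a command, for example the usernames in /etc/passwd
for user in $(cut -d: -f1 /etc/passwd); do
    echo "Found account: $user"
done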
The if statement has its header, then its body; the else is another header with its own body; and then the fi at the very end. The indentation is very important, and you'll see how it works once we get to actually writing scripts. So that's the for loop, and then there's the while loop. A while loop runs as long as its condition is true, so it has a condition that needs to be met. For example, we have counter equals 1; that's our variable. Now we say: while the variable counter is less than or equal to 5, echo counter, and then, inside the loop (note the indentation), reassign counter to its previous value plus one. On the first iteration it prints 1 to the screen and then adds one to the variable, making it 2. It loops again, sees that 2 is still less than or equal to 5, prints it, and adds one to make 3. It checks again, yes, still less than or equal to 5, prints it, and keeps going until it reaches 5. After printing 5 it adds one, making it 6; then when it checks the condition at the top it says no, 6 is not less than or equal to 5, the condition is now false, and it stops. So a while loop keeps running as long as the condition is true, and in this case the condition is: while this variable is less than or equal to 5, print it to the screen, then add one to it, and repeat, through the second, third, and fourth iterations, until the condition is no longer true. That's what while loops are and how they work.
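The counter example written out as a runnable sketch:

#!/bin/bash
counter=1

# Keep looping while the condition is true (-le means "less than or equal to")
while [ "$counter" -le 5 ]; do
    echo "$counter"
    # Arithmetic expansion adds one on each pass
    counter=$((counter + 1))
done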
All right, now that you know how the system and the file system work, and how to create and manipulate files, it's time to move on to the users, the actual human beings who will be using the system. This is where a lot of a system administrator's roles and responsibilities come in, because beyond dealing with the system itself, which is fairly simple once you understand how to find the guides and the manuals, you now have to deal with people and manage the users and the groups they're in. So first and foremost, creating users: depending on the distribution you're on, the command will be either useradd or adduser, and it requires sudo, basically administrator or root permissions. You run the command with the username at the end, and that's literally how you add a new user. That's the simplistic version; typically you'd run several options along with useradd to handle certain things all together, though you can also set them after the fact. After creating the user you'd set up a password for them (I'll show you specifically what that is, and it also requires sudo), and you'd assign a home directory. With just the -m flag, useradd -m alice creates the default home directory at /home/alice, which is where user home directories live based on what we've already learned about the file system. If you want to declare a specific home directory for that user, you add -m and then -d followed by the path where you want their home directory to be; in this case it would be /data/users/alice, and then the final argument is the username being added. So you're adding a user and declaring where their home directory should be by assigning those two flags plus the path, followed by the name of the user. You can also specify the login shell for that user: again you're adding a new user, then -s says this is their login shell, the typical path to the shell binary, and then the username being added. And then there are supplementary groups; we'll deal with group management in a little bit, and we're going to do all of these things hands-on when we get to the practical portion (I feel like I have to keep saying that, because this is more of an overview and I know I'm glossing over details). So: sudo useradd with a capital -G, where developers is one group and admins is another, and alice gets added to those supplementary groups. These are just some basic options for adding user details.
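A sketch of those variations, using the hypothetical user alice and the example paths from above; exact defaults can vary between distributions:

# Create alice with the default home directory under /home
sudo useradd -m alice

# Create alice with a custom home directory
sudo useradd -m -d /data/users/alice alice

# Specify the login shell
sudo useradd -m -s /bin/bash alice

# Add supplementary groups at creation time
sudo useradd -m -G developers,admins alice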
Now here is a full command that does all of it at once. We run sudo useradd, assign a home directory for the person, assign their shell at a specific location, add them to the supplementary groups developers and admins, add a comment with -c (a basic note for this particular user, "Alex Johnson, Developer" in this example, mostly for the system administrator who will read it later), and then the final piece is the username itself. Breaking it down: -m creates the home directory, -d sets a custom home directory, -s sets the login shell (zsh in this example), -G developers,admins adds them to those supplementary groups, -c adds the comment, and then finally comes the username, alice. It's one long command, but it does a lot when creating that user. Then, to set their password: typically you assign a password right after creating someone and then immediately expire it, so that the next time they log in, which would be their first official login, they get a notification that their password has expired and they need to set a new one. So you create the user, set a password with sudo passwd and the username, answer the prompts (enter the password, confirm the password), and then run the expiry command to expire the password you just assigned, so on their next login they're prompted to set a new password themselves. That's the very basic gist of creating a new user. Now some basic commands for managing users. You modify an existing user's details with usermod: to change a username, for example, you use -l with the new username followed by the old username being changed; a capital -L locks an account and a capital -U unlocks it; and there is a variety of other modification options we'll work with once we get to the practical section. There's also userdel, and the -r flag is the cleanup option: it deletes the user and also removes their home directory and all of its contents. Those are the basic user management commands you'll be implementing in the practical section. Then we have creating and managing groups. Creating a group is as simple as the groupadd command followed by the group name. You can also add a user to a group with -aG, where a stands for append and G stands for the group itself, and you can add them to a single group or multiple groups; this is the route outside of useradd for adding somebody to a group. With usermod you need the lowercase a and uppercase G to append them to whatever supplementary group you're adding them to. And of course groupdel deletes the group you've specified; the group name is what gets deleted in that case.
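A sketch of that create-then-expire workflow and the modification and deletion commands just described; alice is a hypothetical account:

# Set an initial password, then expire it so alice must pick a new one at first login
sudo passwd alice
sudo passwd -e alice

# Rename an account: -l takes the new name first, then the old one
sudo usermod -l alice.johnson alice

# Lock and unlock an account
sudo usermod -L alice.johnson
sudo usermod -U alice.johnson

# Delete a user along with their home directory and its contents
sudo userdel -r alice.johnson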
user there’s going to be a new group that’s created by their name so if it’s Alice as the user there’s going to be a new group named Alice that’s going to be created inside of your group’s directory and that’s going to be the primary group that’s been assigned to that person so when you want to change their primary group you would use the user mod command and then you would do a lowercase G and then the name of the group itself so whatever it could be one of your existing groups it doesn’t have to be a new group that you’ve created but you’re changing the primary group of that person by doing a lowercase G flag and then assigning the group name and then the username themselves and then now you’ve changed their primary group a supplementary group is additional groups that they have access to so you know based on who they are and what permissions they have or what roles they have they could have multiple groups that they’re assigned to so they can get access to the resources that are available to all of those individual groups uh so that’s mainly the big piece that you want um and multi memberships can happen in a lot of cases in a lot of cases they do happen so for example just a a manager for example a manager will have access to the management group as well as the regular employee group for example so that they have access to all of the stuff that’s inside the manager folder as well as everything that’s inside of their own regular folder or along the other groups folder so there could be a manager they could be in marketing and they could have their own regular folder as well you know what I’m saying so they have multiple groups that they’re a part of it’s very useful for granting uh people access to different sets of files and directories owned by various groups you just have to be careful that whenever they leave that department so if they’re no longer in marketing they need to be removed from marketing unless they’ve gone up a level and they do still get access to the Marketing Group but if they’ve left Marketing in general completely and they went into Tech they shouldn’t get access to the stuff that’s inside of the marketing folder and this is mainly because that you want to protect the company from uh potential uh you know if the person Le Lees for example and there is a bunch of intellectual property that’s inside the marketing file or marketing folder and now they have access to all of the stuff inside of marketing and they can steal that IP for example that’s just an example so uh it could also protect the company from any security issues if that person gets hacked and they still had access to marketing for example and now they the hacker knows that they’re a part of that group and they go inside of the marketing folder and they see all of that IP and then they can steal that data because a system administrator didn’t remove them from that group and they should have never had access to that data to begin with so this is very important to keep in mind you can have multiple groups that somebody could be a member of you just need to be careful as you go along and as the company grows and as people’s roles change that you uh modify the groups that they’re members of because as long as they’re a member of that group they have access to everything that’s inside of that group um you can add or remove somebody from a supplementary Group by using the a G so lowercase a capital G and appending them to the specific group or groups that you want them to be added to so very similar to what we 
And speaking of permissions, ownership, and access, we are now in that section. File permissions overview: the permissions model is a fundamental concept in managing access to files and directories. It makes sure that only authorized users can read, write, execute, delete, or modify files, maintaining the system's security and integrity. The permissions model involves three levels of access: the owner of the file or folder (or whatever the asset may be), the group, and others, which is basically everybody else, guests for example. So there's the owner of the file or folder; there's the group it's assigned to, which would be the owner's primary group as we've already discussed; and then everybody else falls under others. The owner is typically the user who created the file or folder. They have the highest level of control over their files and directories and can set read, write, or execute permissions on them even if they are not the root user. So if example.txt is owned by the user alice, she controls who else can access and modify that file. The group permissions determine what members of the associated group can do with that specific file or directory: like the owner, group members can have read, write, or execute permissions, or whatever permissions have been assigned to that group. So if example.txt is associated with the group developers, all the users in developers can be given permission to read, write, or execute the file, depending on the permissions available to that group specifically. Keep that in mind, because you can modify the group's permissions and it may not necessarily be all of read, write, and execute. Then you have others: all users who are neither the owner nor members of that specific group, so everybody outside the developers group, anybody who isn't the owner, somebody who's a guest, and so on. As long as they're not the owner and not inside the primary group associated with that file, they all fall into the others category. The others permissions determine what any other user can do with the file, and they can also be set to read, write, or execute, or to none of the above; they could have no permission to access it at all, depending on what the company's infrastructure is like. If example.txt has read permission for others, any user on the system can read the file regardless of their user or group status. A very common file that is restricted from others is the /etc/shadow file, which holds the password hashes of every user in that environment.
Even if somebody is not the root owner, and not inside the administrators group, say they're in marketing, they fall into the others category and should not be able to read that file. That's very important to understand: there are files that even senior executives of the company should not have access to, and a large number of people should never be able to read /etc/shadow, because it contains the password hashes for every user that exists on that system. Now, the way permissions are designed, they go by read, write, and execute, with letter abbreviations: r for read, w for write, and x for execute. Read means you can view the contents of the file or directory; write means you can modify or delete the file or directory; and execute means you can actually run the program or script, or access that specific directory. So: rwx, keep those in mind. This is what it looks like when you view the permissions of a given file or directory (this is just one particular example, not what every file looks like). The first character indicates the type of file: if it's a dash, it's a regular file; if it's a d, it's a directory. In this case we're looking at a file. The next three characters represent the owner's permissions: here rw followed by a dash, so the owner can read and write, but there's no execute, so it's probably not an executable file, just read and write, and those are the permissions for the user who created and owns this particular file. The next three characters represent the group's permissions: in this example it's r followed by two dashes, so the group can only read the contents of the file; they can't write to it and there's no execute permission either. The last three characters represent the others permissions, which in this case are also read only: no writing or modifying and no executing. So that's the layout: the very first character tells you whether it's a file or a directory, the next three characters are the owner's permissions, the next three are the group's permissions, and the last three are everybody else's. That's the template you see when viewing the permissions of a file.
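For example, a hypothetical ls -l line read against that template:

$ ls -l example.txt
-rw-r--r-- 1 alice developers 1024 Jan  1 12:00 example.txt
# -      regular file (a leading d would mean a directory)
# rw-    owner (alice) can read and write, but not execute
# r--    group (developers) can only read
# r--    others can only read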
Now let's say you want to change the permissions of a given file or folder; we've already touched on this. You use chmod, change mode, and doing +x, as in the earlier example, adds executable permission. Here, though, we've specified u, the user, so we want the user (the owner) to have execute permission on that file, whatever the file name is. So chmod changes the mode, and +x is the symbolic method of changing permissions; we can also use the numeric method, which we'll cover right now. With the symbolic form we're giving the user execute permission on that file, which allows execute for that user. The octal values are a different approach: with 755, the 7 represents the maximum permissions, read, write, and execute for the user, and each 5 means read and execute, for the group and then for everybody else. Here's how the values break down: read is worth 4, write is worth 2, and execute is worth 1. If you have read, write, and execute, the value is 7; if you can only read, the value is 4. So 744 means the owner can read, write, and execute, while the group and everybody else can only read. A 0 means no permissions at all, so 740 means the owner has read, write, and execute, the group has read only, and others have no permissions. That's the breakdown of the numeric values: read equals 4, write equals 2, and execute equals 1. With a command like chmod 755, the three digits represent the owner, the group, and the others categories in that order: the 7 is the owner, the first 5 is the group, and the second 5 is everybody else. If it were 777, the owner, the group, and everybody else would all have read, write, and execute. So keep in mind there's the numeric method of modifying permissions and the symbolic method. Along with changing permissions, we can also talk about changing ownership, with chown and chgrp, and you can probably guess what they mean: chown stands for change ownership, and chgrp stands for change group; these are also sudo commands. With chown you change the ownership of a file to a particular user and group, so you're changing both the owning user and the group for that file name. With chgrp you change just the group of the file to the group name you specify. So in the first example you're changing the ownership of the file to a particular user and their group, and in the second you're changing the file's group to whatever the group name is. That's how you change the ownership of a file, assigning it to a user and group, and how you change the group of a file by assigning it to a different group.
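A sketch putting the symbolic, numeric, and ownership commands side by side; the file names and accounts are hypothetical:

# Symbolic: add execute permission for the user (owner)
chmod u+x script.sh

# Numeric: 7 = rwx for the owner, 5 = r-x for the group, 5 = r-x for others
chmod 755 script.sh

# 740: owner rwx, group read only, others nothing
chmod 740 report.txt

# Change the owner and the group of a file in one command
sudo chown alice:developers report.txt

# Change only the group
sudo chgrp developers report.txt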
fairly simple it’s like fairly intuitive as far as the commands are concerned but you can and again just keep this in mind so once the file name has been assigned to this particular group that means now all of the permissions that are inside of this group are now applied to this file name and anybody who is inside of this group has all of those permissions on this particular file this is what’s very important to understand okay when you change the owner then you’re changing all of the permissions of this owner to this file name so if they have the permission and typically it’s everything they have all the permissions or whatever the main permissions are so you change all of the permissions of that owner to this file name if you’re changing the group of this file name everybody inside of this group has access to that file name and they have the permissions for that group over this file name so just keep this in mind it’s very important and then we have special permissions so suid sgid and the sticky bit so um when the Su ID when the set user ID when that permission is set that file will execute with the Privileges of the owner not the user that’s running it so this is very important to understand if the owner has administrator privileges and the set user ID the Su ID has been set on that file if the user so the user is not the owner right so whoever the user is when they run that file when they if it’s a script for example when they run that thing they’re running with the permissions of whoever the owner was that was created it it’s very simple and it’s it’s very uh powerful to understand okay you are setting the execution privileges of that file to be the execution privileges of the owner and not the user the user could be lower privilege the owner could be an admin and if the user is running it they are running it with the Privileges of the admin which is the owner so if you want to change an add that specific specific permission to the file you would do the change mode chod and then you add on the user level the S permission right here which is the suid and then you give the path to whatever the file or directory is so this is very much for things that are executable that everybody else should be able to uh access meaning a binary so a binary for example the password binary inside of the user bin so this is for the user binary not the the binary of the entire system so just keep that in mind so this is the users’s binary for their password allows regular users to change their password securely that’s what this specific binary allows them to do because we already established that the password binary allows somebody to change somebody’s password so if the user wants to be able to change their own password they should have the permission to be able to do that so this is what this does and this is the case that it would apply to and you just got to be you got to keep in mind that this is a security issue okay so if this is a system binary and this is again something that we’ve done a lot in pentesting we search for files by the permission that is set so if they have suid permission on that given binary or on that given folder we’re going to try to go ahead and access that folder and drop some kind of a exploit in there a payload in there so that we can run that payload Lo by the permissions of the administrator for example and so if we if that directory has admin privileges and we have a shell that we want to execute and we want to switch oursel to the admin privileges we can then run 
everything that’s inside that directory because it has the S permission set and then now we’re running it with the admin privilege which means we can execute as an admin and then give oursel access to that system as an administrator so this is very important to understand okay that may have been a little bit confusing that last piece may have been confusing but what you need to really understand is that suid will execute the Privileges of the owner not the user that’s running it so the user could have no permissions to do anything but the suid has been set on that file or folder and now they can run it as that owner right just keep that in mind SG ID is very similar except it applies to the group so it executes with the permissions of the files group and not necessarily the person that’s running it so for directories files that are created inside of that specific uh directory are going to run with the parent directories group so whoever the group is that owns that particular directory any file that’s dropped inside of that directory will run with the permissions of that specific group so if you want to change and add this specific feature for permission for any kind of a directory this would be it you would do G Plus s and that’s it and this is a CH mod that is done with pseudo by the way this is not something that typically would be done by everybody so a a lower level user should not be able to add the special permissions to these things uh this should only be something that’s allowed to be done by somebody who has administrator or root privileges so that we know that it’s the right person that’s making these changes so and everybody should not be able to make uh the special permission or modify the special permissions on files it should only be reserved for root administrators and domain administrators so on and so forth and then there’s something called the sticky bit so when you set a sticky bit on a directory only the file owner or directory owner can delete or modify the files within it regardless of the group or other right permissions or anything else right so only the owner of that file or directory can delete or modify it nobody else can do it this is called a sticky bit and so you do chod plus T you’re adding the sticky bit so just T you’re adding that to whatever the path of the directory or file is uh this is commonly used in shared directories like the temp folder so the the file owner or the directory owner can delete or modify the files within the temp folder regardless or of the group or other right permissions that may be assigned to that temp folder right so only the owner of the temp folder or the owner of the temp directory can delete or modify the files within that directory doesn’t matter who else is trying to do it uh doesn’t matter what other group or WR permissions exist only the owner can delete or modify the files within that directory so this is something else to keep in mind because this is also a security issue so if there’s a sticky bit that’s been assigned to something it’s also something to keep in mind and it’s typically something that you do uh to make more SEC like make the environment a little bit more secure so this is opposite of the the special permissions where it technically makes it less secure if something has the special permission attached to it any user can run it with the owner’s uh permissions or with the owner’s privileges whereas the scky bit only allows the owner of that file or directory to be able to modify meaning change or write to 
Which brings us, finally, to the user authentication and sudo permissions portion. For user authentication, we've already touched on this particular file: the /etc/passwd file stores all of the usernames, the user accounts themselves. There are no password hashes or plaintext passwords inside this file; the name is kind of misleading, and I think it was designed to be. Each line in this file corresponds to a single user account and contains several fields separated by colons, and each of those fields stands for something. So all of the usernames are stored in /etc/passwd. Here is an example of a specific entry: you have alice, then a colon and an x, which is where the password hash would traditionally be; then a colon and an ID, another ID, then the name field (which can hold several comma-separated values), another colon, the home folder, another colon, and the path to their shell. The username alice is the user's login name. The password placeholder is the x; once we go into the shadow file you'll see the very large hash value that lives there instead. The user ID is a unique numerical identifier for the user; the group ID is the primary group ID associated with them, which in this case is the personal group that was created when the user was created. The GECOS field holds optional user information like their full name, office number, and so on; this is where the comment goes, everything between those two colons. Then we have the home directory, the path to the user's home directory, and finally the shell, the default shell assigned to this user. That's what a typical entry inside /etc/passwd looks like. Then there's the /etc/shadow file. The shadow file has everything else we just talked about, except it actually holds the password hashes: the encrypted password information and other details related to that person's password management, which makes it more sensitive than /etc/passwd. A lot of users can read /etc/passwd because it doesn't contain the password hashes, but /etc/shadow cannot be accessed by anybody other than the root user (or someone on the sudoers list); here it's described as readable only by the root user, because it contains the encrypted password hashes of all the users. What we see in the example is the same pattern: alice, then a colon, then all of this data, with an ellipsis because this specific hash value was massive, it ran across two lines. But you can see it sits between two colons: everything there, from the leading dollar-sign sections through the final characters, is the hash of alice's password.
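Annotated sketches of what one line in each file might look like; the IDs and the hash are made up, and a real hash is far longer:

# /etc/passwd  ->  username:x:UID:GID:GECOS:home:shell
alice:x:1001:1001:Alice Johnson,Office 12:/home/alice:/bin/bash

# /etc/shadow  ->  username:hash:lastchange:min:max:warn:inactive:expire:reserved
alice:$6$saltsalt$abc...xyz:18661:0:99999:7:::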
After the hash come 18661, then 0, then 99999, then 7, and then the last three colons at the end. So we have the username, alice; then the password hash, which we already talked about: this field contains the hash, including details about the hashing algorithm and any salt that was used. The very beginning of it, between the dollar signs, identifies the hashing algorithm; the $6$ prefix here indicates SHA-512. Then there's the last password change: that's what the 18661 is, the date of the last password change represented as the number of days since January 1st, 1970. So you'd actually have to count forward 18661 days from January 1st, 1970 to work out the date the password was changed. Then the minimum password age, here 0, meaning the number of days required between password changes: in this case there's no minimum, so the user isn't required to wait any number of days before changing their password; they could change it immediately, every day if they wanted to. The 99999 means they have 99,999 days until they're required to change their password again. This would most likely need to be set to something like 90, so the user changes their password every three months for security purposes; it's a very important field. The 0 is understandable, they can change their password any day they want, but the 99999 is a security issue: almost 100,000 days before the system requires them to change their password again. You want to change that to 90 days, or 120 days, or something like that, so there's a definitive limit on how long they can keep the same password before being required to change it. Then there's the password warning period, represented by the 7: the number of days before the password expires that the user is warned to change it. With the current settings that warning wouldn't come until day 99992, which is still far too long; with a 90-day maximum, the user would be notified on day 83 that in 7 days they need to change their password.
The next field is the password inactivity period: the number of days after the password expires during which the account is still usable, and in this case it's empty, so there is none; if the password has expired, they have to change it before they can use their account again. The field after that (to correct what I said a moment ago, the first of these is the inactivity period) is the date when the account will be disabled, again represented as the number of days since January 1970, which hasn't been set here either: the account expiration date is empty because as long as they're in the system, as long as they're an employee, or it's your computer and you haven't deleted the account, that won't change. And the final field is reserved for future use. So that is the full explanation of a line inside the /etc/shadow file. As we've already discussed, the /etc/passwd file can be world-readable, meaning essentially everybody can read it, as long as the password placeholder is an x, which makes it less sensitive. Personally, I still think it should not be world-readable, because lower-privileged users shouldn't get access to all the usernames on the system: we've gone through a variety of pentesting exercises where we found different users, ran brute-force attacks to find their passwords, did lateral movements, and eventually moved vertically until we got higher privileges. So in my opinion it shouldn't be world-readable, but it can be, and as long as the hash has been replaced by that x it's technically fine. The /etc/shadow file, however, should only be readable by the root user, because it contains the encrypted password hashes. That's a very strict rule: only the root user, or a handful of root-level users if you're working in a company and want redundancy in case one person is out sick, the highest level of IT management, should be able to access the data inside this file. And the thing is, you don't need to give everybody root permission anyway, because you can reset somebody's password very easily without ever having access to their password hash: an IT administrator on the sudoers list has permission to reset a password for someone who's been locked out or expired without reading /etc/shadow at all. Those are the major security considerations to keep in mind when looking at these two specific files in particular.
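If you want to enforce the kind of aging policy described above, the chage command is one way to do it; a sketch with hypothetical values:

# Require a new password every 90 days, warn 7 days beforehand,
# and disable the account 14 days after the password expires
sudo chage -M 90 -W 7 -I 14 alice

# Review the aging settings for an account
sudo chage -l alice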
Now, managing sudo permissions. Granting users the ability to execute commands with root privileges via sudo is a critical aspect of system administration, because you want redundancy among your administrators; you don't want everything to rely on one person. It involves adding people to the sudo group, configuring the sudoers file, and applying command restrictions for fine-tuned access control. When somebody is in the sudoers file, they can run commands with sudo and do basically anything a root user could do: they may not be the root user, but being in the sudo group gives them those higher-level administrator privileges, and the sudoers file is really the highest level of privilege somebody can get, because it lets them modify and manipulate a great many things inside the system. Adding a user to the sudo group is as simple as sudo usermod, append to group, the group sudo, and then the username you want added. It's very simple to run, but it's a very powerful command: whoever that person is, they now have the keys to the kingdom, so to speak. So for basic sudo access you just use usermod to append the user to the sudo group: sudo usermod -aG sudo and then the username, and you already know what each of those pieces means. As an example, we're just adding alice to the sudo group, which is a bit redundant by now since you've seen this pattern in multiple places: replace sudo with any other group name and you're adding alice to that group instead. Now for the fine-tuning, the control over what people can actually do once they're in the sudo group. The sudo configuration file is located at /etc/sudoers and controls sudo permissions for users and groups, basically defining which commands they can run with sudo if they're in the sudo group. To edit it you use visudo, as in sudo visudo: this is the safer way to edit the sudoers file because it checks for syntax errors before you save. You don't want to do this with nano or vim; visudo opens the sudoers file in your default editor while checking the syntax for you. Once you're inside the sudoers file you can designate or define the specific commands people can run with sudo. For example, giving somebody full sudo access means an entry with their username, say alice, followed by ALL, ALL, ALL, ALL. The breakdown: first the name of the user; the first ALL allows sudo access from any terminal; the ALL:ALL inside the parentheses allows her to run commands as any user and as any group; and the last ALL allows her to run any command. So from any terminal, as any user in any group, she can run all commands: she is being granted full sudo access.
If you want to grant sudo access without a password, which is another one of those things where you're handing somebody literally everything in the kingdom, you add the NOPASSWD tag to that entry, and now, on all terminals, as any user, with no password required, she can run every command; she has access to the entire kingdom. If you want to restrict access to specific commands, which is most likely what you'll actually end up doing, the entry looks like this: the username, ALL for the terminals, the user specification, NOPASSWD, and then the full paths to the systemctl binary and the reboot binary, separated by a comma and a space. You list the binary path of each command the user may run, so whoever the username is can run the systemctl and reboot commands with sudo without a password. That's how you specify which commands they can run: by giving the actual binary paths, for example /usr/bin/systemctl. And to make it concrete for alice, our example gives her exactly that permission: alice can run, on all terminals, as any user, with no password required, the /usr/bin/systemctl binary, and that's pretty much it, that's what alice is allowed to run on this machine. A couple of practical examples: adding the user bob to the sudo group with sudo usermod -aG sudo bob, and sudo visudo to edit the sudoers file, opening it through visudo so all of your syntax gets checked automatically by the system. Then there's the ongoing management of sudo permissions, and our summary: granting and configuring sudo access involves adding people to the sudo group, editing the sudoers file with visudo, and defining command restrictions for fine-tuned access control. You don't want ALL=(ALL:ALL) ALL for everyone; you don't want to give them everything. Get them into the sudo group, open the sudoers file with visudo so your syntax stays correct, and then give the specific paths to the binaries they should have access to, meaning the commands they can run. That ensures the user has the permissions necessary to perform administrative tasks while maintaining security, which is very important here. Security is a big deal, and we'll cover it as its own chapter, but we're already bringing these concepts together: we've talked about file permissions, about groups users can be added to and whose permissions they inherit, and now about the ultimate group, the sudo group, and inheriting its permissions, while limiting the individual commands those people can run as sudo. That's assigned by going into /etc/sudoers and opening it with visudo so the syntax gets checked as we make these changes.
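A sketch of those sudoers entries and the surrounding workflow; the usernames are the examples from above, the binary paths can vary by distribution, and the file should always be edited through visudo:

# Add bob to the sudo group, then open the sudoers file safely
sudo usermod -aG sudo bob
sudo visudo

# Inside /etc/sudoers:
# Full sudo access: any terminal, any user and group, any command
alice   ALL=(ALL:ALL) ALL

# Full access with no password prompt (generally a bad idea)
alice   ALL=(ALL) NOPASSWD: ALL

# Restricted: only systemctl and reboot, no password required
alice   ALL=(ALL) NOPASSWD: /usr/bin/systemctl, /usr/sbin/reboot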
From there we define exactly which commands they can run, and that's how it's done. As a last example, here's the full series of commands: sudo usermod to add the person into the sudo group, then open visudo and use the various options, for example NOPASSWD or granting access from all terminals, inside the sudoers file. An entry of the form username ALL = (ALL) NOPASSWD followed by the systemctl path means this particular person can run systemctl without any password requirement, and that's the only thing they've been given sudo access to; even with a password, it's the only command they can run. They don't have ALL assigned for the commands they can run, because that would be careless on our part as system administrators. So that's the final cheat-sheet, more or less, for permissions and sudo, and we can now jump into the next piece, which is file management and file systems. All right, let's get deep into file management and file systems, starting with the Filesystem Hierarchy Standard and the directory structure. As a review, these are the key directories in our FHS, the Filesystem Hierarchy Standard: the FHS defines the directory structure and contents in Linux, helping maintain consistency across distributions, so you'll see this in pretty much every distribution of Linux. The main directory is the root directory, then you have the binaries, then /etc, /home, /var, /tmp; we've talked about all of these already, so none of this is news to you unless you skipped that section of the training near the beginning, where we went through the file system hierarchy. Those are the key directories, along with a variety of other directories that fall under them rather than being primary directories themselves. If you go back to the partitioning section, it was in chapter 3, and remind yourself of what we covered there, we have the primary partitions and then the extended and logical partitions, and that ties directly into the concept of mounting and unmounting file systems, or partitions within file systems. So here's what that looks like. When you mount something, you're attaching the file system to a directory so that it's accessible within the larger directory tree, and you do that with the mount command. You take any given file system, it could be XFS, FAT32, or a variety of others, and you attach it to a given directory; we'll talk about a few different ways to do that. By doing this you make that file system accessible: you attach it to your directory hierarchy so it can be reached through the OS by whatever user needs it.
To do that you use the mount command, and it's a sudo command, so you prefix it with sudo: sudo mount, then the device name, then the mount point it is being mounted onto. You find the device name under /dev, and we covered this during the installation portion of chapter 3 when I showed you how to find the devices currently attached to your system. The device path is not complicated; it's /dev followed by whatever name the system has given the device. Then you give it the mount point, which is the path you want it attached to. An option like -t specifies the type of the file system, so -t ext4 means you are dealing with, or want it treated as, an ext4 file system. In the example on the slide the device is /dev/sdb1: sd is the disk naming scheme, the letter is the physical disk (sda is the first disk, sdb the second, sdc the third, and so on), and the number is the partition on that disk, so sdb1 is the first partition of the second physical disk, and it is being mounted onto a directory under the root. There are various options, and I have to keep reminding you that we will go through all of them in the practical portion, but this is essentially it: you use the mount command to attach any given file system, whether it's a primary or an extended partition, onto a mount point. A manual mount looks roughly like the example below.
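A minimal sketch of a manual mount, assuming a second disk's first partition and a generic mount point (both hypothetical):

    # See what devices and partitions the system currently knows about
    lsblk

    # Mount the first partition of the second disk as ext4 onto /mnt
    sudo mount -t ext4 /dev/sdb1 /mnt

    # When you are done with it, unmount it again (covered in more detail below)
    sudo umount /mnt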
That is a manual mount, and sometimes that's exactly what you want, but for the file systems and partitions you use regularly you can declare automatic mounting instead. That is done in the /etc/fstab configuration file, which defines all the file systems that should be automatically mounted when the system boots, so you don't have to mount everything by hand each time. All of those configurations, for disk partitions or any other file systems you want mounted automatically, live in /etc/fstab, and each line in the file represents one file system along with the mount options attached to it. A typical entry has six fields: the file system, the mount point, the type of the file system, any options you want to attach, the dump field, and the pass field.

The first field identifies the file system itself. It could be a device path like /dev/sda1, which refers to the first partition on the first SATA drive: sd for the drive, a for the first drive, 1 for the first partition. A more robust way to specify it is the universally unique identifier, so you write UUID= followed by the UUID instead of the partition path; that obviously requires knowing the UUID, which you can find with the same commands we ran earlier to list the partitions attached to the system. You can also use LABEL= with whatever label the device has been given. A label is different from the UUID and from the partition path; it's the friendly name you see when you plug in, say, an external SanDisk drive and it shows up in your list of connections. I always rename mine, no spaces, an acronym I understand plus the size, something like "primary 2TB," so I know exactly which one I'm dealing with as I go through my external drives. Finally, the first field can be a server share path for a network file system, for example SMB/CIFS: the data sits on a file server somewhere, you connect to that server remotely, and the path to that share is what goes in the file system field. All of those variations fill in this first column, and the block below shows what each form looks like.
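Here is a rough sketch of the four forms the first fstab field can take; every device path, UUID, label, and server name here is made up purely for illustration:

    # 1. Plain device path: first partition on the first SATA drive
    /dev/sda1

    # 2. By UUID (more robust, survives device renaming)
    UUID=0a1b2c3d-1111-2222-3333-444455556666

    # 3. By label
    LABEL=PRIMARY2TB

    # 4. A network share (NFS-style server:path)
    fileserver.example.com:/exports/data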
Since we now have where the file system is coming from, the next field is the mount point, the directory where it is going to be mounted. That could be the root directory itself, it could be /home, or it could be something like /mnt/data, which is a typical mount point for additional data drives. All of the users' home directories get mounted under /home, and you set that up with an entry in the /etc/fstab configuration file. You do it once in fstab, and then as soon as the system boots everything is mounted and accessible; you don't want to do it manually every time, especially if you have dozens, hundreds, or thousands of employees. In environments that large, where you might onboard a dozen people a week, this becomes something you do with a script, which is faster and avoids manual errors. So: the root file system, the user home directories, and mount points like /mnt/data for any additional data drives.

Next you give the type of file system you are dealing with: ext4, xfs, nfs, vfat (which covers the FAT variants such as FAT32), and so on. We have already talked about these file system types, but you need to know what type you are mounting so you can put it in this field and the system knows how to interact with it. The most commonly used is ext4, the standard Linux file system for personal machines and small environments. XFS is the high-performance file system usually used on file servers and web servers; if you are accessing that storage remotely, from machines in another city for example, then the first field holds the network path to the server rather than a local device, and the type becomes nfs, the network file system. If the storage is internal and you are hardwired to it, xfs; if you are reaching it over the network, nfs. And vfat is the FAT family, often used for USB drives and external hard drives attached over USB or even FireWire, anything externally connected to the machine.

Then come the options, which control how the file system is mounted, and you can attach more than one. The default set can be referred to simply as defaults, which expands to rw, suid, dev, exec, auto, nouser, and async, all applied at once; I'm not going to break every one of those down right now, and you're welcome to look them up. Beyond that there's noatime, which prevents the file system from updating access times, ro, which mounts the file system read-only, and rw, which mounts it read-write, which is what's shown in the example.
There are a variety of other options too, so let me show you what they all are. The defaults option, as already mentioned, bundles that whole set, and an example entry would be a partition mounted on the root as an ext4 file system with defaults, which applies all of those individual options in one word. rw by itself mounts the file system read-write, so anyone with access can read and write. ro is read-only. noexec prevents executing any binaries on the mounted file system; its counterpart exec, which is part of defaults, allows it. nosuid disables the set-user-ID and set-group-ID bits we covered in the previous sections, so nothing on that file system runs with the permissions of its owner. nodev means character and block devices on the file system are not interpreted; practically, users cannot create device nodes within that directory structure, which improves security by restricting access to device-management functions on the mounted partition. If you recall what /dev does, creating file entries for things like a printer driver or a USB drive, then nodev simply says that is not allowed on this particular mounted file system. noatime stops the system from updating the access time on files when they are read, which can improve performance; the access time is the little piece of metadata that records when a file was last opened, and this option skips updating it. nodiratime does the same thing for directories. relatime updates the access time only if the existing access time is older than the current modify time (or, on modern kernels, more than a day old), which keeps most of the performance benefit while still leaving you a usable access time; in other words, the access time only gets rewritten when it is actually stale relative to the last modification, rather than on every single read. As shown in the sketch below, the same option strings can also be handed to a manual mount.
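As an aside, the fstab options discussed here are the same strings the mount command accepts with -o; a small sketch with hypothetical devices and paths:

    # Mount a partition read-only, with no binaries executable and no device nodes interpreted
    sudo mount -o ro,noexec,nodev /dev/sdb1 /mnt/usb

    # Change options on something that is already mounted, without unmounting it first
    sudo mount -o remount,ro /mnt/usb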
Then there is user, which allows a normal user, somebody who is not an administrator or on the sudoers list, to mount the file system, and nouser, under which only the superuser, meaning root, an administrator, or somebody with sudo, can mount it. auto means the file system is mounted automatically at boot, and it is one of the options included in the defaults list; nouser is in that list too, along with dev (the opposite of nodev, so device nodes are interpreted), suid (so things can run with the permissions of the owner), and async. noauto prevents automatic mounting. sync means all input and output operations are done synchronously, and async means they are done asynchronously, so let's pin down what that actually means. It's easy to understand: asynchronous I/O means that when the computer performs an operation, for example reading from a file or writing to the network, it does not have to wait for that operation to finish before continuing with other tasks. Synchronous means it has to wait for the previous operation to complete before moving on. With async, the program can issue the I/O request and keep executing other code while the operation happens in the background, getting a notification when it completes, which is very helpful and is probably why async is one of the defaults. Finally, acl enables access control lists for the file system, so whatever ACL configuration has been set up on your system applies to this partition as well.

An example entry would be something like the first partition of the first SATA disk mounted on /mnt/storage as an ext4 file system with defaults,noatime, followed by 0 and 2. Those last two numbers are the dump and pass fields. The dump field is for the old dump backup utility: 0 means this file system should not be backed up by dump, 1 means it should; those are the only two values you will use, and most systems simply leave it at 0. The pass field sets the order in which file systems are checked at boot, and in this example the file system is checked second in line.
So to spell those out: dump is a backup-related flag that indicates whether the file system should be included when the dump utility backs things up, 1 for yes, 0 for no, and that is all it does. The pass field is the order in which file systems are checked at boot by the fsck utility: 0 means don't check it at all, 1 means check it first, which should be reserved for the root file system, and 2 means check it after the root has been checked. So you have 0 for skip, 1 for the root file system only, and 2 for essentially every other mounted file system, because they all come after the root.

Okay, so here is an example of an /etc/fstab file, with the columns we've covered: the file system itself, the mount point, the type, the options, the dump flag, and the pass order. The first entry uses a UUID and is mounted onto the root; this is a standard entry, because you have to mount the root every time you boot. It is an ext4 file system, mounted with the default options, and its pass value is 1, meaning it gets checked before anything else. The second entry, also referenced by UUID, is everything under /home, so all of our users' home directories: ext4 again, all the defaults, no dump, and checked second in line after the root. Then there is the swap space from our partitioning section: its type is swap, its option is sw because it represents swap, there is no dump because it gets wiped every time the system reboots, and it isn't checked either, because at boot the swap space is empty, so there is nothing for fsck to look at. Finally there is /dev/sdb1, a secondary disk mounted onto /mnt/data, also ext4, with just noatime rather than defaults, dump set to 0 because we don't want to back it up, and checked after the root like everything else. Notice that apart from the root, everything that gets checked has a pass value of 2, and the only 0 in that column belongs to the swap space, since checking it would be pointless. That is the explanation of the slide, so if you want to pause and take a screenshot, feel free to do that now; the block below is roughly the same thing written out.
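A reconstruction of that example fstab; the UUIDs and device names are placeholders, not the real values from the slide:

    # <file system>                             <mount point>  <type>  <options>   <dump>  <pass>
    UUID=0a1b2c3d-1111-2222-3333-444455556666   /              ext4    defaults    0       1
    UUID=9f8e7d6c-aaaa-bbbb-cccc-ddddeeeeffff   /home          ext4    defaults    0       2
    /dev/sda3                                   none           swap    sw          0       0
    /dev/sdb1                                   /mnt/data      ext4    noatime     0       2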
Unmounting a file system is something else you will need to do, and it is typically done manually rather than through the fstab configuration file. You run sudo umount followed by either the mount point or the device name; since the file system is attached at a mount point, you can unmount by that path, or you can unmount by the device that's attached there. That is the command, and it is a crucial step to prevent data loss or corruption, especially for removable storage devices like USB drives. In a graphical user interface you would typically right-click and unmount, or click the eject button next to the drive, but a lot of Linux servers are command-line only, with no GUI, so in that environment you use umount to detach the file system and make sure everything that needs to be saved actually gets written out.
Usually, if something on that drive is still open, say a file on the USB drive that hasn't been saved, you will get a notification telling you that something on the drive is in use and you need to save and close it before unmounting. That's the notice you'd see on Windows or macOS or anything with a GUI when you try to eject or disconnect, assuming you aren't just yanking it out of the USB port. I sometimes do that, but only when I know for a fact that everything is closed and saved and nothing will happen to the files I was working with; if something hasn't been saved or is still in use, you'll get that warning, so save it, close whatever software is using it, and then you can unmount without problems.

To spell out the reasons for unmounting properly: first, buffered writes. Operating systems use buffering to improve performance; instead of writing data to the disk immediately, they temporarily hold it in RAM, which is volatile memory that gets wiped when the system reboots. When you unmount, all of that buffered data actually gets written to the disk, out of RAM and onto the drive, so nothing is left hanging in limbo and you don't lose any of it. Second, flushing the cache: the cache is another temporary holding area, and unmounting ensures any pending write operations are completed and the data is accurately transferred onto the storage device itself. Third, file system integrity: unmounting cleanly closes the file system and leaves its data and state consistent and uncorrupted. It's a good habit; with most modern USB drives you'll usually get away with pulling them, but do it repeatedly and the integrity of the file system can suffer, it may not perform as well, and it may not have saved what it needed to save. Corruption is a strong word here, but repeatedly skipping a proper unmount is how you end up with that kind of mess. And fourth, outright data corruption: USB drives are typically fairly forgiving, though I'm still going to tell you to unmount and eject them properly,
but a network file system is another story. If you disconnect from a network share without properly unmounting it, whatever data was in transit over that connection can simply be lost; it isn't on the server you're pulling from and it isn't on your machine yet, it's somewhere in the middle, and dropping the connection without unmounting means that in-flight data is gone. That can leave files unreadable, and while it most likely won't destroy the file system itself, you will lose data in transit, and you just don't want that. Then there is preventing access conflicts: files can be locked, and a process might still be running against that file system. Unmounting makes sure all the file handles are closed and no processes are still using it. This is what I was referring to earlier about software still using the drive; when I'm editing video, for example, all of my footage lives on an external drive because I don't want it taking space on my computer, and if Premiere Pro still has any of those raw files or project files open and I try to unmount, my computer tells me the disk can't be ejected because a program is still using it. I could still physically pull it out, but that might lose my last auto-save or affect some of my video files, so instead I save, close Premiere, and then eject cleanly. Finally, shared resources: with network-connected file systems, multiple users or systems may be accessing the same resources, and unmounting properly manages those resources and prevents the issues that come from unexpected disconnections, so everyone using the share is accounted for and nobody loses work that matters to the organization. In my opinion network file systems are much more volatile and sensitive than USB drives; USB drives are typically resilient, while network file systems are susceptible to data loss precisely because everything travels over a network connection, so with a network file system you especially want to make sure you unmount it properly. If you're not sure whether anything is still using a mount, the sketch below shows one way to find out.
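This isn't covered in the lecture itself, but a common way to see which processes still have files open on a mount before you unmount it is the fuser and lsof tools (the mount point here is hypothetical):

    # List the processes that have files open on the file system mounted at /mnt/usb
    fuser -vm /mnt/usb

    # A similar view with lsof, if it is installed
    lsof /mnt/usb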
Some sample commands: sudo umount /dev/sdb1 unmounts by the device name, and sudo umount followed by the mount point unmounts by the path it was attached to; the same applies whether it's a USB drive or a network file system, where you unmount the path the network share was mounted on. It's very simple. In summary for this section: you want to flush the buffers, maintain integrity, prevent conflicts, and close all file handles and processes. Unmounting correctly safeguards your data and maintains the reliability of the file system, and you always want to unmount before physically disconnecting a USB drive or a network share so the integrity of the data is preserved and you don't run into conflicts.

Now that we understand mounting and unmounting and why they matter, let's talk about actually managing disks with the various partitioning tools. fdisk, the "fixed disk" tool, is a command-line utility used to create, modify, or delete partitions. You run it with sudo, so sudo fdisk followed by the path of the physical disk you want to open for partitioning, and this works for a USB drive as well; that command opens the disk for partitioning, and you can swap in any device you want to work on. A common command inside fdisk is p, which prints the current partition table and shows all the existing partitions on the disk; we ran something very similar earlier when we were installing Linux to see what our partitions were. n creates a new partition: you'll be prompted to choose a primary or extended partition, the partition number, and the starting and ending sectors or size. d deletes an existing partition, and you'll be prompted for the number of the partition to delete. And then there is writing your changes: once you've made changes, you have to write them before you exit, otherwise nothing is saved. You could do all the work of creating and deleting partitions, but unless you actually write it, it's gone when you exit, so that's the big thing to keep in mind: whatever edits you make, deleting, modifying, adding, you need to use w
so that everything is actually written to the disk and saved. You can also set a partition as bootable using the a command inside fdisk, which marks that partition as active for boot purposes, meaning it is included among the bootable partitions when the computer starts up. So here is our example workflow with fdisk: open the disk with sudo fdisk and the path you want to partition; press p and Enter to print the current partitions for reference, since you are now inside fdisk's interactive mode; press n and Enter to create a new partition, then p for primary or e for extended; specify the partition number, the starting sector, and so on; and finally press w and Enter to write the changes, after which you can exit fdisk safely knowing the work you've done is saved. A hypothetical session is sketched below.
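A sketch of what that interactive session looks like; the device is hypothetical, and the single-letter entries are typed at fdisk's own prompt:

    sudo fdisk /dev/sdb       # open the second disk for partitioning

    # Inside fdisk:
    #   p          print the current partition table
    #   n          create a new partition
    #   p          choose primary (or e for extended)
    #   1          partition number, then accept or enter start/end sectors
    #   d          delete a partition (you are prompted for its number)
    #   a          toggle the bootable flag on a partition
    #   w          write the changes and exit -- nothing is saved until you do this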
Next up is lsblk. You should already be familiar with ls from our introduction to the command line; blk stands for block devices, so lsblk lists the block devices. It displays information about all of the available block devices, which are basically your disks and partitions, in a tree-like format, and it gives you a clear overview of the storage configuration on the system: device names, types, sizes, mount points, and every other piece of data relevant to what your disk and partition layout looks like. The basic usage is literally just typing lsblk and pressing Enter, and you get a tree of everything on your computer. In the sample output you have the columns NAME, MAJ:MIN, RM, SIZE, RO, TYPE, and MOUNTPOINT; in that example the first physical disk has three partitions, the second disk has one, each has its mount point listed, one of them is a swap space, and the sizes are all shown as well.

To break down what those columns mean: NAME is the device name; MAJ:MIN is the major and minor device numbers; RM indicates whether the device is removable, 0 for non-removable and 1 for removable; SIZE is the size of the device; RO indicates read-only, 0 for read-write and 1 for read-only; TYPE is disk, part for a partition, rom for read-only memory, or swap; MOUNTPOINT is the path it is mounted on; and in the fuller view you also get FSTYPE for the file system type, UUID for the universally unique identifier, and LABEL for whatever label has been attached, like "backup" or "data." In our sample output you can see the MAJ:MIN values, the removable flag (the internal disks are 0 and the external one is 1), the sizes, the read-only column showing everything is read-write, the disk and partition types, and the swap mount point. To get the extra columns, the file system type, label, and UUID, you run lsblk -f, which gives you the full file-system view of all the block devices; it drops a few of the default columns, like RO, but in their place you get UUID, LABEL, and FSTYPE, with MOUNTPOINT still there. Here's a quick screenshot moment if you want to pause and write this down: one, two, three, and we're moving on.

If you're looking for specific columns, you can use the -o option to list exactly what you want printed; the column names have to be in capital letters and you need to know their actual names, so for example you could print just the name, size, file system type, and mount point, and customize the output however you like. The -d option displays information about the whole disks only, excluding their partitions, so instead of a disk followed by its partitions you only see the disks themselves. A quick summary: lsblk gives you a detailed tree view of your devices, the -f option adds the file system information, and it's customizable with -o so you can output exactly the columns you want; the sketch below pulls those variations together. And that is lsblk, the list of block devices.
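Pulling those lsblk variations together in one place:

    # Tree of all block devices (disks and their partitions)
    lsblk

    # Include file system type, label, and UUID
    lsblk -f

    # Print only the columns you ask for (names must be upper case)
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT

    # Whole disks only, without their partitions
    lsblk -d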
mkfs, very much in the spirit of mkdir, is "make file system": it is used to format a partition with a specified file system type. Similar to dotted names in Python, you use a dot to declare which type of file system you want to create, so sudo mkfs.ext4 followed by the partition, for example the first partition on the sda disk, creates an ext4 file system on it. The XFS file system is the high-performance file system known for its robustness, and you create it the same way with sudo mkfs.xfs. Same idea with FAT32, except in this case the type is vfat, the vfat file system (I love saying that), and you add -F 32 to say you want the 32-bit FAT variant. These are the older formats typically used for USB drives; they're widely supported, meaning they work on Windows and macOS as well, but they're limited in the amount of space they can handle, 2 terabytes as we discussed earlier in the training, and they're a little slower and older. So for FAT32, instead of writing fat32 after the dot you write vfat and add -F 32, and the rest stays the same. NTFS is the New Technology File System that is native to Windows, with advanced features like journaling and support for large files; formatting a partition with it can be useful for Windows compatibility, and that's mkfs.ntfs followed by the location.

Here is the example workflow: first identify the partition, using lsblk or fdisk -l to list your block devices, so you type lsblk, press Enter, and pick out the partition you want. Then choose the appropriate mkfs command based on the desired file system type; if you want ext4, that's mkfs.ext4 on that partition. Keep the sequence in mind: you've identified the partition with lsblk, you create the file system on that partition, and once that's done you verify the file system type with lsblk -f. In summary, mkfs is used to format partitions with different file systems, you can use types like ext4, xfs, FAT32, NTFS, and so on, and you choose based on what you need to be compatible with. To compare this with something you may already know, macOS has a Disk Utility tool that does the same job in a GUI: you connect a USB or external drive, open Disk Utility, select the drive that was just connected, choose Format, pick whichever formatting option is available for that drive, and finalize it. That's exactly what we're doing here: you look at whatever the connected drive or disk is, and once you know where it is, what it's mounted as, and what its name is, you use sudo mkfs to reformat that drive into one of these file system types; the commands below sketch the variations.
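A sketch of those mkfs variations on a hypothetical partition; remember that each of these destroys whatever is currently on it:

    # Identify the partition first
    lsblk

    # Format it with the file system you need (all of these wipe the partition)
    sudo mkfs.ext4 /dev/sdb1
    sudo mkfs.xfs  /dev/sdb1
    sudo mkfs.vfat -F 32 /dev/sdb1     # FAT32, for drives shared with Windows/macOS
    sudo mkfs.ntfs /dev/sdb1           # NTFS, for Windows compatibility

    # Verify the result
    lsblk -f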
And to be completely, brutally honest and fully disclose this: mkfs wipes the file system, and in most environments and most cases it wipes everything on that drive. When you format a USB drive this way it cleans it out and deletes everything on it, because it has to rebuild the file system from scratch, so if there is anything on that disk you care about, back it up first. Unless it's a brand-new, empty USB drive, external drive, or freshly installed disk, back up whatever needs backing up before you run this command, because it is going to wipe the current formatting and you will most likely lose whatever is on there.

fsck stands for file system check, or more precisely file system consistency check, and it is what actually gets run from the /etc/fstab file through the pass field: when we declared what runs on boot for each partition and disk, the very last option was whether or not to "pass," which determines whether fsck checks that file system. It detects and repairs file system issues, and you should absolutely make use of it. Because it's wired into the fstab configuration, it can run on boot against whatever file system has been mounted as the machine starts up, but there are other times you'll reach for it: if the computer wasn't shut down properly, if it crashed, or if you lost power, you run fsck after the reboot just to make sure the file system is still in good shape and to identify any issues, hopefully none. If you manually rebooted or shut down without properly unmounting a file system, that's another reason to check it. The same goes for errors: if there is a boot failure and a specific file system doesn't come up, or a partition won't mount, you run fsck to diagnose what happened and resolve the underlying issue. And understand the nuance here: a device can be physically connected to the computer without being mounted; connected and mounted are not the same thing,
so if it isn't mounted, you need to find out why it isn't mounting. Then there is periodic maintenance, purely preventative checks: you run routine maintenance so that if there is a minor health issue you catch it before it becomes a major one, before a crash, before you lose data. Many systems are configured to run fsck automatically at boot, as we discussed, after a certain number of reboots or mount operations, and you can also write scripts to run it weekly or monthly just to make sure your mount points and file systems are healthy. It's better to check regularly than to wait for a crash or an error to surface, so that anything brewing gets caught and fixed in advance. Running it at boot is as simple as making sure the pass option is set in your fstab configuration file. Manual invocation is just sudo fsck against whatever partition or disk you want to check; this part really isn't complicated. It's also a good strategy to unmount a partition before checking it, to avoid any potential data corruption: unmount first, then run fsck on it. There is an interactive mode that prompts you before fixing anything, and you can use flags to automate the process; -y, for example, automatically answers yes to every prompt, so instead of confirming each repair it runs the whole automatic repair for you.

As a summary: an improper shutdown, a file system error, or routine periodic maintenance are all reasons to run fsck, and it keeps the file system consistent, helps you prevent or catch data loss early, and keeps the system running smoothly, which is very important. One extra reminder: back up your important data before running fsck, because the repair process itself can sometimes discard or relocate damaged data while it puts the file system back together, so save the files that matter first, just in case the repair sacrifices something in its attempt to fix the file system. The sketch below shows the manual invocation.
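A minimal sketch of running fsck by hand on a hypothetical partition:

    # Unmount the partition first so the check does not run against a live file system
    sudo umount /dev/sdb1

    # Interactive check and repair
    sudo fsck /dev/sdb1

    # Answer "yes" to every repair prompt automatically
    sudo fsck -y /dev/sdb1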
Which brings us to the configuration and management of swap space. We've touched on this before, so this is partly a re-review; it was a concept that took me a moment to wrap my head around as well. Swap is virtual memory: space on your hard drive that gets used when physical memory is maxed out. Your RAM is effectively part of your computer's working capacity, which is why computer nerds love comparing it; somebody commented on the channel that their first computer had 24 megabytes of RAM and now they have 64 gigabytes, and you know a system is old when RAM is measured in megabytes rather than gigabytes. The reason people care is that more RAM makes everything run much faster, and on modern machines with 64 or 128 GB of RAM you're probably not going to worry much about swap space. But if RAM is limited, say 4 or 8 GB, swap becomes very important, because it serves as virtual memory to hold live processes so you don't lose anything, especially when the system goes into hibernation or sleep and you want to pick up where you left off; that state lives either in RAM or in the swap space. Swap is a buffer to prevent out-of-memory errors, the situation where the system can't run because physical memory is exhausted: it takes whatever spills over the threshold of physical RAM and holds onto it by offloading inactive processes. If you have too many things open, the RAM is dedicated to what you're actively using and everything in the background gets pushed into swap.

If you want a swap partition, you create it with fdisk. You open the disk you're working with, in this example /dev/sdb, our secondary or external disk, which is where the swap partition will live (replace that with whatever identifier you need), and then walk through fdisk interactively: press n to create a new partition, p for a primary partition, give it a partition number, say 2, accept the default first sector and the default last sector, and then change the partition type, which is the key part: press t, for type, and enter 82, which is the code for Linux swap. Then press w and Enter to write all of the changes to the disk and exit fdisk, so the work you did is saved. The session below sketches those steps.
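A sketch of that fdisk session; the disk and partition number are just the example values from this walkthrough:

    sudo fdisk /dev/sdb
    #   n        new partition
    #   p        primary
    #   2        partition number used in the example
    #   <Enter>  accept the default first sector
    #   <Enter>  accept the default last sector
    #   t        change the partition type
    #   82       the code for a Linux swap partition (on an MBR disk)
    #   w        write the changes and exit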
That's how you create it, so review that if you need to, because I'm not going to keep repeating myself; I sometimes forget I'm recording and beat a dead horse trying to say the same thing three different ways so it connects, and then I remember you can just rewind and replay it, so my bad, and excuse me for trying to help you learn. Moving on. Now you need to format that partition as swap space: take the partition you just made with fdisk and run mkswap, the make-swap command, on it to format it as swap. Then you turn it on: you've used fdisk to create the partition, mkswap to format it as swap, and now swapon to actually activate it as the swap partition. Very simple, and I'll leave the slide up for a moment in case you want a screenshot: one, two, three, moving on. To verify the swap space you use the --show option with swapon, which lists the active swap spaces, so you can confirm it has been created, it's turned on, and it's running as it should.

Creating a swap file is similar to having a swap partition. You create the file with fallocate and the -l option, a lowercase L, not the number one: sudo fallocate -l 1G followed by the swap file path allocates one gigabyte to that file, so you have a swap file instead of a swap partition; fallocate reserves the space and -l 1G is the size you're giving it. Then you run chmod 600 on the swap file, changing its mode so the permissions are read and write for the owner only, which lets it be read and written as needed. Then you make it swap, exactly as we did with the partition, by running mkswap on the swap file path, and turn it on with swapon for that file. And once it's on, you confirm it's active with sudo swapon --show, which shows whether that specific swap file has been activated. The commands are collected below.
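Collected in one place, with the 1 GB size and the conventional /swapfile path from the walkthrough (adjust both to taste):

    # Swap partition: format and enable the partition created with fdisk
    sudo mkswap /dev/sdb2
    sudo swapon /dev/sdb2

    # Swap file: allocate, lock down permissions, format, enable
    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile        # read/write for the owner only
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # Verify that the swap space is active
    sudo swapon --show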
The last step with your swap partition or swap file is to make sure it actually loads every time you boot the computer, which means adding it to your /etc/fstab file, as we've discussed so far. You can edit /etc/fstab with our trusty nano text editor; of course you need sudo so you can actually write to it and save it. Then you add the swap partition to fstab using the formatting we've already covered: the device or path, the mount point, the file system type (which is swap), then whether you want it dumped, which is no, and whether you want it checked by fsck at boot, which is also no. Swap needs neither, because its contents are discarded every time the computer reboots; that's the whole purpose of swap space, it isn't meant to store persistent data. So you don't dump it and you don't fsck it, but you do need the device or file path, the mount point, and the type, which is swap.

Notice that the swap partition entry and the swap file entry look almost identical; the only difference is whether you give it the path to the partition or the path to the swap file. The first field is the swap partition or swap file itself, the mount point is none because swap isn't mounted anywhere, the type is swap, sw is the option, and then 0 0 for dump and fsck. You can screenshot this one too. Once you've written everything into /etc/fstab inside nano, save the changes with Ctrl+O, press Enter, and then Ctrl+X to exit, and you can move on.

In summary, creating and configuring swap space, whether as a partition or as a file, is essential for maintaining system performance, and by following these steps you can make sure your system has adequate swap space available. It's a good idea to have it even if the machine has a massive amount of RAM installed: macOS and Windows both use swap space whether the machine has 16 GB, 32 GB, or more. So if you're going to be a Linux administrator working on systems that don't have a GUI and don't come preconfigured with these things, make sure swap space has been configured, and if it hasn't, configure it the way we just talked about. We'll review all of this again in the practical section.

And here's your little cheat sheet: to create a swap partition, create the partition with fdisk, format it as swap space with mkswap, and turn it on with swapon. For a swap file, it's the same idea: fallocate -l 1G to create the file, sudo chmod 600 on the swap file, mkswap to make it a swap file, and then swapon to turn it on.
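Pulling the /etc/fstab discussion together, here's a sketch of what the two entries might look like, again using the example partition /dev/sdb2 and swap file /swapfile:

# /etc/fstab — swap entries (device and path are examples)
# <device>     <mount point>  <type>  <options>  <dump>  <pass>
/dev/sdb2      none           swap    sw         0       0
/swapfile      none           swap    sw         0       0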
Once you've created the swap space, you have to manage it. Monitoring swap usage and making sure the configuration persists across reboots and restarts is an important part of system administration, and these are the detailed steps for checking usage.

To check swap usage, run swapon -s. It displays a summary of swap space usage, listing all of the active swap areas. In the example output here, you have the swap file and /dev/sdb2, the swap partition, along with their sizes; the swap file has about 2,000 kilobytes in use with a priority of -2, while the partition has a priority of -1 and hasn't been used at all, because with 1 GB of space it simply hasn't been needed yet. The fields break down like this: the path to the swap file or partition, its type (file or partition), its total size, the amount used, and the priority of that swap area. Higher priority values are used first, so in this example the -1 entry outranks the -2 entry.

Another way to check usage is the free command. free -h gives you a human-readable view of memory and swap usage. The output has a line for memory and a line for swap (whatever swap partitions or swap files you've allocated), showing the total amount, the used amount, how much is free, any shared portion, the buff/cache column (the buffers and cache we talked about), and what's available. The data sitting in the shared portion and in the buffers and cache is exactly what we were discussing around unmounting: you want to unmount properly and shut systems down properly so that data isn't lost, because losing, say, 5 GB out of 15 GB would be catastrophic, since it represents a large share of the data on that system. So free shows you the free memory as well as the free swap space available on your computer. The field breakdown: total memory, used memory, free memory, shared (memory used by tmpfs, the temporary file system), buff/cache (memory used by kernel buffers and the page cache), and available, which is the memory the system can still use once the shared portion and the buffers and cache are accounted for.
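A quick sketch of the two checks just described; both commands are read-only, so you can run them any time:

swapon -s       # summary of all active swap areas: name, type, size, used, priority
swapon --show   # same information in the newer output format
free -h         # human-readable memory and swap usage: total, used, free, shared, buff/cache, available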
And this is how you handle the permanent swap configuration: to make sure the swap space is available after the system reboots, you add it to /etc/fstab, the file that contains this partition information, as we've discussed. To edit it, use sudo and nano, go into the file, and make sure the swap partition entry is laid out the way we've already covered; this is just part of the permanent swap configuration, and you should already know what each field stands for, so I won't explain it again. You can do the one-two-three screenshot or pause here and we'll move on. Adding the entry for a swap file is the exact same concept with the exact same breakdown, and again the point of the redundancy is that these actions fall into different categories: managing swap files and managing swap partitions both have a permanence piece, and in both cases that permanence lives in the /etc/fstab configuration file. Then you save and exit; since you're inside nano, make sure you save, and even if you try to exit without saving it will prompt you, so just press Y and Enter and your changes are written to the file.

Activating it works exactly the same way as before: sudo swapon -a activates all of the swap entries inside /etc/fstab. So you can run sudo swapon and give it the path or identifier of a specific swap space, or just use -a to activate every swap configuration listed in fstab, and then run --show to see the status of everything and whether it's active and correctly configured.

In summary, we monitor and configure swap space so the system can handle memory-intensive work and remain stable under heavy load, which matters in high-traffic environments with a lot of users. Even if you have a massive amount of physical RAM, you still want a good amount of swap space when a lot of data is being processed by a lot of different users. And here is your little cheat sheet: swapon -s lists all the active swap areas, free -h gives a human-readable summary of memory and swap usage, permanent entries go in the /etc/fstab configuration so swap loads on every reboot, and swapon -a activates all of the swap entries in the fstab file.
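To make that last piece of the cheat sheet concrete, a two-line sketch of activating and verifying everything defined in /etc/fstab:

sudo swapon -a   # activate every swap entry listed in /etc/fstab
swapon --show    # confirm the partition and/or file are now active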
All right, real quick, let's go into process and service management. You need to understand the difference between processes, daemons, and services.

A process is an instance of a program that's running in memory. Each process has its own PID (process ID) and a PPID (parent process ID), and a process can be in various states, three in particular: running, sleeping, and zombie.

A daemon is a background process that runs without an interactive user; it's basically what runs behind the scenes so your machine operates the way it does. Daemons start at boot and keep running while the computer is on so that the operating system functions can work the way they need to. For example, a web server daemon like httpd: the d stands for daemon, and the same goes for anything ending in d, like the secure shell daemon sshd or the cron daemon crond. These are the background processes attached to each of those functions, the SSH process, the HTTP process, and so on.

A service is the higher-level concept: it groups one or more daemons together to provide a specific functionality. The httpd service runs the Apache HTTP server daemon, for example; that's the thing to understand. So a service is a combination of one or more daemons providing a specific functionality, and services are typically managed with systemd or SysVinit, our initialization systems from earlier. For the most part it's going to be systemd on modern Linux machines; SysVinit is a bit of a legacy system, and which one you have depends on the distribution and version of Linux you're running. So: processes, daemons, and services.

These are the commands for managing processes. To view the list of processes, just type ps and press Enter; by itself it shows a snapshot of the processes running in your current shell session. ps aux shows all the processes running on the system with details like their PIDs, the user running them, and the CPU and memory each one is using. ps -ef displays all processes with additional details like the parent process ID and the full command line. These are the kinds of things that get used heavily in security analysis, incident response, and the pen testing exercises we run. The command line that launched a process might have been executed by the system, by the parent process that spawned it, or by a user: when a user clicks on something, a command actually runs in the background, and that command has a command-line entry. Even if you double-click to open something, there's a command-line entry running behind it that launches the process, so whether or not you personally use the command line, every process has a full command-line entry associated with it.
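A minimal sketch of the ps commands just described; the grep filter on the last line is simply a common add-on for finding one process by name, and sshd there is just an example:

ps                  # snapshot of processes in the current shell session
ps aux              # every process: user, PID, %CPU, %MEM, and the command
ps -ef              # every process: PID, PPID, and the full command line
ps aux | grep sshd  # filter the listing for a specific process by name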
For a lot of people who can't read that output it looks like jargon, but in many cases it's just the command associated with the process and the path of the binary that launched it and is actually running. Remember, all of these processes have binaries, system binaries, associated with them, and each of those system binaries lives in the /bin or /sbin directories of our file system. Keep all of that in mind. You don't necessarily need to know much more than the details presented here, and as we go through the practical portions of this training series we'll look at these things in greater detail and run a lot of the common ps options so you can see what the output looks like in the terminal and really get it into your system, your personal system as the person watching this, not your computer system.

Moving on: real-time process monitoring happens with top and htop. top is typically available on all Linux systems, and it's also available on macOS, since macOS runs a Unix-like operating system underneath. It's a dynamic display of everything that's going on: when you type top and press Enter, it shows everything currently running, and the ordering of the lines changes in real time as processes come to the forefront or start using more CPU or memory. You get a good idea of which specific process is using how much of your memory in real time, and this is one of the places you'd look to spot something running that you don't recognize and decide whether you need to kill it. So beyond the list of processes we saw with ps, top gives you a real-time, dynamic view sorted by usage: the list updates live and shows the CPU and memory each process is using. You can press q to quit, k to kill a process, and r to renice, which means adjusting the priority of a process. htop, if it's installed, is essentially top with a user-friendly interface. top is literally lines of entries on a black screen, which can be overwhelming if you don't know what you're looking at; htop is a more visually pleasing version that still gives you real-time monitoring of all running processes, with color-coded information and a better navigation interface, so you can prioritize things more easily based on the color coding.
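A quick sketch of the interactive monitors, with the keys mentioned above noted as comments; the apt line assumes a Debian/Ubuntu-style system and is only one way to get htop:

top                    # real-time view of processes sorted by usage
                       #   q = quit, k = kill (top prompts for the PID), r = renice (change priority)
htop                   # friendlier, color-coded alternative, if it's installed
sudo apt install htop  # example install command on an apt-based distribution (assumption)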
Once you've identified a process that needs to be killed, for example something that looks shady or is taking up too much CPU or memory, you can kill it with kill or pkill (process kill). kill is a very simple command: you run kill followed by the PID, and it sends a termination signal to that process ID asking it to exit. kill -9 PID sends the SIGKILL signal to forcefully terminate the process if it's taking too long to shut down or is slowing something else down; it's a forceful termination, very similar to the Windows Task Manager, where you open Task Manager and force quit a process that won't close, and no matter what's going on it just kills it. pkill lets you kill by name: kill requires the process ID, pkill can take the process name. You can also terminate all instances of a specified process with the killall command, which may need sudo depending on who owns the processes: sudo killall followed by the process name. For example, say you have ten Chrome windows open, which is pretty much what my computer looks like all the time, since I keep a separate Chrome window for each Google account dedicated to a different part of my life, and sometimes it gets overwhelming and you just want to close Chrome entirely and start from scratch. On macOS I can right-click the Chrome icon at the bottom of the screen and quit, and it closes every instance. But on the command line you don't get a right-click, so if you see ten instances of a process and you want to close them all, whether to free up CPU or because you don't recognize it and it might be spyware, you run sudo killall with the process name and it kills every one of those processes and frees up whatever memory and CPU they were using.

As we manage processes, we also need to learn how to manage services, and managing services happens with systemctl, which we actually went through a little earlier. systemctl is one of the big things that comes with systemd, the more modern version of the init process for Linux, and it's the primary command for managing services in that environment. Starting or stopping a service is sudo systemctl start <service> or sudo systemctl stop <service>, very simple. Enabling or disabling it is much the same: sudo systemctl enable or disable the service. Checking it is systemctl status <service>, which shows whether it's active along with any logs from its recent activity. Restarting or reloading is the same pattern: restart the service or reload the service. It's very intuitive; I love the syntax of this particular command, because if you want to start something you literally type start, and if you want to stop it you type stop. It's probably one of my favorite commands of everything we're running, because the syntax is just real language, real words that you're using.
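Here's a compact sketch covering both halves of what we just went over, killing processes and managing services with systemctl. The PID 1234, the process name firefox, and the service name nginx are all placeholders; use whatever ps, top, or your own system gives you:

# processes
kill 1234               # send SIGTERM to PID 1234, asking it to exit cleanly
kill -9 1234            # send SIGKILL to forcefully terminate it
pkill firefox           # kill by process name instead of PID
sudo killall firefox    # terminate every instance of the named process

# services (systemd)
sudo systemctl start nginx     # start the service
sudo systemctl stop nginx      # stop it
sudo systemctl enable nginx    # have it start automatically at boot
systemctl status nginx         # is it active? plus recent log lines
sudo systemctl restart nginx   # stop and start it again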
So this is kind of an easy one to remember and to deal with: systemctl manages services, ps manages processes. Keep that distinction in mind; these are the kinds of things you need to internalize as you move forward. When you want to manage a service, you use systemctl.

Now, the service command is used on systems that run SysVinit. SysVinit is the older init system that systemd replaced: systemd is in the modern distributions of Linux, SysVinit is in the older versions and distributions. It's fairly similar to what you just went through with systemctl, except the syntax is a little different: sudo service, then the name of the service, then start, stop, status, or restart. Those are the basic commands that come with SysVinit. In the practical portion of our exercises we're not going to use the SysVinit commands; we'll use the systemctl commands, because we're installing a current, up-to-date version of Linux. I don't want any security vulnerabilities loaded onto my computer as I go through the exercises, so I'll use the most recent version from the distribution's website, which means we'll end up using systemctl. But these are the SysVinit commands for reference, and there's also a series of commands in the Google document of notes I've created for this course. If all else fails, you can go to something like Gemini or ChatGPT and say, "I'm running a system that uses SysVinit as its service manager, I need to do X, Y, and Z on this computer, what commands do I need to run," and it will tell you what to do. The main things to remember are that SysVinit is for older versions or older distributions and you manage services there accordingly, while systemd comes with the modern versions and the modern distributions of Linux.
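For SysVinit machines, the same ideas look like this; nginx is again just a placeholder service name, and the comments show the systemctl equivalent for comparison:

sudo service nginx start     # equivalent to: sudo systemctl start nginx
sudo service nginx stop      #                sudo systemctl stop nginx
sudo service nginx status    #                systemctl status nginx
sudo service nginx restart   #                sudo systemctl restart nginx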
Then we have job scheduling. Cron jobs are for repeated tasks that you need to schedule, while the at command is used for a single command that runs one time. So anything recurring goes through cron, which essentially handles all of your recurring scheduled tasks for you.

Cron jobs are defined in crontab files, and to edit the current user's crontab you open it with crontab -e. Inside, each entry follows a format with five scheduling fields, shown as asterisks when unused: minute, hour, day of the month, month, and day of the week. In the example here, the job doesn't run every minute; the hour field is 5, so it runs at the fifth hour of the day, 5:00 a.m., and the day-of-week field is 1, which is Monday. The day of the month and the month are left as asterisks, so they don't matter: the job runs at 5:00 a.m. every Monday, making it a weekly job. There are a lot of good crontab editors online: you tell them the path of the job you want to run and the time, day, or month you want it to run, and they generate this format for you. But it really isn't complicated, so you should be able to wrap your head around it. In the day-of-week field, 1 represents Monday (0 and 7 both mean Sunday). The hour field runs from 0 to 23, so 5 means 5:00 a.m. The day of the month runs from 1 to 31; a day that doesn't exist in a given month, like the 31st in February, simply never matches. The month runs from 1 to 12, with 1 being January, 2 February, and so on. You can look at and modify those cron jobs, those scheduled tasks, with crontab -e for whatever user you're logged in as, and if you just want to list them instead of opening the file, crontab -l lists all of the cron jobs scheduled for the current user. That's recurring jobs with cron.

Now, if you have a one-time job, you can schedule it with at. For example, you run echo followed by the command you want to run, then pipe that into at with a time. The time can be a specific hour or a relative time, such as at now + 5 minutes or at now + 5 hours. An example would be echoing the path to a script and piping it into at 10am: the pipe takes the output of echo, which is the path to that script, and hands it to at as the command to run, so at 10:00 a.m. at will run that command once.
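A sketch of both schedulers; the script path /home/user/backup.sh is just a placeholder, the job number 3 is an example, and atq/atrm are the queue commands covered right after this:

# crontab entry (added via `crontab -e`): minute hour day-of-month month day-of-week
0 5 * * 1 /home/user/backup.sh                    # 5:00 a.m. every Monday

# one-time jobs with at
echo "/home/user/backup.sh" | at 10am             # run the script once at 10:00 a.m.
echo "/home/user/backup.sh" | at now + 5 minutes  # run it once, five minutes from now
atq                                               # list pending at jobs and their job numbers
atrm 3                                            # remove the at job with job number 3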
That's what at does: it runs something once for you. If you need it to run daily or more than once, you do it with cron jobs; if you're just trying to run it once, you run it with the at command. You can look at the jobs scheduled with at by running atq, and you can remove them with atrm followed by the job number. atq brings up all of the jobs that have been scheduled with at along with a job number for each one, and if you need to remove any of them you run atrm (at remove) with that job number, and just like that the scheduled task is gone.

This training series is sponsored by Hackaholics Anonymous. To get the supporting materials for this series, like the 900-page slideshow, the 200-page notes document, and all of the pre-made shell scripts, consider joining the agent tier of Hackaholics Anonymous. You'll also get monthly Python automations, exclusive content, and direct access to me via Discord. Join Hackaholics Anonymous today.

By Amjad Izhar
Contact: amjad.izhar@gmail.com
https://amjadizhar.blog

