This comprehensive guide explores essential Linux system administration tasks, focusing on security, resource management, and cloud technologies. It covers network configuration, firewall management using ufw and iptables, and secure communication via SSH and GPG. User authentication methods, including password-based and key-based authentication, are examined. Furthermore, the guide details file system security, including file permissions, Access Control Lists (ACLs), and the use of chroot jails for isolating processes. Disk usage analysis and cleanup procedures are covered, along with system performance monitoring tools such as top, free, and vmstat. Finally, it provides an introduction to virtualization and cloud computing concepts, Docker, and container orchestration using Kubernetes and Docker Swarm.
Network Fundamentals and Security: A Comprehensive Study Guide
Study Guide Outline
I. Basic Networking Concepts * IP Addressing: IPv4 vs IPv6 * Subnets and Subnet Masks: Calculation, Network vs Host Bits * Domain Name System (DNS): Resolution Process, Hierarchy (Root Servers, TLD Servers, Authoritative Servers)
II. Linux Network Configuration * Interface Configuration: ifconfig (Legacy) vs ip (Modern) * Network Manager Command Line Interface (NMCLI): Connection Management, Wi-Fi Management
III. Network Troubleshooting * Ping: Testing Reachability, Packet Loss * Traceroute: Path Analysis, Hop Count * Netstat & SS: Monitoring Network Connections, Listening Ports
IV. Network Security Fundamentals * Firewall Management: Uncomplicated Firewall (UFW), iptables * AppArmor: Application Security Policies * Password Management: Best Practices, Multi-Factor Authentication (MFA)
V. Encryption and Key Management * GPG (GNU Privacy Guard): Public Key Cryptography, Encryption/Decryption, Key Management (Import/Export)
VI. System Monitoring and Logging * System Logging: Syslog, Authentication Logs, Kernel Logs * Disk Usage Analysis: df, du * Process Monitoring: top, htop * Memory Monitoring: free, vmstat
VII. Virtualization and Cloud Computing * Virtualization Concepts: Virtual Machines (VMs), Hypervisors (Type 1 vs Type 2), KVM * Containerization: Docker, Docker Commands
VIII. VM/Container Management Tools * libvirt: virsh, virt-install * Docker: Docker CLI
Quiz: Short Answer Questions
- What is the primary difference between IPv4 and IPv6 addresses?
- Explain the purpose of a subnet mask.
- Describe the steps in the DNS resolution process.
- What are the key differences between using ifconfig and ip commands in Linux?
- How does the ping command help in network troubleshooting?
- What information does the traceroute command provide about a network route?
- What is the role of the Uncomplicated Firewall (UFW) in Linux systems?
- Explain the purpose of Multi-Factor Authentication (MFA).
- Describe the difference between Type 1 and Type 2 hypervisors.
- What is the purpose of Docker containers?
Answer Key: Short Answer Questions
- IPv4 uses a 32-bit numerical label while IPv6 uses a 128-bit alphanumeric label. IPv6 was developed to overcome the address limitations of IPv4.
- A subnet mask is used to divide an IP address into network and host portions, determining how many addresses are available within a network. It also defines which part of the IP address identifies the network and which part identifies the host.
- The DNS resolution process begins with a query from a client to a DNS resolver, which may recursively query root servers, TLD servers, and authoritative servers until the IP address corresponding to the domain name is found. The resolver then returns the IP address to the client.
- ifconfig is a legacy tool for network interface configuration, while ip is its modern replacement (though ifconfig is still in use on some systems). ip is part of the iproute2 package and offers more comprehensive functionality and features than ifconfig.
- ping tests the reachability of a host by sending ICMP packets and measuring the round trip time for those packets. This helps identify network connectivity issues and packet loss.
- traceroute identifies the path a packet takes to reach a destination, including each hop (router) along the way. It also measures the time it takes to reach each hop, helping pinpoint delays or failures.
- UFW is a user-friendly interface for managing iptables firewall rules in Linux. It simplifies the process of configuring firewall rules to allow or deny network traffic based on specific criteria.
- MFA enhances password-based authentication by requiring users to provide multiple verification factors such as passwords and one-time codes sent to a phone. This reduces the risk of unauthorized access even if the password is stolen.
- A Type 1 hypervisor (bare metal) runs directly on the hardware, offering better performance, while a Type 2 hypervisor runs on top of an existing operating system. Type 2 hypervisors tend to be easier to install.
- Docker containers package applications and their dependencies into portable units that can run consistently across different environments. This ensures that the application behaves the same regardless of the host system.
Essay Format Questions
- Discuss the evolution of network configuration tools in Linux, comparing and contrasting ifconfig and ip. Explain the advantages of using ip over ifconfig in modern network management.
- Explain the significance of the Domain Name System (DNS) in the context of network communication. Describe the hierarchy of DNS servers and the steps involved in resolving a domain name to an IP address. What security vulnerabilities are associated with DNS?
- Analyze the role of firewalls in network security and discuss the advantages and disadvantages of using ufw and iptables for managing firewall rules. In what scenarios might an administrator prefer one over the other?
- Compare and contrast Type 1 and Type 2 hypervisors. Discuss the advantages and disadvantages of each type, providing specific examples of virtualization technologies that fall under each category. In what scenarios would you recommend each type of hypervisor?
- Explain the benefits of containerization using Docker. Discuss the key Docker commands and concepts, such as Docker images, containers, and Dockerfiles. How do Docker containers improve application deployment and scalability?
Glossary of Key Terms
- IP Address: A unique numerical identifier assigned to each device connected to a network, enabling communication.
- Subnet Mask: A mechanism for dividing an IP address into network and host portions, defining network size.
- DNS (Domain Name System): A hierarchical system that translates domain names into IP addresses.
- Resolver: A DNS server that performs recursive queries to resolve domain names.
- TLD (Top-Level Domain) Server: DNS servers for top-level domains like .com, .org, and .net.
- Authoritative DNS Server: A DNS server that holds the definitive answer for a domain’s DNS records.
- ifconfig: A legacy command-line tool for configuring network interfaces on Linux.
- ip: A modern command-line tool for configuring network interfaces on Linux, part of the iproute2 package.
- NMCLI (Network Manager Command Line Interface): A command-line tool for managing network connections in Linux.
- Ping: A network utility used to test the reachability of a host.
- Traceroute: A network utility used to trace the path a packet takes to a destination.
- Netstat: A command-line tool for displaying network connections, routing tables, and interface statistics.
- SS (Socket Statistics): A modern command-line tool that provides similar functionality to netstat.
- UFW (Uncomplicated Firewall): A user-friendly interface for managing firewall rules in Linux.
- iptables: A powerful firewall utility in Linux for configuring packet filtering rules.
- AppArmor: A Linux kernel security module that allows administrators to restrict application capabilities.
- MFA (Multi-Factor Authentication): A security measure that requires users to provide multiple verification factors.
- GPG (GNU Privacy Guard): A tool for encrypting and decrypting data using public key cryptography.
- Hypervisor: Software that creates and runs virtual machines (VMs).
- Virtual Machine (VM): A software-based emulation of a physical computer.
- Type 1 Hypervisor: A bare-metal hypervisor that runs directly on the hardware.
- Type 2 Hypervisor: A hosted hypervisor that runs on top of an existing operating system.
- KVM (Kernel-based Virtual Machine): A type 1 hypervisor integrated into the Linux kernel.
- virsh: A command-line tool for interacting with KVM virtual machines.
- VirtualBox: A popular type 2 hypervisor for running virtual machines.
- Containerization: A virtualization method that isolates applications and their dependencies into portable containers.
- Docker: A popular containerization platform for building, shipping, and running applications in containers.
- Image (Docker): An immutable, packaged snapshot of an application and its dependencies.
- Container (Docker): A running instance of a Docker image.
- libvirt: A toolkit providing APIs and management tools for virtualization environments.
- df: Displays disk space usage for file systems.
- du: Displays disk space usage for files and directories.
- Top: Displays a dynamic real-time view of running processes.
- Htop: Displays a dynamic real-time view of running processes with a user-friendly, colorful interface.
- Free: Displays the amount of free and used memory in the system.
- Vmstat: Displays information about virtual memory, system processes, and CPU activity.
- Syslog: A standard protocol for logging system events and messages.
- Chroot: An operation that changes the apparent root directory for a running process and its children.
- ACL (Access Control List): A list of permissions attached to an object. It specifies which users or groups have access to the object and what operations they are allowed to perform.
Linux System Administration and Networking Fundamentals
Briefing Document: Networking and System Administration Fundamentals
This document summarizes core concepts and tools related to networking, security, and system administration within a Linux environment. The information is derived from a training series focusing on fundamental principles and practical commands.
I. Networking Fundamentals
- IP Addressing: IP addresses are unique identifiers for devices on a network, enabling communication. “IP addresses are unique identifiers assigned to devices that are connected to a network. They allow devices to communicate with each other and are very important for network management and communication.”
- IPv4: The original IP addressing scheme, using 32-bit numerical labels. Limited to approximately 4.3 billion unique addresses. The standard dotted notation is four numbers separated by dots, such as 192.168.1.1, with each octet ranging from 0 to 255.
- Subnet Masks: Define the network portion and host portion of an IP address. Example: for 192.168.1.1 with a subnet mask of 255.255.255.0, “the first three octets are exactly the same, which means that this portion, 192.168.1, represents the network, and then the last piece would be the actual host.” A subnet mask of 255.255.255.0 means the first three octets represent the network, and the last octet identifies the host.
- Reserved Addresses: Two addresses within a subnet are always reserved: the network address (often .0) and the broadcast address (often .255).
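As a quick sanity check on the host arithmetic above, the shell can evaluate the formula directly. A minimal sketch using the example networks from this section; the ipcalc utility, if installed, prints the same numbers along with the reserved network and broadcast addresses:

```bash
# Usable hosts = 2^(host bits) - 2 (network and broadcast addresses are reserved)
echo $(( 2**8 - 2 ))    # 255.255.255.0 (/24): 254 hosts
echo $(( 2**16 - 2 ))   # 255.255.0.0   (/16): 65534 hosts

# If ipcalc is available, it shows the full breakdown for a network:
# ipcalc 192.168.1.0/24
```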
- DNS (Domain Name System): Translates domain names (e.g., google.com) into IP addresses. This process involves a hierarchy of DNS servers.
- The user’s computer sends a DNS query to a “DNS resolver.” The resolver then contacts root DNS servers.
- “The resolver will contact one of the root DNS servers, and these are at the top of the DNS hierarchy”; they direct queries toward the servers for .com, .org, .net, and the other top-level domains.
- Root servers direct the query to the appropriate Top-Level Domain (TLD) server (e.g., .com, .org). The TLD server then finds the IP address.
- “The authoritative server is going to be at this particular location, so that gets sent back to the local DNS server.”
- The DNS resolver receives the IP address and provides it to the user’s computer, loading the website.
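The resolution chain just described can be observed from the command line. A minimal sketch, assuming the dig and nslookup tools (from the dnsutils/bind-utils package) are installed:

```bash
dig +short example.com     # ask the configured resolver for the final A record
dig +trace example.com     # walk the hierarchy: root -> .com TLD -> authoritative
nslookup example.com       # the same lookup with the older tool
```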
- DHCP (Dynamic Host Configuration Protocol): Automatically assigns IP addresses to devices on a network.
II. Network Interface Configuration
- ifconfig (Interface Configuration): A command-line utility used to configure network interfaces on Unix-based systems (Linux, macOS). Allows viewing and assigning IP addresses, controlling interface states (up/down).
- Despite being “deprecated supposedly,” ifconfig remains in use on some systems. The command ifconfig without arguments lists all network interfaces and their configurations.
- “The simplest version of the command is to just type ifconfig and press Enter, and it lists all the network interfaces that are on your system along with all of the current configurations, meaning the IP addresses that are assigned to them, if there are any, network masks or broadcast addresses, and everything else that would be appropriate for that particular configuration.”
- ip (iproute2): The modern replacement for ifconfig. Provides similar functionality for managing network interfaces. ip a or ip address displays network interfaces and their details; the command ip a produces output very similar to ifconfig.
- nmcli (NetworkManager Command-Line Interface): A command-line tool for managing network connections on Linux.
- nmcli connection up <interface>/nmcli connection down <interface>: Activates or deactivates a network interface.
- nmcli device status: Displays the status of network devices (connected, disconnected, unavailable).
- nmcli device wifi list: Lists available Wi-Fi networks, including SSIDs, signal strength, and security type.
- nmcli device wifi connect <SSID> password <password>: Connects to a Wi-Fi network.
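Beyond the commands above, nmcli can also assign a static IPv4 address to a connection. A hedged sketch; the connection name "Wired connection 1" and all addresses are placeholders for your own values:

```bash
nmcli connection modify "Wired connection 1" \
    ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns "8.8.8.8 1.1.1.1"
nmcli connection up "Wired connection 1"   # re-activate to apply the change
```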
III. Network Troubleshooting Tools
- ping: Tests the reachability of a host (computer or server) by sending ICMP packets. Measures round-trip time.
- “You basically ping the IP address or you ping the website, and you can also measure the round-trip time for the messages that are sent to that host, to establish how strong the connection is or how quick that particular host is to respond to you.”
- The -c option specifies the number of packets to send.
- The -i option sets the interval between packets.
- The -f option floods the target with packets (usually requires root privileges).
- traceroute: Tracks the route a packet takes to reach a destination by incrementing the “time to live” (TTL) value. Helps identify delays or failures along the route.
- The -m option specifies the maximum number of hops.
- The -p option sets the base destination port used for the probes.
- netstat (Network Statistics): Displays network-related information, including connections, routing tables, and interface statistics.
- Options include: -t (TCP ports), -u (UDP ports), -l (listening ports), and -n (numerical addresses).
- ss (Socket Statistics): A modern alternative to netstat, offering better performance and more detailed output. Part of the iproute2 suite.
- Its options are very similar to netstat’s, such as displaying TCP, UDP, and listening ports.
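Put together, a typical troubleshooting pass might look like the following sketch (example.com stands in for the host under investigation):

```bash
ping -c 4 -i 0.5 example.com    # four ICMP probes, half a second apart
traceroute -m 20 example.com    # stop after at most 20 hops
ss -tuln                        # listening TCP/UDP sockets, numeric output
ss -tn state established        # currently established TCP connections
```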
IV. Firewall Management
- ufw (Uncomplicated Firewall): A user-friendly command-line interface for managing iptables firewall rules.
- sudo ufw enable: Activates the firewall.
- sudo ufw disable: Deactivates the firewall.
- sudo ufw allow <service>: Allows traffic for a specific service (e.g., SSH).
- sudo ufw deny <port>: Blocks traffic on a specific port.
- sudo ufw status: Shows the current firewall status and active rules.
- sudo ufw allow from <IP address> to any port <port>: Allows traffic from a specific IP address to a specific port.
- sudo ufw logging on/off: Enables or disables firewall logging.
- sudo ufw default allow incoming: Allows all incoming traffic by default.
- sudo ufw default deny incoming: Denies all incoming traffic by default.
- sudo ufw default allow outgoing: Allows all outgoing traffic by default.
- sudo ufw default deny outgoing: Denies all outgoing traffic by default.
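A common way to combine these commands is a deny-by-default posture that only opens what is needed. A minimal sketch; the address 192.168.1.50 and port 3306 are illustrative:

```bash
sudo ufw default deny incoming     # close everything inbound by default
sudo ufw default allow outgoing
sudo ufw allow 22/tcp              # keep SSH reachable before enabling!
sudo ufw allow from 192.168.1.50 to any port 3306   # one trusted client only
sudo ufw enable
sudo ufw status verbose            # confirm the resulting rule set
```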
- iptables: A more complex, low-level firewall management tool.
- Uses chains (INPUT, OUTPUT, FORWARD) to define packet filtering rules.
- sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE: Enables NAT masquerading, hiding internal IP addresses. “It’ll change the source IP address when it gets sent out to the world to whatever the masqueraded (disguised) IP address would be.”
- The mangle table allows for packet alteration, such as changing the type of service.
- sudo iptables -L: Lists the current rules for the filter table.
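For comparison, the same deny-by-default idea expressed directly in iptables might look like this sketch (add the ACCEPT rules before switching the policy, or a remote SSH session can be cut off):

```bash
sudo iptables -A INPUT -i lo -j ACCEPT                                      # loopback
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # replies
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT                          # SSH
sudo iptables -P INPUT DROP                                                 # default policy
sudo iptables -L -v -n                                                      # verify
```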
V. Security Enhancements
- Chroot Jails: Creates an isolated environment for a process, limiting its access to the file system.
- “effectively you’re isolating a subset of the file system and you create what’s known as the chroot jail”.
- Enhances security by restricting the damage caused by untrusted programs.
- Useful for testing, development, and system recovery.
- Steps include creating a directory, populating it with necessary binaries and libraries, and using the chroot command.
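Those steps might look like the following sketch; /srv/jail is an arbitrary example path, and ldd is used to discover which shared libraries bash needs:

```bash
mkdir -p /srv/jail/bin
cp /bin/bash /srv/jail/bin/

# Copy every shared library bash links against, preserving directory layout
for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
    cp --parents "$lib" /srv/jail/
done

sudo chroot /srv/jail /bin/bash   # inside, /srv/jail now appears as /
```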
- File Permissions and Ownership: Controls access to files and directories based on user, group, and others.
- Permissions: Read (r), Write (w), Execute (x). Numerical values: r=4, w=2, x=1.
- chmod: Command to change file permissions. Can use symbolic notation (e.g., chmod u+rwx file.txt) or numerical notation.
- chown: Command to change file ownership.
- Access Control Lists (ACLs): Provides fine-grained control over file and directory permissions, allowing specific access levels for multiple users and groups.
- “Access Control Lists are a way to provide more fine-grained control over file and directory permissions.”
- setfacl: Sets ACL entries. Options include -m (modify), -x (remove a specific entry), -b (remove all entries), and -d (operate on a directory’s default ACL).
- getfacl: Views ACL entries.
- AppArmor: A security module that confines programs to a limited set of resources.
- “AppArmor is a security module; it’s actually installed natively in Ubuntu. It enhances the security of an application or a set of applications. It works by creating profiles that will confine the actions of the application, or the group of applications you’re protecting, to that profile.”
- Modes: Enforce (blocks unauthorized access) and Complain (allows access but logs it).
- aa-status: Displays the current AppArmor status.
- aa-enforce: Sets a profile to enforcing mode.
- aa-complain: Sets a profile to complain mode.
- Password Security: Strong passwords are crucial. Multi-factor authentication (MFA) enhances password security.
VI. Encryption
- GPG (GNU Privacy Guard): A versatile tool for securing files and communications using public and private key pairs.
- “It’s a very versatile tool for securing files and communications using public and private key pairs.”
- Commands include:
- gpg --gen-key: Generates a new key pair.
- gpg -e -r <recipient> <filename>: Encrypts a file for a specific recipient.
- gpg -d <filename.gpg>: Decrypts a file.
- gpg --import <public_key_file>: Imports a public key into the key ring.
- gpg --export -a <user_id> > <public_key_file>: Exports a public key to a file.
- gpg --list-keys: Lists the keys in the key ring.
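A full round trip with those commands might look like this sketch; the identities and file names are placeholders:

```bash
gpg --gen-key                                    # interactive key-pair generation
gpg --export -a alice@example.com > alice.pub    # publish your public key
gpg --import bob.pub                             # add the recipient's public key
gpg -e -r bob@example.com report.txt             # produces report.txt.gpg
gpg -d report.txt.gpg > report.txt               # recipient decrypts with their private key
```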
- SCP (Secure Copy Protocol): Securely copies files between systems. Uses SSH for encryption.
- “securely copies files between a local and a remote machine or between two remote machines”
- scp <source> <destination>: Copies files.
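For example (host and paths are placeholders):

```bash
scp report.txt admin@192.168.1.10:/home/admin/   # push a local file to the server
scp admin@192.168.1.10:/var/log/auth.log .       # pull a remote file here
scp -r ./site admin@192.168.1.10:/var/www/       # -r copies a directory recursively
```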
VII. System Monitoring and Troubleshooting
- Log Files: Crucial for system administration and troubleshooting. Located in the /var/log directory.
- syslog (Debian-based): General system log.
- messages (Red Hat-based): General system log.
- auth.log: Authentication events.
- secure (Red Hat-based): Security-related events.
- dmesg: Kernel-related messages.
- Use tail -f <logfile> to monitor logs in real-time.
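A few illustrative one-liners built on those files (paths follow the Debian layout; substitute messages and secure on Red Hat systems):

```bash
sudo tail -f /var/log/syslog                   # follow the general log live
sudo grep "Failed password" /var/log/auth.log  # spot repeated login failures
sudo dmesg --follow                            # stream kernel messages as they arrive
```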
- Disk Usage Analysis and Cleanup: df displays information about available and used disk space. The -h option provides human-readable output.
- du: Estimates and displays disk space used by files and directories. The -sh option provides a summary in human-readable format.
- Process Monitoring: top displays a dynamic real-time view of running processes. Allows sorting by CPU usage or memory usage.
- htop: An enhanced version of top with a more user-friendly interface.
- Memory Management: free displays the amount of free and used memory in the system.
- The -h flag “provides human-readable output, which essentially gives you the measurements in what it thinks are the best units.”
- watch -n 1 free -h: Monitors memory usage in real-time.
- System Statistics: vmstat reports virtual memory statistics, including memory usage, CPU performance, and I/O operations.
VIII. Virtualization and Cloud Computing
- Virtualization: Enables running multiple virtual machines on a single physical machine.
- “Virtual machines are basically simulations of physical computers”.
- Hypervisors: Software or firmware that creates and manages virtual machines.
- Type 1 (Bare-Metal): Runs directly on the hardware. Examples: VMware ESXi, Microsoft Hyper-V, Xen.
- Type 2 (Hosted): Runs on top of an existing operating system. Examples: VirtualBox, VMware Workstation.
- KVM (Kernel-based Virtual Machine): A type 1 hypervisor integrated into the Linux kernel.
- virsh: Command-line tool for managing KVM virtual machines.
- virsh start <VM name>: Starts a virtual machine.
- virsh list --all: Lists all virtual machines.
- virsh shutdown <VM name>: Shuts down a virtual machine.
- VirtualBox: A popular type 2 hypervisor.
- “Commonly used for testing and deployment environments.”
- vboxmanage: Command-line interface for managing VirtualBox VMs.
- vboxmanage startvm <VM name>: Starts a virtual machine.
- vboxmanage list vms: Lists all virtual machines.
- vboxmanage controlvm <VM name> poweroff: Powers off a virtual machine.
- Containers (Docker): Package applications and their dependencies into portable containers.
- docker: Containerization tool.
- docker run <image>: Runs a container.
- docker ps: Lists running containers.
- docker stop <container_id>: Stops a container.
- docker rm <container_id>: Removes a container.
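A typical container lifecycle using those commands, as a sketch (the nginx image and the name web are examples):

```bash
docker pull nginx                           # fetch the image from Docker Hub
docker run -d --name web -p 8080:80 nginx   # detached; host 8080 -> container 80
docker ps                                   # confirm the container is up
docker logs web                             # inspect its output
docker stop web && docker rm web            # shut down and clean up
```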
- Cloud Computing: Provides on-demand access to computing resources (servers, storage, databases, etc.) over the internet. Types: IaaS, PaaS, SaaS.
- “IaaS is one version of what they would provide for you, which is access to the infrastructure that you would otherwise maintain if you weren’t using the cloud.”
- “PaaS would be the service used to develop on their platforms.”
- “SaaS, which is software as a service: they’re going to provide this software, or email, or anything like that, on demand.”
Networking Fundamentals and Security: FAQ
FAQ on Networking Fundamentals and Security
1. What is an IP address, and why is it important?
An IP (Internet Protocol) address is a unique numerical identifier assigned to every device connected to a network. It enables devices to communicate with each other and is crucial for network management and communication. IPv4, the original version, uses a 32-bit numerical label format (e.g., 192.168.1.1), while IPv6 was developed to address the limitations of IPv4’s address space.
2. What is a subnet mask, and how does it relate to IP addressing?
A subnet mask is used to divide an IP address into network and host portions. For example, a subnet mask of 255.255.255.0 indicates that the first three octets of the IP address represent the network, while the last octet identifies the host within that network. Different subnet masks allow for varying numbers of hosts within a network. Two addresses are reserved in each subnet for the network address (usually the first address) and the broadcast address (usually the last address).
3. What is DNS, and how does it work to resolve domain names to IP addresses?
DNS (Domain Name System) is a hierarchical system that translates human-readable domain names (like google.com) into IP addresses that computers use to communicate. When you type a domain name into your browser, your computer sends a query to a DNS resolver, which may then contact root DNS servers, top-level domain (TLD) servers (like .com or .org), and authoritative DNS servers to find the corresponding IP address. This process, although complex, happens very quickly in the background.
4. What are ifconfig and ip, and how are they used to manage network interfaces?
ifconfig (interface configuration) is a command-line utility used to configure network interfaces on Unix-based operating systems. It allows you to view interface configurations, assign IP addresses, and control the state of interfaces. The ip command, part of the iproute2 package, is intended as a modern replacement for ifconfig, offering similar functionalities with a different command syntax. Examples of using the ip command are ip a or ip addr.
5. How can nmcli be used to manage network connections in Linux?
nmcli (NetworkManager Command Line Interface) provides a powerful command-line interface for managing network connections on Linux systems. It allows you to view and modify connections, assign static IP addresses, control connection states (up/down), and manage Wi-Fi networks. For instance, you can use nmcli device wifi list to see available Wi-Fi networks and nmcli connection up <connection_name> to activate a connection.
6. How do the ping and traceroute commands help in troubleshooting network connectivity issues?
- ping tests the reachability of a host by sending ICMP packets and measuring the round-trip time. It can help determine if a host is online and how reliable the connection is.
- traceroute tracks the route packets take to reach a destination, identifying the intermediate routers and delays along the path. This helps pinpoint where connectivity issues or delays occur.
7. What are firewalls, and how do tools like ufw and iptables contribute to network security?
Firewalls act as a barrier between a network and the outside world, controlling incoming and outgoing traffic based on configured rules. * ufw (Uncomplicated Firewall) is a user-friendly front-end for managing iptables rules, making it easier to set up basic firewall configurations. Examples include sudo ufw allow SSH and sudo ufw deny 80. * iptables is a more complex command-line tool that provides direct control over the Linux kernel’s packet filtering capabilities. It allows for highly customized firewall rules.
8. What is a chroot jail, and how does it enhance system security?
A chroot jail is an isolated environment created by changing the root directory for a process and its children. This limits the access of that process to a specific subset of the file system, enhancing security by preventing compromised programs from accessing or modifying files outside the jail. It’s useful for testing software in a controlled environment or repairing a system from a rescue environment.
Network Security: UFW, IP Tables, SELinux, and Best Practices
Network security is crucial, requiring firewalls to act as barriers between internal and external networks by monitoring and controlling traffic based on established rules. Important tools for network security include Uncomplicated Firewall (UFW) and IP tables.
Uncomplicated Firewall (UFW)
- It is a simple but powerful firewall with an easy syntax.
- To activate, use the command sudo ufw enable.
- Traffic can be allowed or denied by specifying the direction along with the port. For example, sudo ufw allow in 22/tcp allows incoming traffic on port 22 (SSH), while sudo ufw deny out 80 denies outgoing HTTP traffic on port 80.
- To check the status and active rules, use sudo ufw status.
- Traffic can be allowed from specific IP addresses using sudo ufw allow from <IP address> to any port 22.
IP Tables
- It is a more complex tool that allows detailed control over the network and enables creation of complex rules for packet filtering and network address translation.
- To view current rules, use sudo iptables -L. The default table is the filter table, displaying the INPUT, FORWARD, and OUTPUT chains.
- To add a rule, the command is iptables -A INPUT -p tcp --dport 22 -j ACCEPT to allow TCP traffic on destination port 22. To block traffic, use iptables -A INPUT -p tcp --dport 80 -j DROP.
- To save rules, use iptables-save > /etc/iptables/rules.v4. To restore rules, use iptables-restore < /etc/iptables/rules.v4.
SELinux (Security-Enhanced Linux) is a security module in the kernel that provides access control policies. SELinux defines rules for processes and users accessing resources, enforcing strict policies. Its modes of operation include enforcing (blocks violations), permissive (logs violations), and disabled. Common commands include sestatus to view the status and setenforce 1 to enable enforcing mode, or setenforce 0 for permissive mode.
AppArmor is another security mechanism that uses application-specific profiles for access control. Commands include aa-status to get the status of AppArmor, and aa-enforce to enforce a profile for a specific application.
Additional points on network security:
- Changing the default SSH port (port 22) can reduce the risk of automated brute-force attacks. This is done in the sshd configuration file (/etc/ssh/sshd_config).
- Disabling root login forces attackers to log in as standard users and escalate privileges. This is configured in the sshd configuration file by setting PermitRootLogin no.
- Limiting SSH users involves whitelisting specific users who can log in via SSH using the AllowUsers directive. The SSH service must be restarted to apply configuration changes (see the configuration sketch after this list).
- GPG (GNU Privacy Guard) is used for encrypting data. It uses asymmetric encryption with public and private key pairs.
- Secure file transfer can be achieved with SCP (secure file copy) or SFTP (secure file transfer protocol). SCP securely copies files between hosts.
- Analyzing authentication logs can reveal unauthorized access attempts. Key log files include auth.log and secure.
- rsync can be used to back up data, including syncing over SSH for secure transfers.
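The SSH hardening settings above live in /etc/ssh/sshd_config. A minimal sketch of the relevant directives; the port number and user names are illustrative. Restart the service afterwards with sudo systemctl restart ssh (sshd on Red Hat systems):

```bash
# /etc/ssh/sshd_config -- illustrative hardening excerpt

# Move off the default port 22
Port 2222
# Force attackers through an unprivileged account first
PermitRootLogin no
# Whitelist who may log in over SSH
AllowUsers alice bob
```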
Linux File Permissions and Access Control
File permissions are essential for system security, dictating who can access and modify files and directories. Understanding and managing these permissions ensures that sensitive data remains protected and that only authorized users can make changes.
Levels of File Permissions
- Categories: Permissions are assigned based on three categories: the owner (a specific user), the group, and others.
- Permissions: Each category has three types of permissions: read (r), write (w), and execute (x). Read permission allows users to view the file’s contents, write permission allows modification, and execute permission allows running a file or entering a directory.
- Numerical Values: Each permission has a numerical value: read is 4, write is 2, and execute is 1. These values are combined to represent the total permissions for each category. For example, read and execute (4+1) would be 5.
Commands to Change Permissions
- chmod (Change Mode): This command is used to change the permissions of a file or directory. It can be used in two ways:
- Symbolic Mode: Uses symbols like r, w, and x to add or remove permissions. For example, chmod u+rwx,g+rx,o+rx file.txt gives the owner read, write, and execute permissions, and the group and others read and execute permissions.
- Numerical Mode: Uses numerical values to set permissions. For example, chmod 755 file.txt gives the owner read, write, and execute permissions (7), and the group and others read and execute permissions (5 each).
- chown (Change Owner): This command changes the ownership of a file or directory. For example, chown user:group file.txt changes the owner to “user” and the group to “group”.
- chgrp (Change Group): This command changes the group ownership of a file or directory. For example, chgrp group file.txt changes the group owner to “group”.
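A short illustrative session with these commands (file names, user, and group are placeholders):

```bash
chmod 640 secrets.txt            # owner rw-, group r--, others ---
chmod u+x,go-w deploy.sh         # symbolic form: owner gains execute, others lose write
sudo chown alice:developers deploy.sh
ls -l deploy.sh secrets.txt      # verify the new modes and ownership
```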
Access Control Lists (ACLs)
- ACLs provide a more fine-grained control over file and directory permissions, allowing definition of permissions for multiple users and groups on a single file or directory.
- Entries: Each ACL entry specifies permissions for a user or group, consisting of the type (user or group), an identifier (username or group name), and the permissions.
- Types of ACLs:
- User ACL: Specifies permissions for a specific user.
- Group ACL: Specifies permissions for a specific group.
- Mask ACL: Defines the maximum effective permissions for users and groups other than the owner.
- Default ACL: Specifies the default permissions inherited by new files and directories created within a directory.
- Commands:
- setfacl (Set File ACL): Sets the ACL for a file or directory. For example, setfacl -m u:user:rwx file.txt adds read, write, and execute permissions for the user “user” on “file.txt”.
- getfacl (Get File ACL): Displays the ACL entries for a specified file, showing all users and groups with their defined permissions.
Removing ACL Entries
- -x option: Removes a specific user or group entry from the ACL. For example, setfacl -x u:user file.txt removes the ACL entry for the user “user”.
- -b option: Removes all ACL entries from a file or directory. For directories, the -d option is used in conjunction with -b to remove default ACL entries.
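An end-to-end sketch of these ACL commands (user, group, and file names are placeholders):

```bash
setfacl -m u:alice:rwx project.txt    # extra per-user entry
setfacl -m g:auditors:r project.txt   # read-only entry for one group
getfacl project.txt                   # show all entries, including the mask
setfacl -x u:alice project.txt        # remove just alice's entry
setfacl -b project.txt                # strip every ACL entry from the file
```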
By understanding and utilizing these commands, file permissions and access control lists (ACLs) can be effectively managed to maintain a secure and well-organized Linux system.
User Authentication Methods: Password, MFA, and Public Key
User authentication involves methods to verify the identity of a user trying to access a system or application. Common methods include password-based authentication, multi-factor authentication (MFA), and public key authentication.
Password-Based Authentication
- This is the default method where users enter a username and password to gain access.
- To improve security, password-based authentication can be enhanced with Multi-Factor Authentication (MFA).
Multi-Factor Authentication (MFA)
- MFA adds an extra layer of security by requiring users to provide multiple verification factors.
- This often includes sending a code to a user’s phone or email, or using biometric methods like fingerprint or face scans.
- MFA reduces the risk of unauthorized access, even if an attacker obtains the user’s password.
Public Key Authentication
- This method uses a key pair consisting of a private key and a public key.
- The private key is kept secret by the user, while the public key is placed on the server.
- Public key authentication is more secure than password-based authentication and is not practically susceptible to brute-force attacks.
- It allows for automated, passwordless logins, which are useful for scripts and applications.
- To generate a key pair, the ssh-keygen command is used.
- After running ssh-keygen, a file path to save the key is required, and a passphrase can be set for additional security.
Key Transfer and Authentication
- To enable passwordless access, the public key must be transferred to the authorized_keys file (~/.ssh/authorized_keys) on the server.
- The user must authenticate themselves with a password at some point before transferring the key.
- Without initial password authentication, the system will not trust the user to transfer the key.
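In practice the transfer is usually done with ssh-copy-id, which appends the public key to ~/.ssh/authorized_keys after one final password prompt. A sketch; the host name and key comment are placeholders:

```bash
ssh-keygen -t ed25519 -C "alice@laptop"   # generate the key pair (passphrase optional)
ssh-copy-id alice@server.example.com      # authenticates by password this one time
ssh alice@server.example.com              # from now on, the key is used instead
```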
System Monitoring with top, htop, free, and vmstat
System monitoring is crucial for maintaining system performance and troubleshooting issues. Key tools for this purpose include top, htop, free, and vmstat.
top
- Provides a dynamic, real-time view of running processes and their resource usage.
- Displays CPU usage, memory usage, and process IDs (PIDs).
- To sort by CPU usage, press P while top is running.
- To sort by memory usage, press M.
- To quit, press q.
htop
- It is a user-friendly alternative to top with enhanced features and an intuitive interface.
- Offers interactive process management and color-coded output.
- Can use function keys (F1-F12) or keyboard shortcuts for navigation.
- F3 key can be used to search for processes.
- F9 key can be used to kill a process.
- To quit, press q or F10.
free
- Displays information about the system’s memory usage, including physical memory and swap space.
- The command free -h formats the output in a human-readable format (KB, MB, GB).
- Shows the total, used, free, shared, buffer, and cached memory.
- To monitor memory usage in real-time, use watch -n 1 free -h.
- Detailed memory information can be obtained from the /proc/meminfo file.
vmstat
- Virtual memory statistics (vmstat) monitors system performance, providing statistics on CPU, memory, and I/O operations.
- The basic command is vmstat 1 5, where the first number is the update interval in seconds, and the second is the number of iterations.
- Key fields in the output include processes (runnable and blocked), memory (swap, free, buffer, cache), swap (in and out), I/O (blocks received and sent), system (interrupts and context switches), and CPU usage (user, system, idle, wait, stolen).
- The st field refers to the CPU steal time, which is the percentage of time a virtual CPU is waiting for resources because the hypervisor is allocating resources to another VM.
- Running vmstat 1 updates data every second until interrupted, while vmstat provides a single snapshot.
These tools provide different perspectives and can be used together to get a comprehensive understanding of system performance.
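As a compact reference, the memory-focused commands from this section in one place:

```bash
free -h                           # one human-readable memory snapshot
watch -n 1 free -h                # refresh that snapshot every second
vmstat 1 5                        # five samples at one-second intervals
grep MemAvailable /proc/meminfo   # the kernel's own estimate of usable memory
```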
Virtualization, Cloud Computing, and Containerization Technologies Overview
Virtualization is a technology that allows multiple virtual machines to run on a single physical machine, improving resource use and providing isolated environments. Key concepts include virtual machines and hypervisors.
Virtual Machines (VMs)
- VMs are software-based simulations of physical computers, each running its own operating system and applications independently of others on the same physical host.
- VMs offer isolation, so a failure in one VM does not affect others.
Hypervisors
- A hypervisor is software or firmware that creates, manages, and deploys virtual machines, allocating resources to each.
- There are two types of hypervisors:
- Type 1 (Bare Metal): Runs directly on the physical hardware without needing a host operating system, common in enterprise environments for high performance. Examples include VMware ESXi, Microsoft Hyper-V, and Xen.
- Type 2 (Hosted): Runs on top of an existing operating system. It uses the host’s resources and is suited for desktop virtualization and smaller environments. Examples include VirtualBox, VMware Workstation, and Parallels Desktop.
Advantages of Virtualization:
- Resource Efficiency and Scalability: Virtualization allows efficient use of physical resources and easy scaling up or down based on needs.
- Isolation and Security: Each VM operates independently, isolating it from other VMs on the network. A compromised VM does not affect the rest of the network.
- Flexibility and Agility: Enables easy testing, deployment, and development in isolated environments. New virtual machines can be quickly deployed.
- Disaster Recovery: Simplifies backups and recovery by storing entire virtualized environments that can be easily accessed and restored, especially with redundancies in place.
Kernel-Based Virtual Machine (KVM)
- KVM is a type 1 hypervisor integrated into the Linux kernel, transforming the OS into a virtualization host.
- It leverages Linux features for memory management, process scheduling, and I/O handling.
- KVM supports hardware-assisted virtualization via Intel VT or AMD-V technology.
- The virsh command-line tool manages KVM-based VMs. Common virsh commands include virsh start to start a VM, virsh list to list running VMs, and virsh shutdown to shut down a VM.
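A typical virsh session might look like the following sketch; ubuntu-vm is an assumed VM name:

```bash
virsh list --all            # every defined VM, running or shut off
virsh start ubuntu-vm       # boot the VM
virsh shutdown ubuntu-vm    # graceful (ACPI) shutdown
virsh destroy ubuntu-vm     # hard power-off, only as a last resort
```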
VirtualBox
- VirtualBox is a type 2 hypervisor developed by Oracle, compatible with various operating systems like Linux, Windows, and macOS.
- It offers an easy-to-use GUI and command-line interface for managing VMs.
- Key features include snapshot functionality for backups and guest additions to enhance performance.
- VBoxManage is the command-line interface for VirtualBox, with commands like VBoxManage startvm to start a VM, VBoxManage list vms to list VMs, and VBoxManage controlvm to control VMs.
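An equivalent VBoxManage session, as a sketch (dev-vm is an assumed VM name):

```bash
VBoxManage list vms                           # show registered VMs and their UUIDs
VBoxManage startvm "dev-vm" --type headless   # start without opening a GUI window
VBoxManage controlvm "dev-vm" poweroff        # hard power-off
```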
Cloud Computing Cloud computing provides on-demand access to computing resources over the Internet, including servers, storage, databases, and software. It allows users to provision and manage these resources easily.
Cloud Service Models:
- Infrastructure as a Service (IaaS): Provides virtualized hardware resources like virtual machines, storage, and networks. Users deploy and manage operating systems, applications, and development environments. Examples include AWS EC2, Microsoft Azure Virtual Machines, and Google Compute Engine.
- Platform as a Service (PaaS): Offers a development and deployment environment in the cloud, including tools and services to build, test, deploy, and manage applications without managing the underlying infrastructure. Examples include AWS Elastic Beanstalk, Google App Engine, and Microsoft Azure App Service.
- Software as a Service (SaaS): Delivers applications over the Internet on a subscription basis. Users access these applications via a web browser without needing to install or maintain anything. Examples include Microsoft Office 365, Google Workspace, and Salesforce.
Advantages of Cloud Computing:
- Scalability: Easily scale resources up or down based on demand.
- Cost Efficiency: Reduces upfront costs by eliminating the need for physical hardware.
- Flexibility and Accessibility: Access services from anywhere with an internet connection.
- Reliability and Availability: Redundant locations ensure high availability and reliability.
- Disaster Recovery: Scheduled backups prevent data loss.
- Automatic Updates: Services are automatically updated without user intervention.
Major Cloud Providers:
- Amazon Web Services (AWS): Offers a wide range of services, including computing power, storage, and networking.
- Microsoft Azure: Provides seamless integration with Microsoft products and a variety of cloud services.
- Google Cloud: Known for capabilities in data analytics and machine learning, with a robust set of cloud services.
Containerization Containerization involves packaging applications and their dependencies into portable containers that run consistently across different environments. Docker is a popular containerization tool.
Containers vs. Virtual Machines:
- Containers share the host operating system’s kernel, making them lightweight and fast to start.
- They include everything needed to run the application (code, runtime, etc.) but do not include an OS.
- Virtual machines, in contrast, run on a hypervisor and include a full operating system.
- Docker containers run on anything supporting Docker, ensuring consistency across development, testing, and production environments.
Benefits of Containers:
- Efficiency: Lightweight and use fewer resources.
- Scalability: Easily scale up or down based on demand.
- Portability: Can be transferred and run on various operating systems.
- Isolation: Multiple applications can run on the same host without interfering with each other.
Basic Docker Commands:
- docker run -it <image_name>: Runs a container image interactively.
- docker ps: Lists running containers.
- docker stop <container_id>: Stops a running container.
- docker pull <image_name>: Downloads a Docker image from Docker Hub.
Container Orchestration Container orchestration tools automate the deployment, scaling, and management of containerized applications.
- Kubernetes (K8s): An open-source platform for automating deployment, scaling, and management of containerized applications. Key features include automated deployment and scaling, load balancing, self-healing, and secure management of sensitive information.
- Example commands: kubectl create deployment nginx --image=nginx to create a deployment and kubectl scale deployment nginx --replicas=3 to scale the deployment.
- Docker Swarm: Docker’s native clustering and orchestration tool, simpler than Kubernetes. It offers simplified setup, scaling, load balancing, and secure communication between nodes.
- Example commands: docker swarm init to initialize the swarm, docker service create --name web --replicas 3 -p 80:80 nginx to create a service, and docker service ls to list services.
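Side by side, the two orchestrators' example commands form these small sketches (they assume a Docker Engine able to enter swarm mode and, for kubectl, an already-configured cluster):

```bash
# Docker Swarm: three nginx replicas behind the routing mesh
docker swarm init
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls

# Kubernetes equivalents
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kubectl get pods
```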
Virtual Machine Management (libvirt)
- Libvirt is a toolkit with an API for interacting with VMs across different virtualization platforms like KVM, Xen, and VMware.
- It provides a unified API for managing VMs across different hypervisors, simplifying VM management.
- Key features include virsh for management and virt-install for creating new VMs.
Common libvirt Commands:
- virt-install: Installs a new virtual machine.
- Example: virt-install --name myubuntuvm --memory 2048 --vcpus 2 --disk path=/var/lib/libvirt/images/myubuntuvm.qcow2,size=20 --os-variant ubuntu20.04.
- virsh destroy: Forcibly stops a specified VM.
- Example: virsh destroy myubuntuvm.
- virsh list --all: Lists all VMs managed by libvirt.
The Original Text
this training series is sponsored by hackaholic Anonymous to get the supporting materials for this series like the 900 page slideshow the 200 Page notes document and all of the pre-made shell scripts consider joining the agent tier of hackolo anonymous you’ll also get monthly python automations exclusive content and direct access to me via Discord join hack alic Anonymous today okay now it is time to talk about networking and the fundamentals of networking again this is not going to be a replacement for Network Plus or anything like that but it will be fairly comprehensive and we’re going to go through a lot of the fundamentals as well as some of the commands and tools that you will need to uh navigate the network and the network connections and the interfaces of a Linux environment so first and foremost let’s go into some basic networking Concepts IP addressing so IP addresses are unique identifiers assigned to devices that are connected to a network they allow you communicate with each other uh and are very important for Network management and communication uh so anything when you hear something like a network or anytime that you hear the word network uh think IP addresses and uh IP addresses are very much the main uh the I mean address the main identifiers that uh are assigned through various devices so your TV has an IP address your phone will have an IP address address your computer obviously will have an IP address um anything that’s connected to a network anything that’s connected to the internet will have an IP address ipv4 is the original IP address and it’s a 32bit numerical label uh what you see right here in the green right here that’s traditionally what it looks like it’s separated by three dots and it has three digits or it can have up to three digits on each portion of this thing and it can go from uh one actually Zer it can go from zero and uh go all the way to 254 I want to say um we’ll verify that in a couple of slides uh so what it does is it provides approximately 4.3 billion unique addresses but then what happened is that a lot of devices were developed so you know the average household has or the average person even has multiple uh devices that are connected to the internet and quickly uh way faster than I think people anticipated uh the ipv four addresses ran out um but what happens is that each a uh ISP each internet service provider assigns a series of private IP addresses to each individual person and uh for the most part you will not run across a a duplicate IP address although being that there’s only 4.3 billion unique variations uh it can run across so it can actually have duplicates and that’s one of the issues that resulted in them developing IPv6 so uh ipv4 is the most commonly used version of Ip but because of the fact that there were so many devices they developed a new uh IPv6 format and the IPv6 format looks very different from what we saw previously it is 128bit whereas ipv4 is 32bit so when you have a 128bit identifier it obviously looks a little bit different so in this particular case this is a s Le of an IPv6 address and instead of offering the billions this actually offers 340 unilan addresses and I don’t I I had not even heard of this word prior to looking at IPv6 IP addresses um clearly it is way more than what is available with ipv4 so it’s designed to replace ipv4 eventually but your current computer my current computer uh they have both of these so they’ll actually have the ipv4 as well as the IPv6 But ultimately at some point not exactly sure 
when IPv6 will replace ipv4 IP addresses to understand networking and IP addresses you also need to understand subnetting so what a subnet is and subnetting is a method that’s used to divide a larger Network into smaller chunks so that they’re easier to manage um and those small chunks are called subnet and it improves the network organization uh the efficiency of the network and the security of the network it also helps to reduce the congestion of the network meaning that there won’t be uh too many things happening at the same time it won’t be blocked off or clogged so to speak um a subnet mask is what you can see as an example in this particular case in the green right here so subnet mask determine the network and the host portions of the IP address so for ipv4 a common one is what you see here now the network portion are these first series of 255s and then the host portion would be this very last thing that is represented by a zero here so in this particular case we have three octets that represent the network portion so that’s one octet that’s another octet that’s another octet Al together you have four octets and four time octet so octet repres repr is8 bits right so when you have 4 * 8 you have a total of 32 so this is a total of 32 bits now when you have the first three octets that are represented by the network that means that these are the network itself so in uh in this particular Network these three portions are going to look exactly the same and each device is going to have a different number at the very end of it so the first three portions will be exactly the same because that’ll be that Network work that they’re connected to and then that last bit is what’s going to change that last octet is what’s going to change to assign an unique identifier for each one of those devices so for example in this particular case we have you know 1 192 16811 was a subnet subnet mask of 255 2555 2555 so the first three o octets are exactly the same which means that this portion 1921 1681 this first three the numbers right here represent the network and then the last piece would be the actual host and then if there’s three hosts that presumably it would be one 2 and three so 1 1921 16811 1 1921 16812 1 1921 16813 so on and so forth so this is the uh subnet that is represented with this mask right here we have this subnet mask now if we wanted to expand this this is what it actually looks like right here so the binary representation is we have eight ones here8 ones here 8 ones here and then we have the zeros at the end representing the portion that can change and this is the subnet mask right here uh the network bits in this case are 24 so you have 8 * 3 which would be 24 the host bits would be eight this last eight uh octet or this eight bits right here the calculation it’s a little bit complicated but not really so you have the number of the hosts per subnet which would be two to the power of the number of host bits so the number of host bits in this case would be eight so 2 to the power of 8 is what we see here minus 2 and I’ll explain what this means right here but 2 to the^ of 8 minus 2 would be 256 so 2 to ^ of 8 would be 256 minus 2 which ends up being 254 so this particular subnet can have 254 individual IP addresses okay so 254 hosts can reside on this particular subnet mask that’s how it breaks down now we look at the next sample of this where you have a subnet mask that actually has the first two octets reserved for the network and then you have the next two octets reserved for the 
host so you have 16 bits this portion 2 * 8 is 16 bits you have these 16 bits reserved for the network and then you have these 16 bits reserved for the host and when we actually do the calculation here it would be 2 to the^ of 16 which is 65,536 65,536 potential host except you have to subtract that two so it ends up being 65534 so very different from this 254 that is on this one octet right here if you just free up two of these octets for this particular particular subnet mask you now have 65,535 potential host IP addresses that you can assign to people inside of this particular subnet or this particular Network right so this is what it looks like now why do we subtract two this is a very important question so there are two addresses that we need to reserve for any given Network and the first one is the network address so the first address in the subnet which is reserved for the network itself which ends up being represented by just it could be the zero right for example and then the next one is the broadcast address which is the last address that’s represented in the network which would be technically the 256 for example or 255 excuse me so the zero and then the 255 for example would be the ones that are reserved so you actually can go up to 254 right so it go from 1 to 254 for as an example but for the most part and you can assign this when you’re actually assigning your gateways and you’re uh developing your network subnet and The Mask itself you will assign your network address and then you will assign your broadcast address and then that will take up two of those variations and then the remaining uh 253 of the variations will allow for the rest of the devices the rest of the hosts that can reside on that subnet so as a summary we can have have this subnet that allows for 254 potential hosts to reside on it and then you can have this subnet that can allow for 65,535 potential hosts so uh this would obviously be a company this would be probably a home or a company that has less than 254 devices and being that each employee might get let’s say I don’t know their cell phone would be one their work cell phone would be another one their computer would be one maybe an IP address or iPad or something like that a tablet that would be one their TVs that would be uh if they have any Smart TVs sprinkled around the office so on and so forth so for the most part you’re going to have maybe around 50 employees that can reside in a network that like that looks like this but anything more than that they would have a larger subnet because there’s just multiple devices per employee per person and so you just need more than 254 potential addresses and this is how subnet masks and the calculations of those hosts work when a device connects to a network it is assigned an IP address this address can either be ipv4 IPv6 depending on the Network’s configuration subnetting helps organize the network by breaking it into smaller segments making it easier to manage and enhance security by isolating different parts of the network from each other so this is the whole purpose behind it number one to make it easier to manage and enhance the security by isolating different parts of the network from each other so uh when you have multiple subnets it makes it easier for you to find out which host was uh for example uh exploited and which host was hacked into and because they reside in their own little segment they won’t affect the rest of your network and it can be contained uh within that specific segment it can be 
The Domain Name System is the next level up when it comes to addressing; it is basically the phone book of the internet. Domain names are typically tied to websites, but every website also has an IP address. Most people are not going to remember the IP address for a website: the IP address for Google is whatever series of numbers it is, and you won't remember that, but you will remember Google. You can remember example.com, but example.com actually points to an IP address. Web servers, web applications, everything connected to the internet has an IP address, but most people aren't that good with numbers; they are fairly good with names. Everybody can remember facebook.com; they won't remember Facebook's IP address, but they'll remember the human-friendly version of it, the domain name. The domain name then gets resolved through the Domain Name System protocol, and this is how domain names get translated into IP addresses.

Now, how DNS works: when you type something into your browser, the computer needs to find the IP address connected to that domain name, because names are easier for us to remember but the computer works with IP addresses. So it needs to resolve that domain name to an IP address, and it does that by sending out a query. It looks like this: as the user you enter a domain name, example.com, into the browser and press Enter, and the computer sends the query to a DNS resolver, typically provided by the ISP, the internet service provider. If the answer isn't stored in your local cache, and the ISP's resolver doesn't have it in its cache either, the resolver performs an actual lookup and finds it for you, then stores it: it queries multiple DNS servers to find the correct IP address, and once it finds it, it resolves it and connects that IP address to the domain name, so the next time you enter that name it loads quickly. If you clear your cache, the whole process runs again on the next lookup.

So you enter something, the computer sends the query to the DNS resolver, typically at your internet service provider, AT&T for example. AT&T's resolver goes through its database of IP addresses connected to domain names; if it has the answer, it sends you straight to the website you're trying to reach. If it doesn't, it does a recursive lookup to find what that domain name is connected to, and then it loads that IP address into your browser. All of this happens in a matter of a second or two depending on how fast your internet service is; everything happens very quickly, and you don't see any of it in the foreground. All you see is that you typed in
the domain name, you pressed Enter, and a second or two later a website loaded; but this is what was happening in the background. The DNS server hierarchy is navigated by the resolver itself. The resolver contacts one of the root DNS servers, which sit at the top of the DNS hierarchy, and the root servers direct your DNS query to the appropriate top-level domain (TLD) server. If it's a .com website, the query goes to the .com domain server, and from that server's list of domain names the IP address gets found; if it's a .org, it goes to the .org server and pulls the answer from that list, and so on. There are individual servers for each TLD, partly because there are probably billions of domain names at this point. There is a bunch of different TLDs now: .com, .org, .net, plus .co, .us, .coffee, and many more. Because there are so many variations, each one has its own servers, and when you send a query it is routed to the appropriate server so it can find the domain name; this is mainly to make the process faster. If .com, .org, .net, and every other top-level domain were all housed on one server at AT&T, it would take much longer for a domain name to be looked up and its IP address found.

Once the TLD DNS server is contacted, if you're looking for example.com, the .com TLD server directs the query to the specific authoritative DNS server for example.com. The authoritative DNS server, the final step of this whole thing, is the one that actually hosts the domain name itself and holds the DNS records that map it to the IP address; that's where the data gets pulled. So it goes from the root, to the TLD server, which points to the authoritative DNS server actually hosting that domain name, which could be at AWS, at GoDaddy, or a variety of other hosting providers. Once the record is found and the data has been pulled, it returns to you: it gets sent back to your computer and populates in your browser, so you can look at whatever you want, watch the video, watch Netflix, all these things. All of this happens so you can get access to the data you want.

To give you a visual representation of everything we just talked about, these things are a little easier to follow as a flow. On your computer you look up selfrepair.apple.com. First the computer checks what it already has, the cache and hosts file you can see on the left, or the cache at your ISP. If the answer doesn't exist there, if you haven't cached it, then the local DNS server
sends that query out, asking: where is this place, what is the IP address of this location? It goes to the root server, and the root server says, I have no idea, try the authoritative server for the .com domain names, which lives at this particular address. That response goes back to the local DNS server, which says, all right, fine, you don't know, so I'll go to that one, the .com top-level-domain authoritative DNS server. It goes to those servers and says, hey, where is this thing? And they say, I have no idea, why don't you try the actual Apple server, so you can find out what the IP address for selfrepair.apple.com is. That is the TLD authoritative server's response, and it comes back, and the local DNS server says, okay, fine, let's go to Apple.

Every single time, one server says, I don't know, why don't you look at the server hosted at this IP address, and the local DNS server says, oh, okay, and goes to that IP address. So the top-level-domain authoritative server sends back, I have no idea, why don't you go to this IP address, which belongs to Apple. The local DNS server goes there and says, hey Apple, where is this particular thing? And Apple says, I don't know, why don't you try the authoritative server for repair.apple.com; that's the extra piece that comes in here, and it is housed over here. Finally the query is sent over there and asks, okay, fine, hey, where is this thing? And that server says, oh, there is no selfrepair record here at all, and sends that back, and this is where you'd probably get an error on the screen and the page doesn't load. Now, if the record does exist, that server instead says, oh, this is the IP address for it, sends it to the DNS server, the DNS server sends it to your computer, and the page loads for you. That's the process right here. I just think it's so funny that the query gets sent on this wild goose chase, back and forth, only to hear, oh, there is nothing here, and it's like, oh Jesus. And presumably, if the location we wanted was not selfrepair.apple.com but, say, repair.apple.com, it would have stopped a step earlier: yeah, okay, here's the IP address for repair.
apple.com. If we were just trying to go to apple.com, we would send out, hey, where's apple.com, hear, I don't know, why don't you try a .com server, arrive at the .com TLD authoritative server, ask, hey, where's apple.com, and get back, here's the IP address for apple.com. So it depends how deep the inquiry goes and where we're trying to land, obviously, but the flow is always: your local DNS server, then the root server, which sends you to the .com TLD server, which sends you to the potential authoritative server for that actual domain name, for that actual website. And that is the process of literally everything we just talked about in this whole DNS part.

To summarize the pieces: DNS servers are the specialized servers responsible for handling the process of translating a name to an IP address, and there are different types. The DNS resolver we just talked about, the recursive resolver, receives the original query from your computer (that was the thing in the bottom-left corner); it handles the process of contacting everybody else to find the answer for you, and if it finds it, it finally sends you to the page you were looking for. From the resolver, the query goes to the root DNS server (the thing in the top right), the first stop, which says, go to the appropriate TLD server, which might be .com or .org or whatever. The query then goes to that .com or .org or .net top-level-domain server, which redirects you: if it has the address it redirects you, and otherwise it sends you to the authoritative DNS server that will help you find the answer. Typically this is where the chain ends: the TLD DNS server says, okay, this is the .com location of the authoritative DNS server you're looking for, and you go to that particular place. The authoritative DNS servers are the ones that actually store the records for domain names and provide the final IP address in response to what you're looking for. In our particular case we were trying to go to store.apple.com, and then deeper still to the repair subdomain.
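If you want to watch this delegation happen on a real system, the dig utility (from the dnsutils or bind-utils package, assuming it is installed) can walk the hierarchy for you; a minimal sketch:

    # Follow the chain from the root servers down to the authoritative answer
    dig +trace example.com

    # Ask only your configured resolver (the normal, cached path)
    dig example.com

    # Or use nslookup for a quick one-off query
    nslookup example.com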
That went a little bit further down the chain, but typically the TLD server is where it stops: oh, it's a .org website, okay, you get sent to that particular authoritative server, and that .org server says, oh, okay, this is the address for the website you're looking for. It makes everything a lot easier: instead of having to remember IP addresses, you just remember a name. That's it. That's one of the biggest benefits of DNS.

Once you get past user-friendliness, the next benefit is scalability. Think of all the devices on the internet, all the domain names on the internet, and all the IP addresses connected to those domain names: DNS supports that massive and constantly growing number, the devices people keep buying (they'll add another TV to the house) and the domain names people keep registering. Then consider the amount of queries being made: imagine how many billions of queries are happening right now as you watch this, how many millions and billions of people are trying to access millions and billions of locations across the internet. With that massive amount of traffic going back and forth, these systems can't be allowed to crash, so there needs to be redundancy, meaning a backup server. If there's some kind of earthquake at the first server's location, or that particular building has a power outage, a backup server needs to kick in immediately so that microsoft.com doesn't go down and you can still access the website. Something like that actually happened relatively recently, where Microsoft services, and all the computers and devices that rely on Microsoft, couldn't work; it was a couple of months ago, pretty recent, and in that one day of outage Microsoft reportedly lost, I think, over $150 million, some crazy number like that. This is where redundancy comes in, and this is where reliability comes in.

The DNS system spreads the load across a variety of TLD servers: it's not just one TLD server that houses all the domains, it's literally dozens, probably hundreds, across the globe in different locations, so just in case one crashes you can go to another and still get a reliable connection. Extrapolate that to the .orgs and the .nets and all the different countries and cities, and so on, and you see why this redundancy, this hierarchical structure, is a big deal: it provides the reliability people actually really need, because, I mean, what would you do without the internet?

Once you understand DNS, then you go into DHCP, the Dynamic Host Configuration Protocol. This is a network management protocol used to automate the process of configuring devices on an IP network, and it is essentially what assigns
the IP addresses to any new device that connects to your router, to your internet. When somebody comes and connects to your Wi-Fi, DHCP is the protocol that assigns the IP address, and any other configuration parameters, to the device that just connected.

The way it works: a device, a computer, whatever, connects to the network. When it does, it doesn't have an IP address, so it sends out a Discover broadcast message to a DHCP server. The DHCP server receives that Discover message and responds with an Offer message: hey, here's the available IP address we have on our network, along with the network configuration information you need, like the subnet mask, the default gateway, and any DNS server addresses, and so on. Most people really don't care about this, they just want Wi-Fi, and of course the devices don't display all of this to the person connecting their cell phone; you just see, oh, I have a Wi-Fi connection. But what happens is your phone sends out the request, the DHCP server responds with an offer and says, this is the IP address I've got for you, and these are all the rules and configuration details for our specific network. The device receives that offer and responds with a Request message that says, hey, I accept the offer you've made, thank you for this IP address and for the connection to this network. Finally the server acknowledges that request: all right, cool, this is your actual IP address from now on, and every time you connect to me, this is the address assigned to your particular device. Now the device can actually use the internet, because it has its official address and can communicate with the vast, wide network of the internet, the World Wide Web.

There are a lot of benefits to this, obviously. First, it simplifies network setup: imagine if you had to be the one to assign an IP address to every single device that connected to your network. Believe it or not, at a certain point in the history of the internet this was actually the case, and people had to manually assign IP addresses to the computers and devices on the network. Especially with a massive network, configuring each device by hand would not be possible, which is what Microsoft or Amazon deal with: they have literally tens of thousands of employees, so imagine if somebody had to sit there assigning IP addresses; that would be a full-time, 24/7 type of job. Second, it avoids address conflicts, the part that is very much prone to human error: somebody might forget, oh shoot, I already assigned this address to this other device, and now I can't reuse it, and now I have to go hunting for a new one, and so on. The DHCP system just keeps track of all these things and avoids conflicts by assigning a unique address to each device being connected to the network. And then there is the management of the IP addresses themselves: people want to use their IP addresses efficiently, and if there's any
kind of security issue or a disconnection from the network, the IP address can be reassigned to another device. It also ties into firewalls, which we're going to get into a little later: managing IP addresses is how you deny or allow access to the network by IP address, and so on. And it matters a lot for networks with many temporary, or what they call transient, devices. Take a coffee shop Wi-Fi hotspot: that specific Wi-Fi has probably seen tens of thousands of devices that just roll through and are there for one day because they need Wi-Fi, and then they leave. The person came into town for a trip, left, and is never going to visit that Wi-Fi spot again, but the IP address that was temporarily assigned to them now needs to be freed up so it can be assigned to somebody else. This is how transient devices get managed through the DHCP protocol, and it is very important to understand, especially when you go into a large enterprise environment; these are the types of things that are very, very useful.

So, when you're thinking about the management of IP addresses: what assigns the IP address inside the network? That would be DHCP. What assigns a name to an IP address? That would be DNS. What types of IP addresses are there? IPv4 and IPv6. These are the key things to remember when it comes down to networking fundamentals.

Going further, the DHCP lease is basically the temporary amount of time that an IP address stays assigned to a given device. In a public Wi-Fi environment, the DHCP lease time is obviously much shorter than on your personal home internet. When the lease is about to expire, the device has to renew it by sending a new request to the DHCP server, and the server sends the offer. If the device stays connected to the network, the server typically just renews the lease, extending the time the address is assigned, so you don't have to get a new IP address every single time your laptop connects to your home Wi-Fi. If it's connected perpetually, like a desktop computer you never take out of your house (the laptop might go in your backpack and leave and come back, but the desktop never disconnects from that Wi-Fi), then it just keeps renewing the lease and keeps the same IP address. But when your friend comes over and you don't see them for the next three or six months, their lease lapses, and they'll need to submit a new request to the DHCP server to get assigned a new IP address when they come back to your house.

As a summary: DHCP streamlines network management by assigning IP addresses to devices and making sure addresses, conflicts, and everything else are handled without you even having to worry about it, and it simplifies the process of connecting devices to a network, keeping the overall network running smoothly.
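If you want to watch the DHCP exchange from the command line, the ISC dhclient tool (present on many, though not all, Linux systems; NetworkManager usually handles this for you automatically) can release and re-request a lease. A minimal sketch, assuming the interface is named eth0:

    # Release the current lease on eth0 (the interface name is an example)
    sudo dhclient -r eth0

    # Request a new lease; -v prints the Discover/Offer/Request/Acknowledge exchange
    sudo dhclient -v eth0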
All of these things happen behind the scenes; you don't even think about them for the most part, and if you don't know anything about networking you've probably never even heard of this and you're like, oh wow, I didn't know all this was happening. But yes, there is something that assigns an IP address to anybody who gets access to your Wi-Fi, and it is called DHCP.

All right, now we need to address what a network interface actually is, starting with the legacy interface configuration tool. Legacy means the old-school version, but this is actually what's running on my current MacBook, so it's not legacy in the sense that it's no longer being used; there are still a lot of computers that use ifconfig. ifconfig, short for interface configuration, is a command-line utility used to configure network interfaces on Unix-based operating systems, for example Linux or macOS. It creates and configures the network interfaces and their IP addresses. It has supposedly been deprecated, but it is still very much in use: when I tested this on my MacBook and ran ip, it didn't work, it said ip doesn't exist; when I ran ifconfig, ifconfig worked. So it's not as deprecated as they make it sound. On Windows, ifconfig's counterpart is ipconfig, and it serves the same exact purpose.

The simplest version of the command is to just type ifconfig and press Enter, and it lists all the network interfaces on your system along with their current configurations, meaning the IP addresses assigned to them, any network masks or broadcast addresses, and everything else appropriate to each configuration. In the example output, eth0 is the interface in question; the flags attached to it show that it's UP and RUNNING, with BROADCAST, MULTICAST, and so on; the inet field is its IP address; the netmask is the subnet mask we were talking about; and the broadcast field is the broadcast IP address assigned to that particular network. That's just a sample of what you get when you run ifconfig.

If you want to configure an IP address, you run sudo ifconfig with that specific interface, the IP address you want for it, the netmask you want, and then up, which brings the interface up, meaning it actually activates it (if you wanted to take it down, you would type down instead). This is a configuration command: you're saying, this specific IP address is what I want to assign to this interface, I want it on this subnet mask, and I want it up and running. You don't necessarily have to do this, because DHCP will typically do it for you, but in case you need to do it manually, this is what it looks like to configure an IP address for any given interface, which in this case is eth0.
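Collecting those commands into one place, a minimal sketch (eth0 and the address values are example placeholders, not taken from the slides):

    # List all interfaces and their current configuration
    ifconfig

    # Manually assign an address and mask, then activate the interface
    sudo ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up

    # Deactivate the interface
    sudo ifconfig eth0 down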
And here is the detailed breakdown of everything I just said. sudo runs the command as the superuser, with administrator privileges. eth0 is the network interface you want to configure; it could be anything interface-related, eth1 for instance, but typically not lo, which is the loopback (localhost) interface: you would normally not modify that, and it ends up having the same exact IP address on every single machine, 127.0.0.1. So eth0 is the interface we're configuring in this case; then comes the IP address we're assigning to it, then the network mask, the subnet mask, we're assigning, and then up to activate the interface and make sure it's running. And just as you can bring an interface up, you can shut it down, deactivate it essentially: without configuring an IP address or anything like that, if you just want to make sure a particular interface is active you run ifconfig eth0 up, and to take it down or deactivate it, ifconfig eth0 down. These control the state of the interface itself: bring it up and you activate it, take it down and you deactivate it, exactly as in the examples shown.

In conclusion and summary: ifconfig is supposedly deprecated, but it is not really, because it is active and running on my MacBook right now. It remains a widely recognized tool for managing network interfaces on Unix-based systems, and it allows you to view interface configurations, assign IP addresses, and control the state of interfaces. That is what ifconfig does.

ip is supposed to be the modern replacement for ifconfig. It's part of the iproute2 package, and it basically does everything we just talked about with ifconfig, except the command in this case is ip: you just type ip a, or ip addr, and press Enter, and it displays all the network interfaces, very similar to what ifconfig would do, including the IP addresses, MAC addresses, and any other relevant detail. The example output looks very similar to what we saw previously. The loopback interface is at the very top, and as I mentioned, its IP address on every single device I have scanned or pen-tested is the same: 127.0.0.1 is universal across every single machine I've ever messed with. Then there's eth0, the actual interface being assigned an IP address on your Wi-Fi, on your local network, and in this particular example, that is the actual IP address for this particular machine, along with the broadcast address. I don't think the netmask is shown separately in this example, but you do see the MAC address as well; this
is the MAC address that is connected physically to the Ethernet. The MAC address can be spoofed, and that's a whole other conversation, but MAC addresses are not permanent, and neither are IP addresses; you can also spoof an IP address. I don't know why I even said that; anytime I think MAC address, I think MAC address spoofing. Anyway, this output is very, very similar to ifconfig's, and honestly, in a lot of cases when I run ifconfig I also see the MAC addresses of my devices in its output, so this is not just limited to the ip command itself.

Assigning an IP address is very similar to what we did with ifconfig: sudo ip addr, then add, then the particular IP address with a forward slash 24, attached to the eth0 interface. What we're doing is assigning that IP address with a subnet mask of 24 bits, meaning the first three octets are assigned to the network and the last octet to the host. And instead of showing 255.255.255.0, the /24 piece is what tells us what the subnet mask is: it says the network is assigned 24 bits, which is three octets. Breaking it down: you have the sudo command that runs it as administrator; ip addr add indicates adding an IP address; then the IP address we're adding, with a subnet mask of three octets, 24 bits; and then the interface we're adding it to, eth0, which, for the most part in my experience, is the interface name assigned to the very first IP address your computer gets.

The concept of bringing an interface up or down, basically activating or deactivating it, also applies here; the command is just a little different: ip link set eth0 up, or ip link set eth0 down. up obviously activates, down deactivates, but you're addressing the interface a little differently: you're saying, link, I want you to set this particular interface up or down, activate it or deactivate it, as in the examples.

Next, displaying routing information. This is mostly done so you know what paths network traffic is taking to reach various destinations; it includes information about the default routes it will take, the specific routes, and the interfaces being used. It's for troubleshooting connectivity issues, seeing the individual connections made along the path the traffic takes to reach its destination and whether any of them are glitchy. This information can also be used for security analysis and pen testing, but it is mostly used for troubleshooting network connections.
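For comparison, the ip equivalents of the ifconfig workflow, again with example interface and address values:

    # Show all interfaces, addresses, and MACs
    ip addr

    # Assign 192.168.1.10 with a /24 mask (255.255.255.0) to eth0
    sudo ip addr add 192.168.1.10/24 dev eth0

    # Activate or deactivate the interface
    sudo ip link set eth0 up
    sudo ip link set eth0 down

    # Display the routing table
    ip route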
It displays the routing table, which shows the paths network traffic will take, and the sample output is worth breaking down in a bit more detail so you understand what's going on. I don't think this is actually inside the scope of the Linux+ studies and examination, but it gives you a good idea of what you're looking at when you see routing information, so your networking knowledge is a little bit stronger. So let's look at what each of these elements represents.

The first portion is: default via 192.168.1.1 dev eth0. The default keyword indicates the default gateway, which is used when no specific route for a destination is found in the routing table. via 192.168.1.1 specifies the next-hop address: traffic goes from the default gateway onward through this address, which is the IP address of the default gateway router the traffic will be sent through. dev eth0 indicates the network interface the traffic will be routed through. So the line as a whole says: this gateway, via this IP address, on this particular interface, is where traffic starts traveling; it tells the system to send any traffic that doesn't match a specific route in the routing table to the default gateway at this IP address, using this interface.

The next line breaks down like this. The first portion represents a specific route for the IP address range from 192.168.1.0 to 192.168.1.255, where the /24 is the subnet mask, which you should already know by now. dev eth0 associates the route with the eth0 network interface. proto kernel signifies that the route was added by the kernel (and we should know what the kernel is by now), usually as a result of configuring the network interface. scope link indicates that the route is valid only for directly connected hosts on the same link, the local network: this piece of the routing table only applies to devices connected to the same Wi-Fi, the same internet, that this particular device is connected to. Finally, src gives the source IP address, the address to be used when sending packets to this particular subnet.
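Reconstructed, the two routing-table lines just walked through would look roughly like this; the gateway 192.168.1.1 matches the sample, while the src address 192.168.1.100 is an assumed example value:

    default via 192.168.1.1 dev eth0
    192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.100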
Looking at it from the very top: the gateway line says, this is going to be my IP address for communicating; then the subnet line represents a specific route from one end of the range to the other, says this is my interface, my actual device that's going to connect, notes that this was done by the kernel, and limits itself to the local network, the Wi-Fi, that we're located on. And then there's the IP address of the actual device, the src, not to be confused with the gateway address: that first IP was for the default gateway, while this one is for the particular device communicating with the internet. Combining both lines: traffic for any unspecified destination (the default route) will be sent through the gateway at that address via that interface, and traffic specifically destined for this particular subnet will be routed through this interface using the IP address assigned to it. Again, this might be a little confusing, a little overwhelming; we're not talking about networking, not Network+, or at least not at this depth. I just wanted you to see what this output represents; it's just a sample of what happens when you run the ip route command. So in summary, even though ifconfig is supposedly deprecated, it remains a widely recognized tool for managing network interfaces on Unix-based systems; it allows you to view the interfaces and so on, very similar to what is done with ip.

Next on our list of tools is the Network Manager command-line interface, nmcli. It's a command-line tool, as the name implies, that helps you manage network connections on Linux systems. It interacts with NetworkManager, a system service for managing interfaces and the connections to them. It's commonly used in desktop environments and provides a convenient way to configure and control network settings without a graphical interface, because, as the name implies, this is a command-line interface.

Here are some of the commands. If you want to look at the active connections on your network, you just type nmcli connection show, and it shows all the active network connections on your system: the UUIDs, the connection name, the type of connection, the device associated with each connection, and so on. In the example output you have the name (the machine's wired connection and the Wi-Fi router), the UUID for both of them, the type (ethernet for one, wifi for the other), and the device name or ID, eth0 and wlan0 for this particular set of network connections.

If you want to configure a static IP address using nmcli, you go through this. Static IP addresses are addresses that don't change, so this is going to be a permanent IP address: sudo nmcli connection modify on this particular connection, with an ipv4 address, the IP address to be assigned, and a /24 subnet mask, i.e. 255.255.255.0, three octets. The command assigns a static IP with a /24 subnet mask to the network connection eth0. And here's the breakdown: we have sudo, running it as administrator, and nmcli connection modify, indicating that we're
modifying a network connection. It's for this particular connection, eth0, the one we're going to configure; then comes the IP address and subnet mask we're assigning to it, using an IPv4 address; and we're putting it on a /24 subnet, where 24 divided by 8 is three octets, which means 255.255.255.0. I'm just repeating a bunch of things you should know by now.

Enabling or disabling connections is very similar to everything else we've done, using the up and down keywords: sudo nmcli connection up for a particular connection, or connection down, to activate or deactivate the connection for that interface, exactly as shown: nmcli connection up eth0, nmcli connection down eth0, and so on.

If you want to view the status of the devices, you look at nmcli device status, and it displays the status of all your network devices: whether they're connected, disconnected, or unavailable. In the example output you can see the devices: the ethernet is connected, the Wi-Fi is connected, and the loopback (the localhost) is not currently being managed.

If you want to list all the available Wi-Fi networks around your particular device, and this will probably be a really big list depending on how big your building is or who's around you, you just run nmcli device wifi list, and it lists all the available Wi-Fi networks with their SSIDs, signal strength, and security type. In that output, the SSID is essentially the name of the Wi-Fi: MyWiFi, AnotherWiFi, then there'll be the AT&T one, the Spectrum one, and so on. You also get the mode, the channel it's on, the rate (its speed), the signal (a signal of 70 means a stronger signal and a stronger connection), bars (another way of looking at signal), and the security, WPA or WPA2 in this particular case; these are the Wi-Fi encryption schemes, and WPA2 is one of the more common modern ones, more secure than the legacy, outdated versions of Wi-Fi security.

If you want to connect to any of those networks, you run a sudo command with nmcli: device wifi connect, then the SSID, which is the name, and then the password, because more often than not you actually need the password to connect to a Wi-Fi network; it then connects to that Wi-Fi using the SSID and password you've given it. In actual usage you replace SSID with the real name of the Wi-Fi, MyWiFi in this example, and replace password with the actual password for that Wi-Fi network.
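Gathering the nmcli commands from this walkthrough into one sketch (the connection name eth0, the SSID, and the password are example values; ipv4.method manual is the standard NetworkManager property for making a static assignment stick, added here as an assumption beyond the slides):

    # List active connections with their UUIDs, types, and devices
    nmcli connection show

    # Assign a static IPv4 address with a /24 mask to the eth0 connection
    sudo nmcli connection modify eth0 ipv4.addresses 192.168.1.50/24 ipv4.method manual

    # Activate or deactivate the connection
    sudo nmcli connection up eth0
    sudo nmcli connection down eth0

    # Show device status and nearby Wi-Fi networks
    nmcli device status
    nmcli device wifi list

    # Join a Wi-Fi network by SSID and password
    sudo nmcli device wifi connect "MyWiFi" password "password"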
In summary, nmcli provides a powerful command-line interface for managing network connections on Linux systems. It allows you to view and modify connections, assign static IP addresses, control connection states, and of course manage Wi-Fi networks. That is the power of nmcli.

Now on to troubleshooting. ping is a very useful utility used to test the reachability of a host, a computer or server: you ping the IP address, or you ping the website, and you can also measure the round-trip time of the messages sent to it, to establish how strong the connection is or how quickly that particular host responds to you. You literally just type ping and give it the IP address, and what it does is send a series of Internet Control Message Protocol packets, ICMP packets. Any time you see ICMP, think ping. There's also something called an ICMP flood, which is actually one way to carry out a DDoS or DoS attack, a denial-of-service attack: you flood a host with so many ping requests that it may go down and drop out of service. So this is something you'll see regularly; ICMP will be associated with ping, so keep that in mind: if you see ICMP, somebody is trying to ping.

ping sends the request to the specified destination, either the hostname or the IP address of the host, and the response it receives tells you whether that thing is up. Typically, if it's not up, it responds with some kind of "host is down" message or similar; if it is up, you get confirmation that the connection was established and the amount of time it took. The round-trip time, the RTT, is the measurement of the time it takes for the echo packet to go to the host and for the reply to come back, so it's the round trip, very simple to understand. Packet loss is the number of packets that were sent but got lost somewhere in the process, due to the network connection or other issues: if a packet is sent and doesn't come back, or is sent and never received, it was lost in transport, so to speak. ping reports that back to you too, which helps you understand the strength of the signal to that particular host and how reliable it is.

The basic usage looks like this: ping followed by the hostname or IP address, so ping google.com, for example, or ping followed by an IP address directly; it pings the host just to tell you whether it's up. For the most part, that's why we use ping: just to see whether a host is actually up. When it is up, the result looks like this: 64 bytes from that particular location, that host's IP address, were sent back to you, and the response time was 12.3 milliseconds, then 11.8, then 12.1, and so on; that's exactly what it looks like when a host is up. When it's not up you don't see anything like this; it will typically just
say the connection was not available, or the host is down, something along those lines. And if you do see these replies, they keep coming every second or two, and they keep coming until you stop them, which you do with Ctrl-C, which is actually, I think, on the next slide. Here's the breakdown of what we just saw: the "64 bytes from" the IP address was the reply received from that address, Google in this case; the sequence number of the packet starts at zero and increments by one each time; the TTL, the time-to-live value, indicates the maximum number of hops the packet can take before being discarded; and then there's the actual time, the round-trip time for that packet to be received.

To stop it, you press Ctrl-C, which cancels the request, unless you actually told ping how many packets to send up front, ten packets or five packets for example, just to make sure the host is up; then it runs, and after those packets it stops on its own. But if you just run ping google.com by itself, it keeps running until you stop it with Ctrl-C. For the additional options: -c is the count, oh, there we go, I keep getting ahead of myself. It specifies the number of packets to send, so after five packets it stops pinging. You can also set the interval between packets with the -i option, giving it the number of seconds to wait. If you don't want to show up on someone's intrusion detection system as constantly pinging them, you can have ping wait 5 or 10 seconds between packets, just to make sure you still have a live connection, without it showing up on their IDS, their network intrusion detection system, as somebody massively pinging them to see whether they're up, whether they have service.

There are some additional options here too. We were talking about flooding: with the -f option, ping floods, meaning it sends as many packets as fast as it possibly can, so the millisecond gaps you saw become much, much smaller, and a massive flood of packets gets sent to google.com (in that particular case, I don't think Google would even care). This is typically done to slow down the host you're trying to reach: either you want to test its ability to handle a flood of traffic, or you actually want to carry out a denial-of-service attack and stop it from operating.
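As a compact reference for these options, a minimal sketch (google.com is just the example target; flood mode generally requires root):

    # Ping until interrupted with Ctrl-C
    ping google.com

    # Send exactly 5 packets, then stop
    ping -c 5 google.com

    # Wait 2 seconds between packets
    ping -i 2 google.com

    # Flood mode: send packets as fast as possible (root required)
    sudo ping -f google.com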
In summary, the ping command is a vital network troubleshooting tool: it tests the connection between devices and confirms the device is actually up; it measures the time it takes for packets to go back and forth, showing how reliable the connection is; and if you have any kind of network issue, it gives you information about packets lost in the process, the length of time packets are taking, and the status of the connection, meaning whether the connection is actually up and whether you can send a ping and receive something back.

traceroute is the type of tool that helps you track the route a piece of traffic, a packet, takes to reach a particular location. When you run it, it sends a series of packets to the destination IP address with gradually increasing time-to-live (TTL) values, which determines the maximum number of hops, the number of internet routers, the packet can traverse. When you send something out, it goes through multiple internet routers before it lands at its particular location, and it's rarely the same number of hops twice: it could be one hop, it could be ten; it just depends. The TTL value starts at one, so the first packet travels only to the first hop before being discarded, and with each packet after that, the TTL is incremented by one, meaning it takes two hops on the second run before being discarded, then three hops on the third, and so on; the hops, again, are the connections to the routers, the intermediary destinations before the final one.

traceroute works with ICMP messages: when a packet is discarded because its TTL expired, the router sends an ICMP "time exceeded" message back to you as the source, and it includes information about the router in question, allowing traceroute to identify the hop where the packet was discarded along the way. The process continues until the packets actually reach the destination and complete the path, and then it tells you the maximum number of hops it took to get there. Using the information from each hop, traceroute constructs the route taken by the packets.

Again, this is about measuring connectivity and the strength of a network: the strength of your host's connection to another host, of your internet connection and routers, and how long it takes for something to get through. If something has been discarded along the way, that's worth thinking about, and if too many packets are being discarded, that's kind of a red flag: okay, we need to troubleshoot this connection, because for whatever reason we're dropping a lot of packets. When paths are completed, traceroute records that information and shows you how many of the packets sent out actually reached the destination. Running it is very similar to running ping: traceroute google.com in this example, or traceroute with an IP address, and it starts sending packets out, tracking the number of hops, whether packets are being discarded, whether the connection is being established, and whether the packets
reach the final destination you want them to take. When they do, it shows you the route taken and how many hops were needed to get to that particular destination. When you run it, the output looks like this: each numbered line is one TTL value, and each line shows three probe attempts with their individual round-trip times; as the TTL increments and the probes travel further, the later lines generally show longer response times. Here's the breakdown of what we just saw: the IP address at the beginning of the first line is the IP address of the first-hop router, and the round-trip times of the three packets sent to that first hop were the shortest we got, since it's the closest. The lines after that are the IP addresses and round-trip times for each of the hops along the path. More often than not you'll see a few dozen, maybe twenty-something, hops, with data points for each router it connected to before finally landing at the location you wanted, which in this particular case is the 142.250.74.x address. Here there were only three hops, from us to them, which is very short, to be brutally honest with you; more often than not it takes more than three hops to go from where we are to a location like this. So it just traces the route that your particular gateway took to reach the actual IP address, Google's in this case.

If you want to specify the maximum number of hops the trace should take, it's the -m option, m for max, and you give it the number of hops allowed before reaching google.com. You can also set the size of the probe packets: with the common Linux traceroute, the packet length is supplied as a trailing argument after the destination (note that -p, despite the name, sets the destination port rather than the packet size). So to cap the hops you use -m, and to set the packet size you add the length at the end.

In summary, the traceroute command is a powerful tool for diagnosing network connectivity issues by identifying the path packets take to reach a destination. It helps pinpoint where delays or failures occur along the route by looking at the timestamps and which packets were discarded, which makes it very valuable for troubleshooting your network connections: you'll see how many packets were dropped, what was taking the longest to reach a location, and the specific routers responsible for the connection drops or the delays.
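A minimal sketch of those invocations, with example values:

    # Trace the route hop by hop
    traceroute google.com

    # Stop after at most 15 hops
    traceroute -m 15 google.com

    # Use 120-byte probe packets (packet length is the trailing argument)
    traceroute google.com 120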
responsible for those drops or delays.

netstat, short for network statistics, is another command-line tool that displays network-related information: the active connections you have, the routing tables, interface statistics, masquerade connections (a really fun word), and multicast memberships. It is used very regularly for monitoring and troubleshooting network issues, and the basic usage is to run it with a variety of flags. To view the active listening ports using netstat, you can run it with the flags you see here: -t for TCP connections, -u for UDP connections, -l for only the sockets that are in listening mode, and -n for numerical addresses instead of host names, so you get IP addresses rather than resolved names. If you did just -t it would only show TCP connections; just -u, only UDP; and if you didn't include -l it would show sockets whether or not they are listening. So this combination is essentially looking for all of the TCP or UDP sockets that are in listening mode, along with the IP addresses for those particular connections.

This is what the output could look like. You have the protocol column (tcp, tcp6, udp, udp6), so whether it is TCP or UDP; the receive and send queues; the local addresses and the ports assigned to them: port 22 is the secure shell server and port 80 is the HTTP server, while the two UDP ports shown here I honestly haven't memorized, so I can't tell you which services they stand for. Then come the foreign addresses, if any, and the state. The TCP sockets on our local addresses are in the LISTEN state; the UDP entries show no state, because UDP is connectionless.
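As a one-line sketch of that invocation:

    netstat -tuln    # TCP and UDP listening sockets, numeric addresses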
ss is socket statistics, the modern alternative to netstat. It provides essentially similar information and functionality, but supposedly with better performance and more detailed output. It is part of the iproute2 suite, the same package as the ip command we dealt with earlier, and it is the preferred tool on a lot of Linux distributions. You run it very similarly to what we just did: instead of netstat you type ss and then give it the flags you want, exactly as before, so we get the TCP and UDP listening ports as well as the numerical addresses for them. The output looks similar to what you saw previously, except that the state (LISTEN, or UNCONN for unconnected) that was previously at the very end of each line is now at the very beginning. It gives you the send and receive queues, the addresses and ports that are connected or listening, and then the peer address and port on the other side. So that is the example output for the ss command, the socket statistics command.

For some additional options, we can look at established connections: you can show only the active, established TCP connections, or all TCP connections, listening and established, which is what you see here; again, you can use flags singly or combine them. You can also look at process information: -t and -u for TCP and UDP, -n to turn off name resolution so we just see the IP addresses, -l for listening, and then -p to see the PID, the process ID, and the program name as well. So we get everything we had before; we just added that last flag so we can also see which process is connected or running in each case.

In summary, both netstat and ss are very powerful tools for monitoring and troubleshooting our network connections, and they give you a lot of great detail: whether something is connected or listening, the IP addresses, and so on. netstat is technically older and very well known; ss is the modern version; but they both offer essentially the same information you are looking for. I did like the output of ss a little better, because it seemed a little friendlier to the eye, but for the most part they give you the same information you would need.
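And the modern equivalents as a sketch (the -p column generally needs root to show every owning process):

    ss -tuln          # the same listening-socket view with the modern tool
    sudo ss -tulnp    # add the owning process ID and program name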
And finally we have arrived at the conclusion of our network fundamentals, which is firewall settings. ufw, the uncomplicated firewall, is the most common and about the simplest firewall management you can get via the command line. It is straightforward, it creates and manages iptables firewall rules under the hood, and it is particularly popular on Ubuntu and all of its derivatives because it is simple and very easy to use.

To enable the firewall you just run sudo ufw enable: it activates the firewall and enforces any and all of the rules you have configured, so once it is enabled it starts filtering incoming and outgoing traffic based on whatever rules have been set up. To disable it you do the same thing with sudo ufw disable, which deactivates it, and any traffic you were previously filtering is no longer filtered.

If you want to allow a service, for example SSH, which is port 22, you run sudo ufw allow ssh, and it allows traffic for that specific service. SSH stands for secure shell, and it lets somebody connect remotely to the device ufw is running on. You can deny by port number as well as by service name: sudo ufw deny 80 blocks HTTP traffic on port 80, preventing access to web services running on that port. Then there is the status of the firewall: sudo ufw status shows whether it is actually running and what all of the active rules are, any deny rules and any allow rules. So if you applied a deny 80 or a deny ssh, you can run status and check whether those rules are actually active and currently in force.

You can do a port allowance as well, similar to what we did with the service name: sudo ufw allow 443/tcp allows HTTPS, which is just the secure version of HTTP (port 80), meaning it carries TLS/SSL so that all communication with your browser is encrypted, and it runs over TCP. UDP on the web is typically used for things like video streaming, so if you deny 443/udp you may block some video viewing in the browser; in this case, though, we are allowing the TCP traffic on port 443 that is typical for secure web connections. If you wanted to deny UDP traffic on port 25, which is used for the Simple Mail Transfer Protocol, the emailing port typically used on a network, you would just run sudo ufw deny 25/udp, and it denies all UDP traffic on that port.

You can also delete rules. If you previously ran sudo ufw allow ssh and now want that rule gone, you just say sudo ufw delete allow ssh; it is very simple, very intuitive, and the syntax is quite easy to use. You can delete a deny rule in a very similar way: if we denied port 80 traffic, we say sudo ufw delete deny 80, and that rule is removed, so port 80 traffic is handled by your default policy again.

Then there is logging. If you want to enable the logging done by the firewall, which is highly recommended (log everything that happens with your firewall), you run sudo ufw logging on, and it logs all firewall events. I can't imagine running a firewall without collecting its logs. It is very important to have a record of the traffic coming through: for example, if people keep trying to reach your port 22, your secure shell port, after you have denied access to it, or if port 22 is allowed and somebody keeps logging in with the wrong password, so that you suspect you are the victim of a brute-force attack against your remote-access port. So turn logging on; you can of course disable it again with sudo ufw logging off.

There is also an all-encompassing, blanket kind of rule: the default policy. sudo ufw default allow incoming allows all of the incoming traffic to your device, meaning anybody at all could try to communicate with your computer on any port you have open, which includes your port 22, your DNS port, and all of the other 65,535 possible ports. To refuse incoming traffic you do the same thing with deny instead of allow: sudo ufw default deny incoming drops any unsolicited connection coming to your device. Keep in mind what that actually means: replies to connections you initiate are still tracked and let through, but anything you host becomes unreachable. The usual pattern is exactly that: deny everything incoming by default, then go and allow the specific ports you actually want reachable, say port 80 and port 443, and refuse everybody else. In the same way, you can set sudo ufw default allow outgoing, so any request you make out to the world is permitted, or sudo ufw default deny outgoing, which blocks outbound requests. Maybe people on a machine shouldn't be able to reach the internet at all and should just work on the local computer; schools sometimes do this so that students only use the machine for schoolwork, so they deny all of the outgoing traffic from that particular computer.

In summary, ufw is a very simplified process for managing firewall rules, with very intuitive commands like enable, disable, allow, and deny, and you can work either with the service name, like ssh, or with the port number, like 22, which is the same port as SSH: you can allow port 22, allow port 443, and so on.
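Put together, a typical hardening pass looks something like this sketch (your defaults and rule order may differ):

    sudo ufw enable                  # activate the firewall
    sudo ufw default deny incoming   # drop unsolicited inbound connections
    sudo ufw default allow outgoing  # let the machine reach out freely
    sudo ufw allow ssh               # permit SSH (port 22)
    sudo ufw allow 443/tcp           # permit HTTPS
    sudo ufw deny 25/udp             # refuse UDP on the SMTP port
    sudo ufw logging on              # record firewall events
    sudo ufw status verbose          # confirm what is active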
iptables is another command-line firewall utility that allows admins to configure network packet filtering, network address translation, and a lot more. It is the more complex counterpart of the uncomplicated firewall we just went through, and it operates with the Linux kernel to provide detailed control over how packets are handled.

The core tables in iptables start with filter, one of the main ones: it filters packets. You handle all incoming packets with the INPUT chain, packets routed through your device with the FORWARD chain, and all outgoing packets with the OUTPUT chain; those three chains all belong to the filter table. Then you have nat, the network address translation table, which is for masquerading and port forwarding: it doesn't expose your actual IP address but substitutes a different one, so the world doesn't see yours, which is very useful for masquerading, essentially disguising your IP address. Its chains are PREROUTING (altering packets before they are routed), POSTROUTING (altering packets after they are routed), and OUTPUT (altering packets generated by your device itself on their way out to the world). What masquerading does is change the source IP address as the traffic leaves, so that if your traffic is ever intercepted or sniffed by somebody, they see some other address rather than your actual source IP, and they can't target you directly. Then there is mangle, which is used for specialized packet alterations, such as changing the Type of Service field or marking packets; PREROUTING lets you alter incoming packets before they are routed, OUTPUT lets you alter outgoing packets, and FORWARD, INPUT, and POSTROUTING are available for other modifications as well. That is what mangle does.

If you want to look at the current rules, you run sudo iptables -L with a capital L, and it lists all of the current rules for the filter table, showing the rules for the INPUT, FORWARD, and OUTPUT chains. In the example output (admittedly one of the more shrunken displays on my screen, so feel free to zoom in), we have the INPUT chain with a policy of ACCEPT, the FORWARD chain with a policy of ACCEPT, and the OUTPUT chain with a policy of ACCEPT, along with the destinations for any rules that exist; here everything is accepted from anywhere to anywhere. This doesn't mean much yet, because there is hardly any data in these rules: with one exception, "accept everything from anywhere to anywhere" is basically the entire rule set shown.

Now the basic commands. To allow traffic: sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT adds a rule to the INPUT chain that allows TCP traffic on port 22, which is used for SSH. Breaking it down: -A INPUT appends the rule to the INPUT chain; -p tcp specifies the protocol as TCP; --dport 22 specifies the destination port as 22 (and remember this is an INPUT rule, so it covers our incoming traffic); and -j ACCEPT "jumps" to the ACCEPT target, allowing the traffic. If you want to block something, you use DROP as the target and everything else stays exactly the same: in this case it is done on port 80, so incoming TCP traffic with destination port 80 hits the DROP target at the end of the rule, blocking the traffic that comes through for port 80 as the destination port.

If you want to delete a rule, you use -D, which deletes the rule from the INPUT chain: here it is the TCP protocol on port 80 with the DROP target, the same exact rule we just added. Instead of appending with -A, we delete with -D, and everything else stays exactly the same. The rule was to drop all traffic to port 80; now we delete that rule so traffic on port 80 can flow again.

If you want rules to persist, you save them to the configuration file stored under the /etc directory: assuming you have iptables (and its persistence package) installed, that file is /etc/iptables/rules.v4. You run sudo iptables-save and redirect its output with the > operator, which you should remember: it sends the command's output into that specific file, rules.v4. If you want to restore a rule set that was deleted or changed, you go the other direction: iptables-restore with the < operator pulls the rules back in from the rules.v4 file. So saving uses iptables-save writing out to the configuration file, and restoring uses iptables-restore reading back from it.

If you want to view a specific table, you use -t for table: sudo iptables -t nat -L names the nat table, our network address translation table, and lists all of the rules for that particular table.

So iptables is a powerful, flexible tool, obviously more complex than what we just saw with ufw. It works at a low level, providing extensive control over how network traffic is handled, and you can configure rules for filtering packets, translating network addresses, modifying packets, and so on. Those are the features that go beyond ufw: ufw doesn't protect your IP address by rewriting it on the way out (network address translation), and ufw doesn't do the mangling that is available in iptables. So iptables is a little more complex to work with, but it also has a lot more functionality than ufw does.
All right, now on to a very important chapter, which is security and access management, and the very first section is file system security. The first portion of file system security is chroot and the concept of the isolated environment, something called the chroot jail.

chroot stands for change root. It is a powerful Unix command that changes the root directory for the current running process and its children, so we are talking about a parent process and its child processes. When you change the root directory, you effectively isolate a subset of the file system, and you create what's known as a chroot jail: that isolated slice of the file system. The jail ensures that any process running within it can't access files outside of that isolated environment, which enhances the security and control of the system as a whole as well as of the directory tree that has been isolated. Essentially, you take one directory and all of the contents within it and separate it from the rest of the file system hierarchy; processes confined inside it see that directory as the whole world, so the rest of the file system is shielded from whatever goes on inside the jail.

A better way to break this down is to ask the question directly: why would we use this? Because when you isolate an application in a chroot jail, that isolated directory and everything inside it, you limit the damage that any untrusted or compromised program can do, since those untrusted programs can't see or interact with the broader system. Let me show you a visual of this real quick.

Here is the example. This is the standard hierarchy: our actual system root, along with the binaries, home, the system directories, and so forth. Inside home we have one particular user with all of their contents, and what has happened is that we have put all of their contents inside a chroot jail, so to speak; we have imprisoned them. Everything inside this red box is now completely separate from the rest of the system. If this user downloaded something they shouldn't have, or clicked on a link they shouldn't have, whatever they did in their environment stays isolated, meaning our actual root, the binaries, and everything else inside the main system is not affected by what goes on inside the jailed portion. That is the big significance here: we have created a sandbox type of environment. Whatever happens to this user in their isolated environment will probably affect their own file system, they might lose the contents inside it, and a hacker who gets in may have access to everything within that isolated area, but they won't be able to leave it and get into the root portion. That matters because this user may not have elevated privileges, and if they are exploited, we don't want the malicious actor to escape this environment into the main environment, where root privileges live and where they could do serious damage or get access to materials they otherwise would not have had access to. So that is the visual concept of what it means to create a chroot jail.

There are other reasons chroot is useful, apart from security. If developers want to create something and test it in a controlled environment before deploying it into production, or to the rest of the company so to speak, they can do that safely inside a jailed environment: testing applications, testing configurations, and especially testing software that needs different versions of its dependencies or libraries. So it is very useful in the development context as well. And finally there is accessing and repairing systems from a rescue environment: if something has happened, say during incident response, and the main system is unbootable, administrators can boot into a rescue environment and use chroot to enter the installed system and repair it, hopefully recovering back to business as usual from the most recent backup or recovery point.

Using the command first requires that you create a directory that will serve as the chroot jail. We have already covered how to make a directory; you would use sudo, because this directory is going to live in an elevated-privilege context where other people shouldn't be able to interact with it: sudo mkdir followed by the path of the directory that will be our jail environment. Then you populate that jailed environment by copying or installing the necessary binaries, libraries, and files into it. It could be something that belongs to a user, as in the visual example, and you fill it with the libraries, files, and binaries needed for that environment to run. The basic population looks like this: copying /bin/bash into the jail, and recursively copying what is needed from lib and lib64 and from usr into the newly made jailed (imprisoned; I like saying imprisoned for whatever reason) directory. This part is very simple; the -r option makes the copy recursive, meaning it takes everything inside the library directory, all of its subdirectories, and all of their contents into the jailed environment as well.

What you want to do is ensure that the directory structure within the chroot jail mimics the standard Linux directory layout: you need the binaries, the system binaries, and the libraries laid out the way programs expect, for this to actually work. Essentially, whatever you would need in a regular, standard Linux layout for your particular purpose, you must make sure also exists inside the jailed environment, because it is isolated from the rest of the system.

Once you have transferred everything in, you use the chroot command to change the root directory for the current process to the specified directory: sudo chroot followed by the path to the directory we created with all of its contents. That changes the root directory for the current session to that directory, so for the remainder of the session you are logged into on that Linux machine, it acts as if this directory is your actual root directory. If you want to run anything, test a development upgrade, or open a file attachment to see whether it runs properly or is malicious, you do it inside this directory to protect the rest of the system, or simply to test whatever you need without affecting anything else.

Here is the full workflow from beginning to end. First, create and populate the directory: mkdir with the name of the jail, then copy everything it needs into it, /bin/bash into its bin, and the lib and lib64 pieces into the jail root. Then change into the environment: sudo chroot mychroot, and that's it. Once you are inside the chroot environment, you just want to make sure everything is running as it should, so you can run ls / to list the contents of the root directory. If you have done everything correctly, you should see only the contents of the new root environment instead of everything you would normally see: since we copied only these pieces, it should just be bin, lib, and lib64 when you run the ls command against the root directory.
Some considerations and best practices to keep in mind. Keep the chroot environment, the jail, as minimal as possible, making sure you only have what you actually need, the necessary binaries, libraries, and software, so that you reduce the attack surface. If you copy everything from your normal environment into the jailed environment, you are defeating the purpose of creating an isolated environment, so only include the things you actually need for the exercise; it is really just for that session anyway (you can reuse it, it's not that you can't, but for a given session you should only have what you need). Make sure all the file permissions within that jail are set correctly, to prevent any privilege-escalation attempts. And think about escape prevention: in case anybody does attack that jailed directory, you want to make sure they can't escape, which goes back to the very first point. Avoid running services or granting access to tools that could allow a process to escape the chroot jail; you don't want to copy in any binary that could hand an attacker an escape route, a vector out of the jail and into the rest of your system.

In summary, chroot is a very valuable tool for creating an isolated environment in Linux, essentially a sandbox, which enhances security by restricting programs to a specific part of the file system. It is commonly used for running potentially untrusted applications, as we said, for development and testing, and for system recovery. It is a very useful strategy that comes embedded within Linux: you can essentially turn any new directory you have created into a little sandbox environment to protect the rest of your system from anything that may go wrong. And you don't even necessarily need to worry about hacks; a lot of the time you just want to test a new development, an upgrade in the code or the software, and make sure it doesn't affect the rest of the system, wipe something, crash it, or accidentally delete data. So it is a very useful tool for creating an isolated, sandbox-style environment.

All right, now we are going to take another look at file permissions and ownership, and this time we will dive a little deeper into the concept. As you should already know, there are different levels of file permissions we can apply, and different categories that can have ownership of, or access to, any of the files and directories that exist on a system. The three categories are the owner, which is technically a user; the group; and others, who are also just users, everyone outside the owner and the group. So we have these three levels of ownership or access.
These three levels each carry three permissions: read, write, and execute. Remember the numerical values: the read permission is 4, the write permission is 2, and the execute permission is 1. If all of them are turned on you have a value of 7; read and write only is 6; read and execute is 5; and so on. These permissions can apply to any of the ownership categories.

First, look again at the breakdown of an ls -l listing. The very first character, represented here by a dash, is the file type: a dash means a regular file, a d means the item is a directory, an l would be a symbolic link, and so forth. The next three characters represent the owner's permissions; in this particular case read, write, and execute have all been attached to the item. The following three characters represent the group's permissions: read, no write, and execute. The last three are the others category: again read, no write, and execute. So the group as well as the others category both get read and execute but no write, and writing represents modification: you can read the file, you can execute the binary, but you won't be able to write to it or modify it. That is what this particular example represents.

Knowing this, we can look at how to change the permissions of something, which we do with chmod, short for change mode. In this example we use the symbolic form: instead of numerical values we use the symbols r, w, and x. Here we have given the user read, write, and execute (u+rwx), the group read and execute (g+rx), and others read and execute (o+rx), followed by the name of the file. You may need sudo in front of it if you don't own the file, so it would be sudo chmod and so on. Then there is the numerical mode of changing permissions, which uses the numbers themselves: the first digit represents the user, the second the group, and the third the others category. As we already established, read, write, and execute total 7, and here the group and the others have only read and execute: read is 4, execute is 1, totaling 5 each. So changing the permission numerically for this file means 755, and that represents exactly what we did in the previous case: the whole u+rwx,g+rx,o+rx spiel turns into 755.

If we want to change the ownership of a file, we do it with the chown (change owner) command. You can see sudo used in this example: sudo chown user:group filename. You just replace those two fields with the actual user and the actual group you want to assign to the file, say user one of the developers group, and it changes the ownership of the file to the specified user and group. Here is what that looks like: we assign it to alice and the developers group, and that becomes the ownership for the example.txt file.

Then there is changing the group alone, which you do with the chgrp (change group) command: you first give the group you want it changed to, say developers again, and then the file name. Again sudo is used, to give administrator-level permission to the command, because you are changing the group ownership of a file, and that should require administrator privileges. In the actual example, we run it on example.txt and assign ownership of that file to the developers group using the change group command.

Here is a sample workflow. The file is created with touch. Then we run chmod to change its permissions: the owner gets 6, which represents read and write, and everybody else gets just 4, which is read. If we then run ls -l on that file, it shows read and write for the owner and only read for everyone else. If we want to change the ownership of that file, we use the chown command with the same file name (you saw this in the chown portion of the presentation already, so this is literally a duplicate command): we assign the owner alice, so alice is the user, and the group ownership goes to the developers group, for example.txt. Then you run ls -l again so you can see the file ownership: the first column right after the permissions shows the name of the owner, and the very next column shows the name of the group, which would be alice and developers in this particular example. And in the last example we change the group, which you have already seen as well, but we wanted to reaffirm this series of commands: you run chgrp, and then ls -l shows you the new group.

In summary, understanding and managing file permissions is very important to system security, because you don't want people who should not have access to a specific file or directory to actually have access to it. To confirm that you have assigned permissions only to certain people or certain groups, you use these various options. Some people should have no permissions at all, and you can remove the read, write, and execute permissions from a category by using a minus instead of a plus: in the earlier example where we did o+rx for the others category, we could just as well do o-rx and it would remove those permissions, and the same with the group, g-rx. It all depends on your instructions: the company will tell you that these groups should not have access to these files or directories, so you remove everyone's permissions on that directory and all of its contents; or a new employee joins a group with access to that group's assets, but a certain subcategory of those assets should be off limits until they reach a level of seniority, and only then do you grant access. So you remove access to that specific category of files or folders by naming the owner, group, or others and using a minus. If you do it numerically, you change the digits to zeros, and zero represents no permission: 700 means the group as well as the others category have no permissions at all; they can't read, they can't write, and they can't execute that specific file. That is what matters about chown (change ownership), chmod (change mode, meaning the permissions), and of course chgrp, which changes the group that owns the asset. These are very simple commands, but very powerful ones, especially when it comes to access control and making sure that people who should not have access to something don't have it, while whoever needs access actually can get to those assets.
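The whole sequence in one sketch (alice, developers, and example.txt are the example names from the slides):

    touch example.txt                         # create the file
    chmod 644 example.txt                     # owner rw-, group and others r--
    chmod u+rwx,g+rx,o+rx example.txt         # symbolic equivalent of 755
    sudo chown alice:developers example.txt   # change owner and group together
    sudo chgrp developers example.txt         # change the group only
    chmod 700 example.txt                     # strip all group/other access
    ls -l example.txt                         # verify permissions and ownership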
Which brings us to access control lists. ACLs are a way to provide more fine-grained control over file and directory permissions than the standard Unix permissions we just reviewed. With an ACL, an access control list, you can define permissions for multiple users and multiple groups on a single file or directory, allowing specific access levels beyond the traditional owner, group, and others model.

First and foremost, the entries of the access control list: it is basically a list format, where each entry specifies the permissions for a single user or a single group. An entry consists of the type (user or group), the identifier (the user name or group name), and the permissions set for it (read, write, execute); I will show you examples of that, obviously. There are user ACL entries, group ACL entries, a mask entry, and default ACLs. A user entry specifies permissions for a specific user, and a group entry for a specific group. The mask defines the maximum effective permissions for every entry other than the file owner and the others category: it is a ceiling, and nothing beyond it takes effect. A default ACL, set on a directory, specifies the default permissions that are inherited by new files and directories created within that directory: so if you give a directory a default entry granting read, write, and execute, items created inside it inherit those permissions.

The basic command here is setfacl, which sets the ACL. In this particular case it is run with -m, then u:username:rwx, then the file name, where u represents user, the middle field is the actual name of the user, and the last field is the permissions being assigned to that user. One note on the breakdown of the command: the -m is not "mask", it actually stands for modify, my apologies for that. The explanation is that we are adding read, write, and execute permissions for a specific user on that file: -m to modify the ACL entries, and u:user:permissions is the format for setting that user's permissions, so this could be u:alice:rwx.

If you want to view the ACL for a specific file, you use getfacl (the first command was setfacl, this one is getfacl): it displays the ACL entries for the specified file, showing all the users and groups with their defined permissions. Whoever they are, and whatever level of permission they have on this particular file, is displayed by running getfacl.

Here is the example workflow. We use sudo, because this does require administrator privileges, and we set the ACL on this file by modifying it: the user alice gets read, write, and execute permissions on the file. So sudo setfacl -m u:alice:rwx example.txt, with the u, the username, and the permission levels each separated by a colon. That is how we add a user permission for a given file. If we wanted to add a group permission, it would be exactly the same kind of entry: still setfacl, still -m to modify; the only thing that changes is that instead of u we use g for group, then the name of the group, developers, and then the permissions. Here they all get read and execute, so they can't write to or modify the file, but they can read it and execute it: sudo setfacl -m g:developers:rx example.txt. That is how we set permissions for a group in this case.

Then there is setfacl for directories with the -d option, which works on the default ACL: the one that new files created inside the directory will inherit. So sudo setfacl -d -m, then the same user entry, and now we give the path to the directory instead of the name of a file. And then viewing: if you want to see the ACL permissions for all of the groups and users that have been assigned to a particular file, you just do getfacl and the name of the file, very simple. Here is what the example output of that command looks like: the name of the file is example.txt, the owner is root, the group is root; then one user entry has its permissions, and another user has read, write, and execute; the owning group has its permissions, and the group developers has read and execute; the mask is read, write, and execute; and the others category has only read permission on this particular file.

Removing ACL entries is done with the -x option: it is still a setfacl command, but with -x you remove a particular user from the file's list. You don't specify permissions, because you are literally removing that entire user from the access control list on the file; whatever permissions they had are all removed along with the entry. Removing all of the access control entries for a given file is done with the -b option: still setfacl, then -b, then the name of the file. For the default ACL on a directory, you would combine that with -d and give the path to the directory, but it is still a setfacl command that clears out the entries.

So -x is for a single removal, and it can be a group as well: you can do -x, then g: and the name of the group, say developers, and then the name of the file, to remove that one entry. If you want to clear the entire set of entries, removing everybody from the access control list, you do -b and the name of the file (or -d together with -b and the directory path for a directory's default ACL), and it removes every single entry from the ACL on that asset.

To modify the mask, you also use setfacl with -m, but in this case we are modifying the mask entry: m::rwx for this particular file. Note that the mask is a ceiling, not a grant: an entry's effective permissions are the intersection of its own permissions and the mask. So if alice's entry grants only read, a mask of rwx still leaves her with just read; but if the mask is tightened to read-only, even an entry with read, write, and execute is cut down to read in effect. That is why the mask is described as the maximum effective permission for the named users and groups on the file.

In summary, the access control list is an advanced method for managing file permissions in Linux, allowing specific access levels for multiple users and groups, and the commands setfacl and getfacl enable you to set and view these fine-grained permissions easily. You already know the various options: -m modifies entries (with u: in front of a username, g: in front of a group name), -x removes a single user or entity from the list, -d works with the default ACL on a directory, and -b removes every entry from the list. I am not going to repeat all of it; you can just rewind and rewatch that section. setfacl and getfacl are the commands we use to set the access control list for any given asset and to see the access control list for that asset.
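Or, collected in one place, a sketch of the ACL commands (alice, developers, example.txt, and projectdir are illustrative names):

    sudo setfacl -m u:alice:rwx example.txt       # per-user entry
    sudo setfacl -m g:developers:rx example.txt   # per-group entry
    sudo setfacl -d -m u:alice:rwx projectdir/    # default ACL, inherited by new files
    sudo setfacl -m m::rwx example.txt            # raise the mask (the effective ceiling)
    getfacl example.txt                           # inspect every entry
    sudo setfacl -x u:alice example.txt           # remove one entry
    sudo setfacl -b example.txt                   # remove all extended entries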
done right so uh the first thing that we’re going to talk about is the ufw the uncomplicated firewall the uncomplicated firewall is a very simple firewall but it’s a very powerful firewall it doesn’t have require uh it doesn’t require like very complicated uh commands or understanding a variety of different sophisticated binaries uh to be able to run it it has easy syntax and it’s powerful it actually does what it needs to do right it gets the job done essentially um to first run it to actually activate it you need to run pseudo ufw enable and essentially everything else that we do is going to require a pseudo command to preface the rest of the commands so first you need to enable the firewall you need to actually activate the firewall the next thing you want to allow or deny traffic now to be able to allow traffic uh in this particular case what we’re doing is we’re in the First Command uh this Command right here is allowing incoming traffic on Port 22 which is SSH right this particular Comm and you have to if you want to do outgoing traffic as well if you want to allow outgoing traffic on SSH you would need to also say pseudo ufw allow outgoing excuse the capital S right here it’s a lowercase s for pseudo so anything that you want to uh set up as a rule that would be outgoing outbound traffic you need to put the keyword out in it so this particular command at the very top this first command is only dealing with incoming traffic so it’s allowing incoming Port 22 traffic same thing with this command it’s allowing or it’s denying incoming Port 80 traffic in this particular case we would be denying outgoing particular uh uh HTTP traffic on Port 80 right so uh pseudo ufw deny outport 80 traffic would be denying outgoing outbound HTTP traffic on Port 80 if you want to check the status of what’s going on with all of the rules that you have have that would be done with the ufw status command so it will show you whether or not the firewall is actually active and then what all of the active rules are that are running on the ufw firewall this is a new command that we’re going through which is uh allowing traffic from specific IP addresses right so you can say ufw allow from this IP address to any port on Port to any port 2022 I keep saying 2022 I don’t know why I keep doing that but it would allow anything from this particular IP address to any port 22 so it allows SSA traffic from this IP address on Port 22 um you can do a deny that would be on Port 22 as long as you have this allow from this IP address now you’ve whitelisted this particular IP address and even though everybody else could be denied on Port 22 this particular IP address will be allowed to come in and again this is inbound right cuz we haven’t done allow out we’re only doing allow which means that it’s inbound traffic incoming traffic and it’s coming from this particular IP address on Port 22 so this is how you allow a specific IP address IP tables would be the more complex or we can call it the sophisticated it would be the sophisticated counterpart of ufw um it allows more detailed control over the network it allows ad administrators to create complex rules for packet filtering Network address translation which essentially means that it masks your IP address so that people from the outside can’t see your actual IP address so when you send out traffic from your network it’s uh it’s masked essentially um by a different IP address and that IP address would be what the outbound or the outside world would see which is a very very 
iptables is the more complex, sophisticated counterpart of ufw. It allows far more detailed control over the network, letting administrators create complex rules for packet filtering and for Network Address Translation, which essentially masks your IP address so that people on the outside cannot see your actual address: when you send traffic out of your network, the outside world sees a different IP than the one the traffic actually came from, which is a very powerful tool. There is also the mangle table and so on, so iptables is very useful, but it does require a better understanding of networking concepts. The first thing is simply to see what your current rules are: sudo iptables -L, with a capital L, shows all of the current rules for your filter table. This is the other thing to know: a variety of tables exist inside iptables, and without designating which table you want to look at, the default is the filter table, so this command shows you the INPUT, FORWARD, and OUTPUT chains of the filter table. To add a rule, you use -A, a capital A, which you can remember as append or add, and notice the chain name is also in all caps: sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT appends a rule on the INPUT chain, for the TCP protocol, on destination port 22, with the target ACCEPT. The -j stands for jump, telling iptables which target to jump to, and since INPUT is the chain for incoming traffic, this accepts inbound SSH. To block traffic, you again add to the INPUT chain, protocol still TCP, but the destination port in this case is 80, and the target is DROP: where the previous rule said ACCEPT, a blocking rule says DROP, not deny. To save the rules you just created, you use the iptables-save command and write its output into the configuration file rules.v4, which lives inside the /etc/iptables directory once iptables persistence is installed. These are the rules that persist: when you restart the computer, everything you just created is saved, so the next time the machine reboots all those rules are still active. Notice the direction of the arrow: saving uses the greater-than sign to write into rules.v4, while restoring uses iptables-restore with the less-than symbol and the same path, reading the rules back from the saved rules.v4 file. That way, if somebody changed your rules, or you changed them yourself and want to go back to the previous rule set you saved, you pull all of those rules back from your rules.v4 file with the restore command.
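Here is that workflow in sketch form; the rules.v4 path assumes the Debian-style iptables-persistent layout, and the sh -c wrapper is there because the redirection itself needs root:

sudo iptables -L                                       # list current rules (filter table by default)
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT     # accept inbound SSH
sudo iptables -A INPUT -p tcp --dport 80 -j DROP       # drop inbound HTTP
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'    # persist rules so they survive a reboot
sudo sh -c 'iptables-restore < /etc/iptables/rules.v4' # restore a previously saved rule set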
In summary, we have simplified management with ufw, the Uncomplicated Firewall, and we have iptables for advanced, detailed network traffic management, including Network Address Translation, the mangle table, and everything else we covered in the first description of these tools. We needed to revisit firewall management because we are talking about network security under the security chapter, so it was important to re-review these tools, and we are going to actually use them when we get to the practical section of this training series anyway: we will run a bunch of commands and create a bunch of rules relevant to your labs and, ultimately, to your skill set as a Linux administrator when you want to manage and configure firewall rules for either ufw or iptables. Another really important security tool is SELinux, Security-Enhanced Linux. SELinux is a security module in the kernel that provides a mechanism for supporting mandatory access control policies, and it is commonly used in Red Hat-based distributions like Fedora, CentOS, and Red Hat Enterprise Linux. The key concept here is policies: we have policy-based security, and the policies inside SELinux define the rules for which processes and users can access which resources. These policies are strictly enforced, providing an additional layer of security beyond traditional discretionary access control, also known as DAC. There are three modes of operation: enforcing, permissive, and disabled. In enforcing mode, the SELinux policy is enforced and access violations are blocked. In permissive mode, the policies are not enforced, but violations are logged for auditing purposes: malicious actions are not blocked, but everything is recorded so it can be audited later. And in disabled mode, SELinux is simply turned off and not working. Our very first basic command is for seeing the status of what is going on: sestatus shows the current status of SELinux, including which mode it is running in and which policy is loaded. To switch modes you use setenforce: sudo setenforce 1 sets SELinux to enforcing mode, ensuring all SELinux policies are strictly enforced, while sudo setenforce 0 changes it to permissive mode, allowing violations to be logged without being blocked. So one represents enforcing and zero represents permissive: one means the rules are actively enforced, zero means everything is logged but nothing is blocked. The practical examples are simply those two setenforce commands plus sestatus for checking the result, and we will go into the practical commands for this tool in depth when we reach the practical section of this training series.
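In sketch form, those SELinux mode commands look like this; note that setenforce only toggles between enforcing and permissive at runtime, while fully disabling SELinux is done in its configuration file and takes a reboot:

sestatus             # show current mode and loaded policy
sudo setenforce 1    # enforcing: policy violations are blocked
sudo setenforce 0    # permissive: violations are only logged
getenforce           # print just the current mode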
Another useful tool is AppArmor, short for Application Armor. AppArmor is another MAC system (MAC here meaning mandatory access control, not Apple's macOS) that provides an additional layer of security by confining programs according to a set of profiles, and it is commonly used in Debian-based distributions like Ubuntu, because it deals with applications specifically. The key concept in this case is profiles: where SELinux had policies, AppArmor has profiles, and the profiles define the access permissions for individual applications, specifying which files and capabilities an application can actually access and preventing it from performing unauthorized actions. Think of the popup that appears on Windows or macOS asking, for example, 'Do you want to allow Google Chrome to access your microphone?' That permission is specific to the Chrome application, and by answering you grant that one application access to your microphone, or your Downloads folder, or your Documents or Pictures, and the operating system keeps asking for permission as the application tries to navigate across your computer. AppArmor is similar: on an Ubuntu-based distribution you are dealing with the application itself, and the app needs to be given permission to do a variety of tasks across your machine. AppArmor profiles run in one of two modes. In enforce mode, the AppArmor profile is enforced and unauthorized
access attempts are blocked. In complain mode, unauthorized access attempts are allowed but logged for review. This mirrors the enforcing and permissive modes we saw in SELinux; they are just named differently. The basic command to check the status is sudo aa-status, aa standing for AppArmor, and it displays the current status of AppArmor, including which profiles are loaded and their enforcement mode. To set a profile to enforcing mode you run sudo aa-enforce followed by the path to the profile, a fairly lengthy path that identifies the specific application whose rules are to be strictly applied. Note that you need the application's name as it appears among the binaries on disk, which is usually quite different from its display name: 'Google Chrome' with a capital G, a space, and a capital C is rarely what you will find; it is typically all one word, all lowercase. So find the name of the application as it stands inside /usr/bin, /opt, or wherever the application is actually installed, and use the corresponding profile path; then you can enforce the AppArmor rules upon it. To set a profile to complain mode instead, you run sudo aa-complain with the same path; the only thing that changes is aa-complain in place of aa-enforce, and that is how we designate complain mode for that specific application. A couple of practical examples: sudo aa-status gets the status of AppArmor, and sudo aa-enforce /etc/apparmor.d/usr.bin.firefox enforces the Firefox profile, so whatever rules it contains are now applied to Firefox.
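A quick sketch of those AppArmor commands, using the Firefox profile path from the example:

sudo aa-status                                     # show loaded profiles and their modes
sudo aa-enforce /etc/apparmor.d/usr.bin.firefox    # strictly enforce the Firefox profile
sudo aa-complain /etc/apparmor.d/usr.bin.firefox   # log Firefox violations without blocking them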
In summary, SELinux and AppArmor are both robust security mechanisms for Linux systems built on mandatory access control policies, which is what MAC stands for: you have discretionary access control, DAC, and mandatory access control, MAC. SELinux is typically used on Red Hat distributions and focuses on system-wide policies across the entire system; AppArmor is used on Debian-based distributions and is honed in on application-specific profiles. Both are very useful, and both deal with mandatory access control policies. That is really what MAC means in practice. When you go through something like the CompTIA Security+ exam, which I had the privilege of taking, you get a lot of scenario questions, a breakdown of 'this is mandatory access control and this is what it applies to,' and so on, but you do not really get it until you work with tools that enforce those things. Then it clicks: when I block Firefox from accessing my microphone, that is technically a mandatory access control policy mandating that this specific application cannot access my microphone, and if I later want Firefox to get microphone access for a voice call on Discord, I need to go and grant it that permission, inside AppArmor for example if I am running Linux. This is very important to understand; this is where the rubber meets the road. Enforcing these MAC concepts can be as simple as a popup on your screen asking whether to give an application permission to do something, or it can be you going into your settings or your terminal and enforcing a profile against the Firefox tool itself. That is the difference between, and the common ground of, AppArmor and SELinux. All right, now we are going to switch gears a little and go into user authentication and configuring Secure Shell, which really do go hand in hand. First and foremost, user authentication methods. There is password-based authentication, which is the default mode for authenticating any given user: everybody gets a password, and you have to enter your password correctly to authenticate yourself, to prove that you are who you say you are. Users are given a username and a password to gain access to a system or an application. This is not news; anything and everything in this world has a username and password. If you are watching this on YouTube, you most likely have a Google profile connected to
your YouTube account, where you provided your Gmail address and your password. If you watch Netflix, you have a username and password. To access your phone there is a passphrase or PIN you have to enter, and when you do a face scan or fingerprint, that is still authentication, now moving into biometrics, though I am getting slightly ahead of myself. You authenticate yourself in a variety of ways, and the first, most basic one is the password. To improve the security of password-based authentication you add something like multi-factor authentication, also known as MFA: a one-time code sent to your phone or your email account, a secret code you enter before you can access the system. The fingerprint scan is the biometric version of a second factor that helps you multi-factor authenticate. These factors are done in addition to the password. For example, if you access your bank account from a new computer or a new browser, or from the same browser after you reformatted the machine, or after you wiped your browsing history in Chrome so your caches and cookies are no longer saved, the site says: you are logging in from a new browser, we are going to send you a one-time code. You enter your email and password, then the one-time code, and then it asks whether you want it to remember this browser for future logins, which is how you develop new cookies and caches. That is the whole process of multi-factor authentication: it enhances password authentication and is a very useful way to reduce the risk of unauthorized access, because even if somebody gets your password, they most likely will not also have access to your phone number or your email. Hopefully; it is scary to think about, but it is possible for people to get access to your email and even your phone, just not as easy as running a dictionary attack against a weak password. So again, I am going off on tangents, but we can enhance password-based authentication with multi-factor authentication, and something as simple as sending somebody a code is one of the most useful forms of it. The next level up is public key authentication, which is more secure than a password because it involves a key pair: a private key and a public key. The private key remains with the user while the public key is placed on the server. This goes into the realm of symmetric and asymmetric encryption, the same kind of cryptography behind the transactions that happen in your browser: when you view a website, that site holds a private key that
you, the visitor, never see; it stays with the site, vouched for by its certificate authority, while the public key is handed to you, the viewer, so that your browser can verify the site and carry on the encrypted back-and-forth of interactions and transactions within that website. Public key authentication is, in effect, an amplified version of password-based authentication. It enhances your security: you are no longer subject to brute-force attacks, because you cannot realistically brute-force a private and public key pair; it does not require the transmission of passwords over the network, because you are just dealing with the keys; and it allows automated, passwordless logins, which are particularly useful for scripts and applications. It does still include at least a one-time login, because an initial authentication has to take place with a password, but after that the process is automated: the public and private keys do the communicating with that server or application, so you no longer need to enter a password. Loosely speaking, this is also how a browser 'remembers' who you are: the next time you log on to Facebook or Gmail, even after closing the tab or the browser, it logs you straight back in because a key is in place. To generate a key you use ssh-keygen. This is a slightly complicated process, so we will not go too deep into it here; we will when we get to the practical section of this training series. In short, ssh-keygen generates a new key pair, you are prompted to enter a file to save the key to, which is typically ~/.ssh/id_rsa, and optionally you can set a passphrase as an additional layer of security on the key itself, so that anybody who wants to use that key file must first enter the passphrase: multiple layers of security. You can also designate the type and algorithm of the key you want to generate, and by default modern versions display the key's fingerprint as a SHA-256 hash, which is a very powerful hashing algorithm. When you actually run ssh-keygen, the prompts appear one at a time: it says it is generating, asks which file to save the key to (you press Enter to accept the default), asks for a passphrase (press Enter to skip, though I do recommend you set one), asks you to verify the passphrase, reports that the private and public keys have been saved, and finally prints the key fingerprint, a SHA-256 value, along with
the username and the host, followed by a long series of characters. It looks like jargon, and for the most part it is: it looks very much like a long piece of encrypted code, and you cannot make sense of it with the naked eye. You would need to feed it into some separate decoding tool even to try, and for the most part you still could not, because it is not designed to be decoded or decrypted without its actual key. Whatever is generated, nobody can make any sense of it unless they have the key; they cannot unlock the lock without that key, which is exactly why this concept is so powerful. Now that we have generated the key, we want to copy it to the server. You run ssh-copy-id user@server, and it copies the public key to the server, placing it inside the authorized_keys file in the ~/.ssh directory for the specified user; this step is what allows the server to authenticate the user based on the public key. If ssh-copy-id is not available, you can copy the public key manually by going and finding where it is. Recall the key location: when we generated it, it said it was saved inside the user's home directory under ~/.ssh, which is a hidden directory (a regular ls without showing hidden files will not display it; we will look at hidden files in the practical section), and that hidden directory contains the id_rsa key files. The manual version is a chain of commands: cat, as you should already know, displays the contents of the public key file, and we pipe that output into an ssh command that logs into the server as the specified user, makes the ~/.ssh directory, and appends the piped contents onto the end of the authorized_keys file there; the double ampersands, as you should already know, chain the commands together. So it is a series of steps: read the public key, pipe it over SSH, create the directory, and concatenate the key into authorized_keys. You do not need to memorize this or know what every piece represents right now; I am showing you what the manual version looks like so you get exposed to it, so it is embedded in your head and easier to make sense of when we revisit it later in the series.
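A minimal sketch of that whole key flow; user@server is a placeholder, and the manual fallback assumes the default RSA key path:

ssh-keygen -t rsa         # generate a key pair; saved to ~/.ssh/id_rsa by default
ssh-copy-id user@server   # install the public key into the server's authorized_keys

# Manual fallback when ssh-copy-id is unavailable:
cat ~/.ssh/id_rsa.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

ssh user@server           # subsequent logins no longer prompt for a password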
What matters is that you can recognize the pieces: oh, this is an SSH key file, we generated an RSA key, and it looks like it is being added to the list of authorized keys for this particular user. As long as you can make sense of what is going on, you do not need to know the exact details of everything yet; that is the main point I am trying to show you here. Once the key has been copied into authorized_keys, you can actually SSH into that server as that user with a very basic command, ssh user@server: you are just secure-shelling in as that user, that is all. Once you have set it up and copied the public key, you can log in without being prompted for a password; that is the whole idea. One caveat: this whole process requires some kind of authentication at some point, because you cannot just add a key to authorized_keys without being authenticated. It assumes you authenticated yourself with a password at some point before or during this process, and because the system then trusts you, it allows you to transfer the key into authorized_keys; once you have done that, you can access the server without entering your password. If you have never entered your password and you try to access the server as this user, it is going to ask you for a password; I do not care how well you did the rest of the steps we just talked about, if you never authenticated yourself, none of it works, and at that point you will be asked for a password. So the summary here is: password-based authentication is the default mode of verifying who you are to any kind of system; multi-factor authentication enhances it with something like a one-time code, a fingerprint, or a face scan; and public key authentication gives you stronger security using a key pair, though it still requires that at some point you entered a password to verify who you are, so that you could generate the key pair and transfer the public key into authorized_keys, after which you can enter that server without typing your password again. At some point you do need to provide a password, otherwise none of the other stuff will work; keep that in mind. Okay, now that we have talked about authentication, we need to talk about Secure Shell and configuring it. Secure Shell is a very powerful tool for remote access, and in the modern era, 2024 going into 2025, it is still one of the most powerful ways to access a Linux server specifically. It runs on port 22 by default, and for the most part the only thing required to access it is a password, unless you do other things to enhance the security, which is what we are going to talk about next. When you enhance the security of something, you are
hardening it. So we are going to harden the SSH configuration to mitigate potential threats, and these are some of the key steps. Number one, change the default port. By default SSH runs on port 22, and simply changing the port helps reduce the risk of automated brute-force attacks that target that default, because everybody and their mother, even a brand-new hacker, knows that port 22 is Secure Shell, the port for remote login, so that is what gets attacked. The way to do it is to go into the SSH daemon's configuration file (you already know sshd stands for the daemon of SSH, and it has a configuration file, /etc/ssh/sshd_config). Find the line that says Port 22; it is typically commented out because it has never been modified, so uncomment it, meaning remove the hash mark at the beginning, and change it from 22 to 2222, for example, or to almost anything else. The first thousand or so ports are well known and usually assigned to something, so you do not want any of those; find something past 1024 so it is not a common port anymore, and at that point you effectively have the rest of the roughly 65,535 ports available to be the replacement for port 22. Once you have made the change in nano, save the file and close it, and you have reassigned your SSH port. The next step is to disable root login. Not allowing the root user to log in via SSH is a very strong move, because if any hacker is allowed to log in directly as root, they get all the root permissions, and you can imagine what the rest of the problems will be after that. Disabling root login forces whoever it is, attacker or not, to log in as a standard user account and then escalate privileges for anything more. Logging in as a standard user will already be a problem for them, especially if your company enforces really strong password policies, and once they are in as a standard user they still have to find a way to escalate privileges and become an administrator or root to do the rest of what they want. So that is another really simple, very powerful move: just disable root login. Again this is done inside /etc/ssh/sshd_config, in the PermitRootLogin directive: find the line that starts with PermitRootLogin, whose default value is prohibit-password, meaning root may log in with key-based authentication but never with a password. Uncomment it, remove the hash mark, and change it to PermitRootLogin no: nobody can log in as root at all, root login is not allowed. Then you save the file and
exit the editor. The next step is to limit the SSH users: you designate which users can actually log in via SSH, and nobody else can unless they are on that whitelist. This is another very simple, very powerful control that goes miles as far as security is concerned: a handful of people can log in through SSH, and that is it. I would keep it to a small group; you do not want a bunch of people able to log in via SSH, just the admins, the specific IT administrator, the CEO or CTO, whoever the genuinely important people are, and then close it off to everybody else. If others need to access their file systems remotely, give them a separate encrypted portal, a different mechanism that runs across a VPN with its own authentication methods; they should not be coming in via port 22 or whichever port has been designated for SSH. This too is done in the SSH configuration file: find the AllowUsers directive, and if it is not there, add it yourself. It is case sensitive, capital A, capital U: AllowUsers username1 username2, where those are the actual usernames allowed to log in via SSH. Once you have done all of these things, you need to restart the SSH service with a systemctl restart command, which applies all of the configuration changes you just made. This part is very important: if you do not restart the service, none of the rules you just added to the configuration file will be enforced. In summary, you change the default SSH port by editing the configuration file, finding the Port 22 line, and changing it to your new port number; you disable root login in the same file by finding PermitRootLogin and changing it to no, simply no; and you add the AllowUsers option so you can designate the limited number of users who should be able to log in via SSH. Then, after the whole thing is done and you have saved the configuration file, you restart the SSH service so that all of the rules you just created are enforced. That is how you configure SSH for secure access.
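Pulling the whole hardening pass together, the relevant lines of /etc/ssh/sshd_config and the restart step might look like this; 2222 and the usernames are placeholders:

# /etc/ssh/sshd_config
Port 2222              # move SSH off the default port 22
PermitRootLogin no     # root can never log in over SSH
AllowUsers alice bob   # only these users may log in via SSH

sudo systemctl restart sshd   # apply the changes (the unit is named ssh on Debian/Ubuntu)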
Now we need to talk about encryption and the secure transfer of files, another very important concept: encrypting data with GPG. GPG is a tool for communicating securely, for securing communications and data transfer. It uses asymmetric encryption, which involves a pair of keys, a public key and a private key, as we already discussed in the key generation portion: the public key is used to encrypt the data, and the private key is used to decrypt it. The public key locks it; the private key unlocks it. This ensures that only the intended recipient, who possesses the private key, can read the encrypted message: only the person with the private key is allowed to decrypt the message or file and get access to its contents. A very simple concept, but again a really powerful one. To generate a key pair you run gpg --gen-key. You will be prompted to provide a name, an email address, and an optional comment, and you can also set a passphrase for additional security, which I always recommend. Depending on the version, you may also select the key type and size; the default is usually fine, but if it offers you a really large option, I recommend taking the largest key size available, because the bigger the key, the harder it becomes to break. You can set an expiration date for the key if you want, and you enter a passphrase to protect the private key, which again I recommend. Those are the steps for generating your key with GPG. In the output, GPG says it needs to construct a user ID to identify your key: the real name would be Alice, the email address would be such-and-such, the comment this-or-that, then 'You selected this USER-ID', and it asks whether you want to change the name, email, or comment or whether everything is okay; you confirm, press Enter, and it continues doing what it does. That user ID matters for the rest of what we are covering, because it is the ID assigned to the key being generated, and when we get to encrypting a file, you have to give the recipient to the command: gpg -e -r followed by the recipient's ID and then the file name you want encrypted, where -e means encrypt and -r designates the recipient. GPG encrypts that file so that the recipient's key is what decrypts it. Concretely, with somebody's actual ID it looks like gpg -e -r bob@example.com document.txt, where bob@example.com is the recipient's ID and document.txt is the file being encrypted, and once it has been encrypted, the output file keeps the original name with a
.gpg extension at the end: it still says document.txt, it just says .gpg at the end of it, implying that this has now been encrypted. That is the file that gets sent to Bob, and Bob is the only person with the key that can decrypt this particular file. Bob then runs the decryption command: instead of -e it is simply -d for decrypt, followed by the file name with its .gpg extension. (The key needs to be in his keyring, which we will get to in a moment.) When you run the decrypt command you are prompted to enter the passphrase if there was one, and that is how the file is decrypted; very simple. The full command looks like gpg -d document.txt.gpg, and by default it outputs the decrypted content straight to the console. Instead of printing to the console, you can output to a file of any kind by using the -o flag: run exactly the same command but add -o and assign a name, for example gpg -d -o decrypted_document.txt document.txt.gpg, and it decrypts the .gpg file and writes the result into that file for later use. To import a key, we use the import command: gpg --import followed by the public key file imports that public key into your GPG keyring (keyring is the proper term, not key log). So when Bob runs the import against the public key file, it goes into Bob's keyring, and then he can run the decrypt command. If you wanted to export a public key, you would run the export command with the -a option and the user ID, redirected into a public key file: replace the user ID with the person's email or key ID, and the public key file is the name of the output file. That is the file that would later be imported into a keyring using the import command. Concretely, the export takes a particular user's key and writes it into a key file; that file is what you would get to them, probably by secure copy rather than plain email, and once they have it, they import it into their keyring and can use it when decrypting a file. If you want to list the keys you have, gpg --list-keys shows all the keys inside your GPG keyring, including the key IDs, the user IDs associated with them, and the types of keys they are. In summary, GPG is a very versatile tool for securing files and communications using public and private key pairs; with the commands for generating keys, encrypting and decrypting files, and managing keys, it ensures your data remains confidential and secure.
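Here is a compact sketch of that GPG round trip; bob@example.com and the file names are the placeholders from the example:

gpg --gen-key                                       # generate a key pair (prompts for name, email, passphrase)
gpg -e -r bob@example.com document.txt              # encrypt for Bob; produces document.txt.gpg
gpg -d document.txt.gpg                             # decrypt to the console (prompts for the passphrase)
gpg -d -o decrypted_document.txt document.txt.gpg   # decrypt into a file instead
gpg --export -a bob@example.com > public_key.asc    # export a public key to a file
gpg --import public_key.asc                         # import a public key into your keyring
gpg --list-keys                                     # list everything in the keyring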
And as you saw, it is not a complicated tool to run; the process is fairly simple. You generate the key, you encrypt the file with the ID of the person who should be able to decrypt it, you export that person's key into a key file, you get them that file, they import it into their keyring, and they can then decrypt whatever file they are supposed to decrypt. And typically, if you have added a passphrase to double up the security, they will also need the passphrase: if somebody intercepts the individual key file you generated and then emailed or securely copied over, but they do not have the passphrase protecting that key file, they still cannot decrypt the original document, which provides an extra layer of security. I would also recommend sending the passphrase through a separate medium: text them the passphrase and email them the key file, for example, so two different communications travel through two different channels. If somebody is intercepting their email, they will not get the passphrase you sent by text or WhatsApp; you could even call them and have them write it down, so there is no digital trail at all for that piece of the transfer. There are a lot of ways to secure this, but I would highly recommend that every key file you generate, and every file you encrypt, has a passphrase attached so that decryption requires the passphrase as well as the key. And this is the perfect segue into secure file transfer, which we can do with SCP, secure copy, or SFTP, the Secure File Transfer Protocol. This is essentially how you would transfer the key files you just generated, as well as the encrypted document itself: both can be moved with either SCP or SFTP. SFTP is the interactive protocol for file transfer. It is the secure version of FTP, a very common protocol that was used for a very long time until people found out it was insecure, since most things travel in clear text, so the encrypted version, SFTP, was developed. It is more flexible and user friendly because it is interactive: once somebody has a login, they can navigate it very much the way they would navigate any Linux file system, and a lot of the same commands that run in a terminal work inside an SFTP session once the person is logged in. You start with sftp user@host, which initiates an SFTP session as the specified user on the remote host, and then you can put a file onto that server, and somebody else can log in, access that file, and download it onto their computer. You can run ls to list the contents of a directory, cd to change into another directory, because you really are inside a file system that is just being managed remotely, and you can
download a file, simply by saying get with the file name, or upload one with put. For example, the file we just encrypted with the commands we ran through, along with the key we generated, can both be placed into the server with put, and the person on the receiving side logs in, uses get, and downloads those files onto their local machine so they can decrypt them and get access to them. You can do get -r with a remote directory to recursively download everything inside that directory onto your computer, instead of the individual file you would fetch with a plain get, and you can do the same thing with put -r and a local directory: recursively put everything in that directory onto the server, where the other side can fetch it. Here is what an example looks like. For the user Alice on a particular IP address, you just run sftp alice@ that address, and you will be prompted to enter Alice's password; it is not just going to immediately let you run ls, that is simply not how it works. Once you run the command you are asked for the password, and if you have the right password, you are now inside that server as Alice and can list the contents of the home directory and so on. Say you are there and want to transfer something to somebody else: you first put project.zip onto the server, and it now exists inside that file hierarchy; from there somebody else can log in, or Alice herself can log in from a different computer to this exact server, and get that exact file onto whatever machine she is now on. So you download a file with get and you place one into the file system with put; it is very simple.
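A sample SFTP session along those lines; the address, file, and directory names are placeholders:

sftp alice@192.168.1.100   # start a session; prompts for Alice's password
sftp> ls                   # list the remote directory
sftp> cd uploads           # move around the remote file system
sftp> put project.zip      # upload a local file to the server
sftp> get project.zip      # download a remote file to the local machine
sftp> get -r reports       # recursively download a whole directory
sftp> put -r keys          # recursively upload a whole directory
sftp> bye                  # end the session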
Secure copy, SCP, is the quick and straightforward file transfer over SSH, essentially the streamlined version of the SFTP workflow: SFTP is logging into a file system, while secure copy just transfers a file from one host to another. Going from your computer to a remote host, the command is scp, then the path to the actual file you want to transfer, then username@remotehost followed by the path where it should land, and you can designate wherever you want it to land. What happens as soon as you press Enter is important: you are asked for that user's password on that host, so it is not just going to transfer the file willy-nilly; you still need the password, and then the file is copied to that location. It really is as simple as that: no logging into a file system, no ls, no get and put and all those extra commands. You are simply doing a secure copy, very similar to a cp you would run locally, except from your location to their location. The reverse direction, from their computer to yours, essentially just reverses the order of the arguments: scp, then username@remotehost and the path to the remote file, then the path in your current directory or wherever on your machine it should land; and again, as soon as you press Enter you are prompted for the password for that username at that remote host before the copy proceeds. What is worth noting is which path matters in each direction. When you are pushing a file out, the local path is the important one, because you need to know where the file is and which file you are sending, while the remote destination is flexible, as long as you tell the recipient where you put it so they know where it is. When you are pulling a file in, the remote path is the critical one, because you need to know exactly where the file lives on their machine, and the local destination is not as important, as long as you remember where you just put it. All of this travels across the secure shell port, port 22 or whatever port you have assigned to SSH, so the file is copied over that secure channel in either direction. It is a very powerful way to transfer files: you just need the password for the username on the host you are pulling from or pushing to, and that is basically it for secure copy.
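The two directions of SCP in sketch form; the paths, user, and host are placeholders:

# Local -> remote: the local path is the file you are sending
scp ./document.txt.gpg alice@192.168.1.100:/home/alice/incoming/

# Remote -> local: the remote path must point at the exact file you want
scp alice@192.168.1.100:/home/alice/incoming/document.txt.gpg ./downloads/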
Okay, now it is time to talk about troubleshooting and system maintenance, and the first part of this is log files: how to analyze and interpret them. The very first command we are going over is journalctl, for system logs. journalctl is a very powerful command-line utility for viewing and managing the logs generated by the systemd journal, so it applies to the more modern versions of Linux that run systemd as their init process, and it is particularly useful for system administrators and developers troubleshooting and maintaining system health on systemd-based machines. To view everything in the log, you just run journalctl and press Enter: it displays all the logs recorded by the systemd journal, chronologically from the oldest entry to the newest, including system messages, kernel logs, and application logs. To filter by boot, journalctl -b shows only the logs from the current boot session, which is particularly useful for diagnosing issues that occur during system startup, and journalctl -b -1 shows the logs from the previous boot. Adjust the number to view logs from earlier boots, -2, -3, and so on, for as far back as the journal still has boots recorded; at a certain point the journal will most likely have stopped keeping them, so you may need to figure out how many it has in store and go back as far as the logs will allow when you troubleshoot. You can filter by service name with the -u option followed by the unit name, which displays all the logs related to that specific service; obviously replace the placeholder with the real name, and if you are not sure what it is, one of the commands we went through previously, top for example, will show you the various services that are running, after which journalctl -u and the service name pulls the relevant logs. The example here is journalctl -u ssh, which displays all the logs relevant to the SSH service. For real-time log updates there is journalctl -f, which is similar to running tail -f on a log file: tail shows you the entries at the bottom of the log, the most recent entries appended to it, and journalctl -f likewise gives you live updates as entries are added and the journal grows. You can combine it with any of the other filtering options to watch the most recent, real-time additions to a particular slice of the log, so it is very useful for watching live system activity or diagnosing issues as they occur. Then you can filter by time: journalctl --since followed by a timestamp in the format you see on screen, the year, month, and day, then the hour, minute, and second, the 'YYYY-MM-DD HH:MM:SS' layout, shows every entry since that time. You do not need to supply the minute and second unless you are really trying to narrow down on a specific incident; usually you can just say 8:00 a.m. on a particular day, show me everything that has come through this log since then, and only give finer granularity when you need more
Usually, though, saying "everything since 8:00 a.m. on this day" is enough context for your investigation. To filter by priority, use -p followed by a level, from 0 (emerg, meaning emergency) down to 7 (debug). Specifying a level shows messages at that level and everything more severe: journalctl -p 3 shows priorities 3, 2, 1, and 0, while journalctl -p 0 shows only emergencies. You can also use the level names, so journalctl -p err shows all logs at error priority and higher severity (crit, alert, emerg). You can also filter by unit and time together, combining the service name with a start time; all of these options can be combined, it's not one or the other. The example we were given pairs the two: journalctl -u ssh --since "2024-11-20" shows everything for SSH from November 20, 2024 onward. You can keep stacking filters, service plus time plus priority for example, to carve the log down to exactly the information you're looking for. The full example looks like this: journalctl -u ssh --since "2024-11-15 08:00:00" shows all the log items for SSH since 8:00 a.m.
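The priority and combined filters as a sketch:

  # error priority and anything more severe (crit, alert, emerg)
  journalctl -p err
  # numeric form: levels 0 through 3
  journalctl -p 3
  # combine service, time, and priority in one query
  journalctl -u ssh --since "2024-11-15 08:00:00" -p err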
on November 15, 2024. So in summary, journalctl is a tool for system administrators working on systemd-based Linux systems; it won't work on SysVinit systems, because the systemd journal doesn't exist there. From there you have a lot of different options for viewing and filtering logs, and of course we're going to run many variations of the journalctl command when we get into the practical portion of this training series, so you get a good understanding of how to use it and all its filtering options. Until then, this serves as your little cheat sheet: view the entire log, filter by boot, filter by service, follow real-time log updates, filter by time, filter by priority, or combine all of these to create a very specific viewing rule for a very specific series of incidents. This is just a sample of the common journalctl commands for looking at system logs.

As we discussed in the Filesystem Hierarchy Standard portion, where we looked at the main hierarchy of the Linux file system, logs are stored inside the /var/log directory. This is the central location for log files in Linux: it holds system events, service activity, application behavior, security incidents, and everything in between. By analyzing these logs, sysadmins can troubleshoot issues, monitor system performance, and enhance security, and in a lot of cases you don't even have to do it manually. You can use a security appliance, or take all the logs in this location and feed them into Splunk, for instance, or connect Linux to Splunk so it gets live updates from your logs and helps you analyze the events. If you don't want to pay for something like Splunk, you can always use Wazuh, or Kibana from the Elastic Stack, which is another really good open-source tool for looking at log files. /var/log is a very useful location because it holds the logs for everything that goes on with your system.

Some of the key logs inside /var/log: first, syslog or messages, the general system log files that record a wide range of system events, including kernel messages and service activity. Which one you have is a distribution difference: syslog appears on Debian-based systems like Ubuntu, while messages appears on Red Hat-based systems like CentOS or Fedora. To watch one of these you'd use tail -f on the file rather than journalctl, because journalctl has its own separate store of logs; for an actual log file on disk, tail -f is the tool to reach for.
Running tail on /var/log/syslog shows the last 10 lines by default, the most recent entries in the file, and the same applies to /var/log/messages. Another key one is the authentication log, auth.log, which contains information related to authentication and authorization: every login attempt, whether successful or not, user authentication processes, and privilege escalation attempts are all stored here. Again you'd view it with tail -f to see the most recent additions at the bottom of the file. This is very important for spotting unauthorized access attempts and anything else security-related; auth.log is one of the key files security analysts constantly watch, especially in a large environment, because you want to see failed login attempts, repeated failures, or successful logins during odd hours of the day, anything that suggests people who shouldn't be logging in are logging in or trying to. Then we have dmesg (I think of it as "D-message"), which records messages from the kernel ring buffer. That buffer contains information about hardware components and their status: the initialization of a device, the drivers connected to your system for your printer or anything else physically attached to your computer, and any other hardware errors on your machine. Again you'd use tail to see the most recent entries. It's very useful for diagnosing hardware issues and understanding the state of kernel activity; remember, the kernel is the bridge connecting us, the users, and everything we run to the actual hardware inside the physical computer, so any kernel troubleshooting happens through the dmesg log. Finally there's the secure log, specific to Red Hat-based systems like Fedora or Red Hat Enterprise Linux. This file records security-related events, especially those involving Secure Shell and other secure services, such as the SFTP and SCP we covered in the last chapter. Anything to do with the secure versions of a service gets logged here; it doesn't apply to Ubuntu, only to CentOS, Fedora, and other Red Hat-based systems. The command to view it is one you should have memorized by now: sudo tail -f followed by the path to the log file.
That shows the most recent lines of the secure file, 10 by default or however many you designate, which are the latest security incidents: failed login attempts, changes to user permissions, and so on. On Red Hat-based systems, most of what we covered in the security portion of this training series ends up in this particular log file. So here are our first examples. To monitor the general system log, look at /var/log/syslog on a Debian-based system, or /var/log/messages on a Red Hat-based system. For recent authentication events, look at the authentication log, auth.log; anything involving authentication or authorization is there, and tail gives you the most recent lines. If you just want to read everything in the file, you can open it with sudo nano, but it's most likely a massive file and will be very overwhelming; you can also search through it with grep and the other tools we've covered (and we'll do a lot of this in the practical section), but typically you'd use tail to see the most recent authentication events. Another example is kernel messages: you can run dmesg to dump all the kernel messages, or tail the most recent entries. And then there's the secure file for security-related events on a Red Hat system, which covers anything to do with secure shell, secure file transfer, or changing permissions and ownership.

In summary, /var/log, however you want to pronounce it, is a treasure trove of information: it contains logs for everything. When we get into the practical portion I'll run an ls in it so you can see the number of log files inside; it's massive. The ones we went through are the key log files everyone should know, but there are literally dozens more in this directory, and you can get a lot of different information from them depending on what you're looking for. In many cases, individual applications you install also get their own log files in this exact directory: there might be one for MySQL, one for Apache, and a variety of other software services installed on the machine. It's an important location to keep in mind: anything involving security, security administration, or troubleshooting is done from the log files stored inside /var/log.
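A few one-liners tying the /var/log examples together; the grep pattern is just an illustration of the kind of search you might run against auth.log:

  # follow the general system log (Debian-based; use /var/log/messages on Red Hat)
  sudo tail -f /var/log/syslog
  # most recent authentication and authorization events
  sudo tail -f /var/log/auth.log
  # search the auth log for failed SSH logins
  sudo grep "Failed password" /var/log/auth.log
  # latest kernel ring buffer messages
  dmesg | tail
  # security-related events on a Red Hat-based system
  sudo tail -f /var/log/secure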
These are the specific logs to keep in mind; you've already seen all of them, so I won't go through them again, but screenshot the slide if you want a reference.

Okay, now we need to look at usage of the disk itself and any cleanup that needs to be done. We've already seen the df command, and df and du go hand in hand: together they cover everything to do with your disk. Disk usage analysis and cleanup is very important for maintaining system performance, because clutter tends to add up inside the system, and you need to stay on top of your storage, especially if you're managing a bunch of different users who all have files and media they're downloading and using. There need to be disk usage processes in place to make sure storage never becomes a problem; in my opinion this matters more for storage than for anything else, but it's a security issue as well. You want to stay ahead of it so the system doesn't crash because it ran out of storage, or slow down because there's just too much for it to take care of. The two primary tools are df and du. df, the disk/file-system command, is a command-line utility that displays information about the available and used disk space on each file system. One of the simple invocations is df -h, where -h stands for human-readable: it formats sizes into understandable units so you have a good grasp of what you're looking at in the terminal. Here's what the output might look like. For our primary file system, partition /dev/sda1 has a size of 50 GB, 30 GB used and 20 GB available, so 60% of the total space is in use, and it's mounted on the root directory. The second partition has 100 GB assigned to it, 70 GB used and 30 GB available, so 70% usage, and it's mounted on /home, the directory under the root that holds all of our users. It makes sense that more has been used there, since most likely multiple users are storing files and media inside that directory. That's what the df command helps us find.
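An illustrative df -h run matching the numbers above; the device names are examples and your output will differ:

  df -h
  # Filesystem      Size  Used Avail Use% Mounted on
  # /dev/sda1        50G   30G   20G  60% /
  # /dev/sda2       100G   70G   30G  70% /home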
du stands for disk usage, a command-line utility that estimates and displays the disk space used by files and directories. It's very similar in intent to the disk file system command, just a different command that gives you a somewhat different kind of answer. You run du -sh and give it the path to a directory: -s stands for summary, a single total for the directory, and -h is human-readable, same as with df. So the example command is du -sh /home/user, and it simply reports that 5.2 GB has been used under /home/user. If you want to look through a full directory tree, do a little sorting, and show the top 10 lines, that's what the longer pipeline does: du -h on the specified path gives the disk usage of everything beneath it in human-readable format, that output gets piped into the sort command, and then we take the top 10 lines. sort -rh sorts the output in reverse order (that's the r) based on human-readable sizes (that's the h), and head -n 10 shows the 10 largest directories; tail -n 10 would give you the 10 smallest. Typically sort works in ascending order, alphabetical A to Z or numerical smallest to largest, so to see the largest first we reverse it, and head gives us the top 10. The output might look like this for /var, the directory with all the logs in it: the log directory is understandably the largest, because so many log files take up so much space; then the cache at 1.8 GB, lib at 1.2 GB, and www, which typically belongs to Apache or another web server, holding 900 MB of data. So, some practical examples: df -h gives us the human-readable breakdown by file system, showing how much of each is used and how much is free; du -sh shows a summarized, human-readable total for a particular path; and the full pipeline looks at everything under a directory, sorts it in reverse order in human-readable format, and gives you the top 10 results.
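The du commands and the sorting pipeline as a sketch (the paths are examples):

  # summarized, human-readable total for one directory
  du -sh /home/user
  # ten largest entries under /var, biggest first
  sudo du -h /var | sort -rh | head -n 10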
As a summary: df provides an overview of disk space usage by file system, making it easy to see which partitions are filling up, line by line, via df -h; du gives detailed insight into usage by directories and files, so you can identify which areas are consuming the most space, with du -sh for a summarized human-readable total of a given directory; and you can analyze further by piping through sort in reverse order and taking the top 10 results. Those are our summaries, and we'll run all of this as we go through the practical section, so you'll get plenty of opportunities to use it; and as I say at the end of every summary section, nothing stops you from running these commands while you're watching the lecture.

So let's talk about disk cleanup tips. Disk cleanup helps maintain system performance and ensures you have adequate storage for new data, and there are essential tips and commands we'll go through to accomplish that. Disk cleanup is a very important concept, something you should be thinking about all the time as a Linux administrator. Why do we do it? First, to remove unused packages: they accumulate over time, take up a lot of space, and mess with your overall storage. They can also include dependencies that are no longer required by any installed software; those dependencies may be outdated or have been upgraded away, so they're literally just taking up space with nothing using them. You want to do this routinely, in a scheduled manner, so it doesn't run away from you. To remove unused packages on a Debian-based system you use the apt package manager: sudo apt autoremove removes packages that were installed as dependencies and are no longer needed by any currently installed package. You don't have to go through the list yourself; autoremove automatically removes anything that's irrelevant, which is very useful. On a Red Hat-based system it's the same exact thing with dnf as your package manager: sudo dnf autoremove removes unnecessary packages and dependencies that are no longer being used.
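The two package cleanup commands, for reference:

  # Debian-based systems
  sudo apt autoremove
  # Red Hat-based systems
  sudo dnf autoremove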
The second reason for cleanup is clearing temporary files, which accumulate inside the /tmp directory and take up disk space. For the most part it's safe to delete them, but make sure there's nothing you actually need in there and no application is still using any of those temporary files. My approach: if they weren't meant to be temporary, they wouldn't be inside the temporary directory; if they were important, they'd have been placed in the application's own library directory, the opt directory, or one of the other directories meant for permanent data. To clear them, use the rm command with the -rf options, which delete recursively, against the temp directory. Notice the asterisk at the end of the path: the asterisk is a wildcard character that stands for anything, so rm -rf /tmp/* removes every file and directory under /tmp recursively. This is a very powerful command that permanently deletes everything it matches, so be sure none of it is still relevant. But again, this is just my philosophy: whatever is in the temp folder was most likely going to be deleted anyway, typically on reboot or after a certain amount of time; otherwise it wouldn't have been placed there, so for the most part it's good to go.

Then we have removing older journal logs. The system journal logs everything, even purely informational entries that never need to be addressed, so over time you end up with massive amounts of log data taking up a lot of space. It's important to periodically clean these, or set up a script to do it on a schedule, so logs are only retained for, say, six months or a year, however long is relevant to you; after that window, anything older should be deleted, or transferred out of your system onto an external drive. In a lot of enterprise environments, regulatory compliance may require you to keep logs longer than six months, but you can move them from hot storage, which lives on the computer and is accessible at all times, to warm storage, an external drive you can just plug in for access, or, if they're very old, to cold storage, sitting in a warehouse somewhere until somebody physically retrieves them. It all depends on the compliance environment you're in and what it requires. For your own personal computer, though, you really don't need to hold on to log files that long; if they're older than three to six months you can just wipe them and move on. The way to do this is the journalctl command with the --vacuum-time option, removing anything older than, for example, two weeks.
That command, sudo journalctl --vacuum-time=2weeks, removes journal logs older than two weeks, and you can adjust the time frame as needed: two days, one month, whatever you like, and journalctl removes entries based on their age. You can clean up the package cache with sudo apt clean or sudo dnf clean all, which clears the space used by downloaded package files; that one's pretty self-explanatory. Then there's a locale purge, which removes unnecessary localization files for languages you don't use on the local machine; the name always strikes me as funny. You have to install localepurge first if it isn't already available, and then it purges all the unused language files from your machine. You can also find and delete large files, 100 megabytes or larger for example: the find command, which we've already been introduced to, searches from the root directory for type f, meaning a file, with a size of 100 MB or more, and once you've found those files you can delete them if they're no longer of use to you. Finally, you can analyze disk usage with GUI (graphical user interface) tools like Baobab, the Disk Usage Analyzer on GNOME, or KDirStat on KDE. You'd have to install them, and then you can browse the computer for anything larger than a certain size or older than a certain period and remove those files and folders if they're no longer applicable, or at the very least transfer them to external storage. In summary, you can use a variety of tools to clean up your disk, and doing it regularly in a scheduled manner will help you avoid storage issues: apt autoremove or dnf autoremove, recursively removing everything inside the temp folder, or journalctl with --vacuum-time set to two weeks to delete anything older than that.
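A sketch of the remaining cleanup commands described above; localepurge is a separate package you'd install first, and the find command only lists the large files so you can review them before deleting anything:

  # clear temporary files (permanent; be sure nothing in /tmp is still needed)
  sudo rm -rf /tmp/*
  # drop journal entries older than two weeks
  sudo journalctl --vacuum-time=2weeks
  # clear the package cache
  sudo apt clean          # or: sudo dnf clean all
  # remove localization files for unused languages (Debian-based)
  sudo apt install localepurge
  # list files of 100 MB or more, starting from the root
  sudo find / -type f -size +100M 2>/dev/null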
we’ve attached to this so I’ll explain them in the next slide um and then you create the compressed file which would be this in this particular case and then this would be the path to the files that you want to compress so if we want to break this down the first portion of those options would be the C flag which is to create so it creates a new archive the Z flag compresses it into a gzip so a zip folder a gzip file V would be verbose meaning the files that are being processed are going to be displayed onto the screen so this is not necessarily uh it’s not necessary for the function of tar to work it’s just going to display onto the screen what’s actually being processed and how it’s going and then the F portion is specifying the output file name which in this particular case is backup. tar.gz so this is necessary for the function this is necessary for the function this is necessary for the function if you leave this out if you leave the- F portion out what it’s going to do is it’s going to create its own name and then you would have to rename it after the fact so if you just add the- F you can designate what you want the name of it to be and then the path to the file would be the actual file and directory that you want to be archived so in this particular case it’s just one path that’s been provided and then everything inside of this path is going to be archived into this backup. tar.gz uh compressed file right so you use- C to create – Z to turn it into a gzip and then- F to designate the name and then- V is just a verbos output so that it displays everything onto the screen as it’s being processed then you have the backup of these documents so for example again you’re creating another archive in this particular case but now it’s going to be a backup that is being created of the user documents so the exact pretty much the same exact command that we just ran in this particular case in this case right here except now we’ve actually given the path that we want in this particular case and it’s going to be it looks like this right so we’re creating a backup of the home user documents and it’s being placed into this particular backup file right here which is again same exact options that are being assigned to it and it’s creating a gzip file for us so if you want to extract the archive now you’re going to do uh a similar uh options here um the flags are a little bit different and the the very end right here where you actually create the the destination the path to destination also requires a flag for it as well so if we were to uh dis uh decipher that or if we were to what’s the word that I’m looking for not decipher um not split slice I’m drawing a blank but it it’ll come to me so if we were to break it down basically uh what we’re going to do is the first piece the instead of C we have X so instead of creating a file we’re now extracting a file so the first option the first flag is the X and then the Z Would to designate that we’re decompressing the archive using gzip so this would have to be a gz extension right here in order for this to work if it was a different type of an extension for this archive file you would use a different option inside of this so the Z represents gzip when meaning that we’re decompressing a gzip compressed file the V is still verbose to display everything that’s going on and then the F would specify the input file name which in this case would be the backup. 
Now, to extract the archive, the options are a little different, and the destination path at the very end requires its own flag. If we break it down: instead of c we have x, so instead of creating a file we're extracting one. The z again designates that we're decompressing the archive using gzip; the file needs a .gz extension for this to work, and a different type of extension would need a different option. v is still verbose, displaying everything that's going on, f specifies the input file name, in this case backup.tar.gz, and then we have the capital -C flag, which sets the destination directory for the extracted files. So everything looks almost identical to creating an archive, except we're extracting: x to extract, z still meaning gzip, v for verbose, f naming the file to be extracted, and the added capital -C giving the path where everything gets extracted to. As an actual example, we take the backup we just created from the documents and designate where it should land once extracted, the home user's restore/documents directory; it's essentially the same command we just saw with the actual path and the backup file name filled in. To list the contents of an archive, you can use tvf: you're already familiar with v for verbose and f to designate the file name, and the t flag stands for list. It lists the contents of the archive without actually extracting them; there's no separate breakdown slide for this one, because the command is about as close to a breakdown as it gets. If you want to exclude files from the archive, you use a very similar set of commands with one extra piece at the end: the part of the directory tree you want left out. Notice that the path to the files is still there, so we're still archiving that tree; the only difference is that the excluded path isn't included in the archive. The option is --exclude, and you give it the full path of the directory or file to leave out of the archive being created with tar. So you can create an archive and exclude a certain portion of the directory from it. To append a file to an existing archive, you use the r option. Unfortunately, r doesn't obviously map to the word "append" in any way, so you kind of have to memorize this one, or just save the command for future use. If you're not going to grab the documents or slideshow from the course, I hope you're at least creating your own file with these commands in it; at the very least you can always come back to this portion of the video and look at the archiving section. Anyway, the append command again uses v for verbose and f to designate the file.
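Extraction, listing, and exclusion as sketches; note the -C destination directory must already exist, and the excluded path is an example:

  # extract into a destination directory
  tar -xzvf backup.tar.gz -C /home/user/restore/documents
  # list the contents without extracting
  tar -tvf backup.tar.gz
  # archive a tree but leave one subdirectory out
  tar -czvf backup.tar.gz /home/user/documents --exclude='/home/user/documents/tmp'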
Here f designates the archive file, and the r flag appends something to it: it adds additional files to an existing archive, with r standing for append. You give it the existing archive and then the path to the additional file you want appended. This is actually very neat, because you can't easily add to a compressed archive after the fact, but a plain tar archive accepts appends, which is very handy when you just want to keep the same archive file and add things to it instead of creating new archives every time. For example, instead of creating new archives for new versions of a log file in /var/log, you keep adding the new log files to the same archive as time goes on: one archive file for the authentication log, say, and any time you do a backup, you append the newly created log file to that same auth backup archive. Again, very handy and very useful. Here's an example we'll walk through to create a backup: it's very similar to what you've already seen, with all the same flags, except the backup file has the date attached to its name to give us an understanding of what this backup represents, and the documents folder is what's being backed up; so it's a backup dated 11-15-2024 containing all the content of the user's documents. Then we have the extraction of that backup: the same exact archive file, except now we're extracting it, using the capital -C flag to designate where we want it extracted to and the x flag to designate that we're extracting instead of creating. The rest is exactly the same: it's a gzip file, it's verbose, you designate the file name you're working with, and it gets extracted into the new location for those restored documents. The third example lists the contents of the backup file (this probably should have been example number two, listing before extracting): t stands for list, so you're listing the contents of this tar archive without extracting them. And that's it. In summary, tar is obviously very versatile and powerful, as you just saw from the examples; by mastering just a few key commands you can efficiently back up and restore files, and honestly create a lot of really good scripts. Now that we know we can append to an existing archive with the r flag, you can schedule a task with crontab and cron jobs so that every time the backup script runs, it appends to the same archive.
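One caveat the slides don't mention: tar can only append with -r to an uncompressed .tar archive, not to a compressed .tar.gz, so keep append-style backups uncompressed. With that in mind, a hypothetical sketch of the scheduled append idea (the path and schedule are made up):

  # append a file to an existing uncompressed archive
  tar -rvf auth-backup.tar /var/log/auth.log
  # crontab entry: run the same append nightly at 2:00 a.m.
  0 2 * * * tar -rvf /backups/auth-backup.tar /var/log/auth.log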
The script appends the contents of whatever you're backing up into the same exact archive without creating a new archive file each run, which I honestly love. As I'm going through this I'm asking myself why I haven't already created a script for that, so you'd better believe I'm going to write one that uses tar to append my own machines' logs into a single archive file. Super handy.

Incremental backups bring us to rsync, our next piece. rsync is a very useful, versatile, and efficient utility for synchronizing files and directories between different locations. It's particularly well suited for backups because it transfers only modified files, reducing the time and bandwidth required for the operation: it detects which files in a given location have been modified, and backs up only those, which again is very useful. The key features: incremental transfer, meaning only modified portions of files are transferred. Think about what it means to sync something: if one new addition or modification has been made, only that gets re-synced. It's very similar to iCloud, where your iCloud storage detects new additions on your phone and backs up just those into your backup, rather than everything you previously had. It's also versatile: it can be used for local backups as well as remote backups over secure shell, which is fantastic. And it preserves file attributes: it maintains permissions, timestamps, and the other attributes of the original files as it synchronizes them to the new backup location. Very useful little tool. The basic command structure is rsync with the -av flags, then the source location, then the destination directory: a enables archive mode, which preserves permissions, symlinks, and everything else about the source directory and all of its contents, and v is verbose, providing detailed output of the synchronization process on your screen. You give it the source path and the destination path, and it does exactly what it should; a very simple command, not complicated to understand at all. As a basic example using actual directories, it's the same exact command with the archive and verbose flags: this user's /home/user/documents goes into the backup documents directory, and that's pretty much it for a basic local sync.
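The basic form as a sketch; note the trailing slash on the source, which tells rsync to copy the directory's contents rather than the directory itself:

  # archive mode (preserve attributes) with verbose output
  rsync -av /home/user/documents/ /backup/documents/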
Syncing over SSH, the variant I'm most interested in, keeps the same initial flags, archive and verbose, but adds the -e flag, which specifies the remote shell to use, in this case the SSH protocol. Then comes the source directory, and then something very similar to what we did with secure copy: the user at the remote server, which would probably be an IP address, then a colon and the path on the destination where this backup should land. When you run this, you'll need to provide that user's password, unless you've generated a key and added your public key to the remote user's authorized keys (that's the term I was reaching for: the store of trusted keys on the remote machine), which saves you from entering a password every time. If you want to run this as part of a scheduled script in your cron jobs, you'll definitely need to take the steps we took when we generated keys, so you have passwordless SSH authentication and the job can run every single time without a password prompt, backing this directory up to the destination. The backup itself isn't complicated; it's actually quite easy. You designate the SSH protocol, supply the credentials and the destination location, and if you've used key-based authentication you won't be prompted for the password; from there it runs like clockwork. Syncing over SSH is a very powerful tool. Here's the concrete example: alice at a particular IP address, with the backup going into the backup documents directory for the alice user at that IP. This assumes we've already set up passwordless, key-based authentication with Alice's machine, or that we know Alice's password; as we run it, we'd enter her password and it would back everything up into that location under Alice's profile on that server. A very useful little command, rsync.
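The SSH variant, sketched with a placeholder user and address; with key-based authentication set up, no password prompt appears:

  # sync over SSH to alice's backup directory on a remote host
  rsync -av -e ssh /home/user/documents/ alice@192.168.1.50:/home/alice/backup_documents/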
Next, backing up with deletion. The --delete option ensures the destination directory mirrors the source by deleting files from the destination that no longer exist in the source. This is very useful. Say you had 100 files, you've updated 50 of them and deleted the other 50; the destination still has the original 100 from your earlier backup. When you run with this flag, the two locations are synced so they match exactly: all the files deleted from the source, because you no longer needed them, are also deleted from the destination, and only the files that exist in the source will exist in the destination. You can of course combine this with the other options, including the -e ssh form; just add the --delete portion and anything that no longer exists in the source location is also removed from the destination, ensuring you're not holding on to old files that are no longer relevant. Here's what it looks like with --delete and actual paths: /home/user/documents and the backup documents directory; it synchronizes the contents of the source with the destination, deleting any files in the destination that are not present in the source. So, our examples: rsync with the -av flags, archiving with verbose output, taking the contents of one directory into a projects backup folder; another example over the SSH protocol, adding the -e ssh flag and sending the contents of /home/user/projects to alice's profile on a particular server under the backup projects location; and the --delete option, which can also be added to the SSH form (rsync -av --delete -e ssh, and so on), ensuring the destination's contents match the source by deleting anything at the destination that's no longer at the source. We also have a couple of additional options. First, --progress, which displays detailed progress information for each file during the transfer; it's similar to verbose, except it shows a percentage-style update as each file transfers (we'll review what this looks like in the practical portion). Then there's the preserve-hard-links option, -H, which preserves hard links found in the source directory. Remember, a soft link is essentially a shortcut that points at the original file, while a hard link is another real directory entry for the same file. By default rsync treats hard-linked files as separate, independent files at the destination, so without -H the linkage is lost; -H detects hard links in the source and recreates them at the destination. If you want the destination to faithfully preserve everything in the source, I'd include it every single time you run rsync; it feels like an important option to always have on. Then we have compressing data during the transfer, which is also very useful: the -z option compresses file data during the transfer to reduce bandwidth usage.
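Pulling the extra options together into one hedged backup command (paths and host are examples):

  # mirror the source, show progress, keep hard links, compress in transit
  rsync -avzH --delete --progress -e ssh /home/user/projects/ alice@192.168.1.50:/home/alice/backup_projects/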
One note on -z: the compression happens in transit only, and the files arrive decompressed at the destination, so it saves bandwidth rather than destination disk space. That makes it most valuable when you're transferring over a network, which is exactly the backup scenario we've been describing, and since the received files remain immediately accessible with no extra decompression step, there's little downside. For network backup jobs, I'd say -z is another flag worth running every single time. In summary, rsync is very useful for incremental backups, because it updates the destination location only with the changes that have been made in the source location. It's flexible, with a range of options that make it suitable for a variety of backup scenarios, both locally and over the network using something like SSH: the basic sync command we went through, the SSH sync command, and the --delete version that removes anything in the destination that no longer exists in the source. A very, very useful tool.

Which brings us to system performance monitoring: monitoring the CPU, memory, and running processes using a tool called top. top and htop both perform essentially the same function: they list the processes and services currently running on your computer and show how much CPU each is using, how much memory (RAM) each is using, the PIDs, and other details about each process. It's a dynamic list, meaning it updates in real time: if something starts taking more memory than the next thing, it moves up the list. It's a live view, not something static that you run once and read fixed line items from. You literally just run top and press Enter, and it provides a dynamic, real-time view of what's going on in the system: the processes running and the amount of resources each one is using. It's included in most Unix operating systems, Linux obviously, and it's actually on macOS as well; run top there and it shows everything running on the Mac in a live view, along with how much resource each process is using.
This is used all the time for system administration as well as security. If a computer or a server slows down drastically and you don't know exactly what's going on, you run the top command to see what's running and how much resource each thing is taking, and more often than not you reverse-engineer the problem not from the name of a service but from its resource usage: if something is using a lot of the RAM or a lot of the CPU, you say, okay, this seems kind of funky, what is this particular process, and you start the rest of your investigation from there. The command itself is literally: type top, press Enter, and it starts displaying output, updated regularly. It typically refreshes once per second, but you can change the interval to 5 or 10 seconds if the one-second update is a bit much, which in my opinion it is, because it's hard to read the entries as they jump around; it'll still update continuously, just every 5 seconds instead of every second. Navigating it is genuinely useful to understand. You can interact with top while it's running, because once you type top and press Enter, the output stays on screen until you exit. Press P while top is running and everything is sorted by CPU usage; think P for processing, as in Central Processing Unit, making it easy to identify what's consuming the most CPU. Press M to sort by memory, which is easy to remember: M for memory, your RAM, sorting processes by how much RAM each is currently using. Press k and you're ordering top to kill a process: it prompts you to enter the PID of the process you want to kill. So say that using P and M you've determined one specific process is taking up a lot of resources; press k, provide its PID, and the system kills it immediately, hopefully freeing those resources. Sometimes you may need a force kill, which is a different command, but k by itself will kill a process. To quit top you simply press q and it exits the interface. Remember: when you type top and press Enter, you're inside an interactive interface; you can customize the display, get the information you need, and kill processes, and when you're done doing whatever you need to do, you press q and it exits the top interface.
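top takes no thought to start, but the refresh interval and interactive keys are worth pinning down; -d sets the delay between updates in seconds:

  # start top with a 5-second refresh instead of the default
  top -d 5
  # while top is running:
  #   P  sort by CPU usage
  #   M  sort by memory usage
  #   k  kill a process (you'll be prompted for its PID)
  #   q  quit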
Next is htop, the enhanced version of top. It offers a more user-friendly, colorful, and interactive interface, but it provides essentially the same service and utility that top does; instead of a bunch of black-and-white entries on the screen, you get color-coded entries, which improves the user-friendliness. It does not come pre-installed the way top does, so you'd install it first with sudo apt install htop or sudo dnf install htop, and then run htop and press Enter to start the tool. The interaction is very similar to top, just with an enhanced user experience and easier navigation. Viewing processes is the main screen: it gives you the list of processes, similar to top, but with more detailed and accessible information.

To sort by a column you press F6. If you don't have an F6 key on your keyboard, there's thankfully a fairly simple alternative: use the left and right arrow keys to move through the column headers at the top of the interface, and once you've highlighted the column you want to sort by, press Enter. That's it. I do realize not every keyboard has the F keys; a lot do — my current Bluetooth keyboard does — but my MacBook doesn't expose them directly, though you can press the Fn key to bring up the F keys above the number row. Sometimes you don't even have that option and you have to work around it, so: if you have the F keys, great; if not, arrow keys plus Enter does the job.

To kill a process in htop, you select the process using the arrow keys and press F9. The signal to send in this case defaults to SIGTERM, the terminate signal, so F9 lets you interactively terminate the process. If you don't have an F9 key, you can use the k key instead: use the arrow keys to move up and down the list to highlight the process you want to kill, then press lowercase k to open the same signal menu that F9 opens. You'll see a list of signals you can send to end the process. The default, SIGTERM, is signal number 15 (terminate); press Enter to send it, and it will attempt to gracefully terminate the process. If that doesn't work — if the process isn't dying, so to speak — you can forcefully kill it with SIGKILL, which is signal number 9. So SIGTERM, signal 15, attempts a graceful kill; SIGKILL, signal 9, forces it, and boom, you're done.

Then there's F3, the option to search for a process. It lets you enter the name, or part of the name, of a process you want to find, and once you've found it you can kill it, force-kill it, or get other information about it. The alternative if you don't have an F3 key is the forward slash: press / to open the search prompt at the bottom of the interface, enter the process name or part of it, press Enter, and use the n key to move to the next match if there are multiple instances of the search term. That's how you search for processes by name without F3. Next is quitting htop, which is done with the F10 key, and the alternative here works exactly the way top did: just press q while htop is open and it immediately exits the application.

So, in summary, both top and htop are very useful tools for monitoring system performance, each with its own strengths. top provides a basic yet powerful real-time view of processes and resource usage, with commands like P to sort by CPU, M to sort by memory, and q to quit. htop is the user-friendly version of top, with enhanced features like interactive process management and intuitive search navigation; you can drive it either with the F keys, as we saw (F6 to sort, F9 to kill, F10 to quit), or with the keyboard alternatives we just covered (arrow keys plus Enter, k, /, and q). It still offers essentially the same usage that top does, except it's friendlier, with color coding in the output
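Here's a condensed cheat sheet of that htop setup and the keys we just covered, including the fallbacks for keyboards without F keys; the package names assume a Debian- or Fedora-family distribution:

    sudo apt install htop    # Debian/Ubuntu
    sudo dnf install htop    # Fedora/RHEL
    htop                     # launch it

    # Keys inside htop:
    #   F6  (or arrow keys + Enter on a column header)  choose the sort column
    #   F9  (or k on a highlighted process)             send a signal
    #        SIGTERM (15) terminates gracefully; SIGKILL (9) forces the kill
    #   F3  (or /)                                      search by process name
    #   F10 (or q)                                      quit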
, better ways to interact with the results, and more options for doing so. That's it for top and htop, and now we can move on to free. free is another simple but powerful command-line utility that displays information about the system's memory usage, including both physical memory and swap space, and it's vital for monitoring system performance and diagnosing memory-related issues. The command would be free -h, where the -h option stands for human-readable: it formats the output so it's easier to read, using units like kilobytes, megabytes, or gigabytes. As an example output: say we've run free -h and we see the total memory available in RAM as well as the swap space, with 3 GB currently used, 8 GB free, 239 MB shared, 4 GB in buffers and cache, a total of 11 GB actually available after all of those things are considered, and nothing being used in swap — so that one is all good.

We've actually reviewed this breakdown already, but we're going to review it one more time because we're now in the troubleshooting section of the training series. You've probably noticed repetitions of various concepts; we've looked at some of them at least twice, mainly because they're relevant to multiple things, not just one use. free can be used for troubleshooting, and it can also be used for swap monitoring and memory monitoring in the context of partitioning and file systems, so the same tool serves multiple purposes, as we've established. The columns are: the total amount of memory or swap space, the used amount, the free amount, the shared memory used by the temporary file system, the buffer/cache (the memory used by buffers and the cache), and the final available memory after all of those are considered, without the swap space. As another example of checking memory usage — the same command, just different output — you might see 8 GB total, 2 GB used, 500 MB shared, 1 GB dedicated to buffers and cache, and 6 GB available after everything is considered.
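To make that first example concrete, here's roughly what a free -h readout with those figures would look like; the numbers are illustrative, matching the example above rather than any real machine, and the totals are inferred from the example rather than taken from a real capture:

    $ free -h
                   total        used        free      shared  buff/cache   available
    Mem:             15G        3.0G        8.0G        239M        4.0G         11G
    Swap:           2.0G          0B        2.0G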
If you want to monitor what's going on in real time, there's a very useful command that we didn't cover before: watch. The watch command is separate from free — it's not something included under the free tool — and you can apply it to a variety of different command-line tools. You'd run watch -n 1 free -h, where -n 1 means run this every 1 second (if you did -n 5 it would be every 5 seconds, and so on). So this runs free -h every single second, providing a real-time version of the output. It's kind of like running top, except instead of top updating itself, we've cheated the system and we're re-running free -h every second so we can watch its output live.

If you want to look at detailed memory information, you can use the cat command against /proc/meminfo. This is not part of free either, but it gives you detailed information about memory usage directly from the proc file system; it technically shouldn't fall under free, but it was included in the course content, so this is how we'll look at it. Running cat on that file displays detailed memory information straight from the file system, similar in spirit to the results you'd get from free.

As for units: free -m displays memory in megabytes, -k displays it in kilobytes, and -g displays it in gigabytes. In my opinion you should just do free -h, because it determines by itself what the best measurement is and gives you the output in the associated metric: if it's less than a gigabyte it gives it to you in megabytes, if it's less than a megabyte it gives it to you in kilobytes, and so on. You don't need to run those unit flags, but if you wanted to, now you know: k for kilobytes, m for megabytes, and g for gigabytes. There's also free -b, which displays memory in bytes — it doesn't even go to kilobytes, so that's the smallest unit we can feed into free — and free -l, which includes statistics about low and high memory usage.

As a summary, free is a very straightforward and essential tool for monitoring memory and swap usage on a system. The -h option provides human-readable output, giving you the measurements in whatever it determines to be the best metric — kilobytes, megabytes, gigabytes, and so on — along with the total, used, free, shared, buffer/cache, and available memory after all things are considered. For real-time monitoring you'd use watch -n 1 so the command repeats every second — it runs free -h every single second and you see a live update — and you can run the cat command on /proc/meminfo for detailed memory information, which isn't strictly part of the free command but falls into the same overall conversation we've had with top, htop, and now free.
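Here are the variants from this section in one place; note that /proc/meminfo is a file read, not part of free itself:

    free -h               # human-readable units chosen automatically
    watch -n 1 free -h    # re-run free -h every second for a live view (-n 5 = every 5s)
    cat /proc/meminfo     # detailed memory stats straight from the proc file system
    free -b               # bytes
    free -k               # kilobytes
    free -m               # megabytes
    free -g               # gigabytes
    free -l               # include low/high memory statistics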
vmstat is another tool. It stands for virtual memory statistics, and it helps you look at system statistics like memory usage, CPU performance, and input/output operations, which helps the administrator — which is you — monitor and troubleshoot system performance effectively. You'd run all of these commands to get a variety of different information, or to catch something that maybe wasn't caught by htop, for example, but that you can find with vmstat. The basic command would be vmstat 1 5: the first number is the interval, so data is printed to the screen every second, and the second number is the count, so it iterates five times. You get five entries printed to the screen — a snapshot of system activity over the specified interval.

The output is structured in several column groups: the processes, the memory, the swap, the input/output, the system, and the CPU, each with its own data under an overall column heading. Under memory you have the swap memory used, the free memory, the buffer, and the cache. Under swap you have swap in and swap out (nothing in use there in our example). Under I/O — basic input, basic output — you might see 2 blocks coming in and 15 going out. Under system you have the in and cs figures, which we break down in a moment, and then the CPU columns with the data on how the CPU is being used.

Here are the key fields. Under procs, r represents the number of processes waiting for run time (runnable processes), and b represents the number of processes in uninterruptible sleep, i.e., blocked. Under memory, swpd is the amount of virtual memory used (the swap space), free is the amount of idle memory, buff is the amount of memory used as buffers, and cache is the amount of memory used as cache. Then the swap column has si and so: the memory swapped in from disk, in kilobytes, and the memory swapped out to disk, in kilobytes. Say your RAM can't handle anything more than 8 GB: when usage hits 8.1 GB, that 0.1 gets swapped out of memory to the disk, and swapping in is the reverse, pulling data back from disk into memory. Then IO is input/output: bi is the blocks received from a block device — input coming in, in blocks per second — and bo is the blocks sent to a block device, output going out, in blocks per second.

Then you have the system columns: in is the number of interrupts per second, including the clock, and cs is the number of context switches per second — switching from the text editor to the internet browser, from the browser to a video player, and so on; switches between different applications or processes happening per second. Finally there's the CPU portion, which gives us us, sy, id, wa, and st. us is the time spent running non-kernel code: user time, the stuff being done by the user itself. sy is the time spent running kernel code: the stuff being run by the system, by the kernel, in the background. id is idle time. wa is time spent waiting for input/output, which is a little different from idle — idle means there's absolutely nothing going on, while I/O wait means the computer is up, not asleep, not idle or hibernating, but still waiting for something to happen. And st is time stolen from a virtual machine.

Just to clarify that last one, because it was a little above my head as well: when you see time stolen in vmstat, it refers to CPU steal time. CPU steal time is the percentage of time a virtual CPU inside a virtual machine is waiting for resources because the hypervisor is allocating those resources to another virtual machine on the same physical host. It's the time when your VM's vCPU is involuntarily idle because it can't get the necessary CPU from the physical machine. Say you have a physical host with two virtual machines on it, and each VM requires a certain amount of CPU, a certain amount of processing power. If there isn't enough CPU to allocate to both machines, and one machine is taking a lot of the processing power, then the second machine is getting its processing time stolen. Steal time means processing time has been taken from one virtual machine because another is demanding too much. It happens because the hypervisor has to distribute the physical CPU resources among multiple VMs, and if there are more VMs, or higher CPU demand, than the physical host can handle, some VMs will experience CPU steal time.

As an example, you could just run vmstat 1. We've already established that the first number is the number of seconds it waits before refreshing the data, so this continually updates the system performance data every second until it's interrupted with Ctrl+C. If we did vmstat 1 10, it would repeat that iteration 10 times and then stop automatically by itself. If you ran vmstat with no arguments, it's a single snapshot: it won't repeat itself, it just gives you one picture of system performance at the moment you ran the command. And vmstat 5 10 updates every 5 seconds for 10 iterations.
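Pulling the vmstat usage together, here's a minimal sketch; the sample output is illustrative, mirroring the example figures above rather than a real capture:

    vmstat           # one snapshot of current activity
    vmstat 1 5       # sample every 1 second, 5 iterations, then stop
    vmstat 1         # sample every second until you press Ctrl+C
    vmstat 5 10      # sample every 5 seconds, 10 iterations

    # Sample output (illustrative):
    # procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
    #  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
    #  1  0      0 812344  94321 412008    0    0     2    15  120  240  5  2 92  1  0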
That's the word I was looking for — iterations, not increments: it does 10 iterations, one every 5 seconds, displaying the performance data each time. And this is our summary for vmstat: another essential tool for monitoring and analyzing system performance that provides detailed statistics on CPU, memory, and I/O (input/output) operations. When administrators understand and utilize it, they can effectively identify and troubleshoot system performance issues. It's used in conjunction with top, htop, free, and so on — these are all in your suite of tools, your arsenal, when you're troubleshooting system resources: the amount of CPU being used, the amount of RAM being used, the processes that are running, and how much each of those processes is taking. As you saw, you get different results from each of these tools, so together they give you more context about what's going on in any given machine and let you figure out the overall picture. vmstat, as the name implies, gives you virtual memory statistics; free gives you the free memory that's available; top gives you the processes that are running and the amount of memory or RAM they're using. All of these things serve a different purpose, and they can be used in conjunction with each other to give you a full picture, a great idea, of what's going on inside your computer's environment or your network's environment.

All right, now we need to talk about virtualization and cloud computing. This is not going to be on the Linux+ examination — or at least not the version we mentioned at the top of this training series — but they do upgrade and update their certification exams relatively frequently, so for whenever this does get included in the examination process, I want you to be aware of these concepts. Even if they're not covered on the exam, understanding what virtualization and cloud computing are makes you a much better administrator. This is in no way limited to Linux, but it's a very important topic, especially when we're talking about deploying Linux, and it's adjacent to things we actually covered during the installation process in chapter 3, when I showed you how to boot from a USB drive, install onto a USB drive, and set up a dual boot so you can run multiple versions of an operating system, or different operating systems, on the same machine. Cloud computing is a little bit different, but it's in the same ballpark.

As an introduction to virtualization, the first thing to understand is this: think of your computer as a machine — the computer I'm recording this on, or even your cell phone. Then we can talk about a physical version of that machine or a virtual version of it. You could have a Windows machine on a physical computer, or you could have a Windows machine that is a virtual computer, a virtual machine, and the way we create the virtual version is through virtualization. Virtualization is simply the technology that allows multiple virtual machines to run on a single physical machine, and that physical machine could be something as simple as somebody's home computer, or a mid-sized server, or the infrastructure at Amazon AWS, Google Cloud, or Microsoft Azure. It's the same overall concept; it's just different sizes of physical machines ending up hosting the variety of virtual machines. This approach improves the use of resources and gives you isolated environments for a variety of purposes — running different applications, operating systems, or test environments — and virtualization is very, very powerful for all of those.

There are a few key concepts to really wrap your head around. As I already mentioned, virtual machines are basically simulations of physical computers: a software-based version of a physical computer that could be booted from a USB drive, booted from a cloud service provider, or booted from the actual computer you're watching this video on. It's a simulation of an actual computer running Windows, macOS, Linux, and so on. Each virtual machine runs its own OS and the applications inside that OS, and they're all independent from each other even on the same physical host; you can have one physical host with a hundred VMs on it, or a thousand VMs in the case of a mega server that an enterprise keeps in its own building or rents from a cloud service provider. There is isolation, meaning the failure or compromise of one VM will not affect the others. If you want to test something in an isolated environment, you can boot up a virtual machine, test on it, make sure everything's good, and if it crashes, who cares — it's that one singular virtual machine, and you can take it down as quickly and easily as you put it up, then keep testing until things are ready to deploy to the rest of your enterprise environment. You also isolate these machines from each other for security purposes, so that one machine crashing or being compromised doesn't affect the rest of your environment and your network. That's the overall understanding and concept of virtual machines.

Now, there's something that helps you create, deploy, and manage virtual machines, and that is the hypervisor. The hypervisor is software, or firmware, depending on the type of hypervisor you're using, but either way it serves the same purpose: it helps you create, manage, deploy, and take down virtual machines. With the hypervisor you allocate the amount of resources designated to each of these virtual machines and keep them separate, isolated from each other. That's what the hypervisor does: it boots them up, deploys them, allocates how much of the resources each one gets on the physical machine they're running on, and keeps them all separate from each other.
The first type of hypervisor is known as the bare-metal hypervisor, and I'll give you a visual shortly so you understand the difference between the two types. Type 1, the bare-metal hypervisor, runs directly on the actual physical hardware — the CPU, the motherboard, the RAM, the power supply, everything physically required to build a computer. The hypervisor sits on top of that, and from the hypervisor you boot your various virtual machines. It doesn't require a host operating system (I'll explain what that means when we get to the second type); it sits right on top of the physical hardware, and from there you boot all of the operating systems, applications, and everything else you'd run inside your environment, through this particular hypervisor. This is the higher-performing kind of hypervisor, because it sits directly on top of the physical hardware, and it's very common in enterprise environments — environments with hundreds if not thousands of employees who need machines, who need computers, provided to them. The examples here are actual bare-metal hypervisors: VMware ESXi, which is very widely used in enterprise environments and supports a lot of features for managing virtual machines; Microsoft Hyper-V, another powerful hypervisor that's included with Windows Server and provides comprehensive virtualization capabilities; and Xen, the open-source bare-metal hypervisor known for scalability — meaning you can use it in an enterprise environment with a limited budget — which is still very secure. It's a notable mention: even though it's open source, it's still very useful in a large production environment.

Then we have the Type 2 hypervisor, known as the hosted hypervisor, which runs on top of the actual operating system on your computer. Imagine the Mac computer I'm running this on: the computer acts as the host, we install one of the hypervisor applications on it, and with that hypervisor I deploy multiple virtual machines that all use the same resources inside my MacBook. If my MacBook has 8 GB of RAM, all of the various virtual machines rely on that 8 GB of RAM in my actual computer. That's what it means to have a hosted hypervisor: the computer serves as the host, the VMs use that computer's resources, and the computer already has its own operating system — it could be a Mac, it could be Windows, anything else — running underneath. On top of that already-running OS you have the hypervisor that helps you deploy the various virtual machines, applications, and so on. This is mostly used for desktop virtualization or smaller environments; it's not something you would do for a thousand employees. It's typically a much smaller-scale way of running a hypervisor, a virtual type of environment, and it depends on the resources of that main host computer.

VirtualBox is one of the most common ones; you've probably heard of it, because it's an open-source hypervisor used a lot to deploy guest operating systems (your computer is the host; everything running on top is a guest OS, a guest VM). It's very, very easy to deploy with VirtualBox: you just download it and start deploying, and as long as you have the ISO image, as we went over in the installation process in chapter 3, you can run as many virtual machines as your computer allows based on its resources, its hardware, and its processing power. VMware Workstation is the commercial hosted hypervisor, and it does basically the same thing: it runs and manages multiple VMs on a desktop computer. Of course, in a commercial environment you'd need a much faster host computer than something running 8 or 16 GB of RAM with a basic CPU; it still needs to be a strong enough machine if it's going to virtualize over a dozen VMs, for example. The last one is Parallels Desktop, which is designed for macOS specifically and allows you to run Windows and other operating systems on top of a macOS computer; macOS needs its own dedicated hypervisor, and Parallels Desktop serves as the hypervisor that works on top of a Mac.

There are a lot of advantages to doing this. I'm going to go over three, maybe four, main categories, but depending on who you ask there's an abundance of advantages to virtualization. The first is efficient use of resources, along with how quickly you can scale up or scale down. Because you can reuse the same physical resources, it's efficient: you're not spending a lot of money to buy 25 computers. You can launch 25 virtual machines from the same hardware and connect them to monitors, keyboards, and mice — terminals (sometimes called dummy computers) that don't have any real hardware of their own, just connections to the main physical machine, which is one very powerful central computer holding all the physical hardware. Twenty-five different employees can use those 25 terminals, and when you hire somebody you don't have to buy a new computer, and when someone leaves you don't have to worry about selling one: you just launch or remove a VM on that same physical resource and hand over a keyboard, a mouse, and a monitor. It's much more cost-effective; the cost of hardware is way less with virtualization than buying 25 individual computers. That's one piece, and the other piece is the scaling up and scaling down.
If half of your people leave, you don't need to worry about getting rid of half of your computers: you just delete half of the virtual machines currently deployed on your hypervisor, and it's as easy as selecting them and clicking delete. Scaling up or down is simply not complicated with virtualization.

The next advantage is isolation and its connection to security, and these two pieces go hand in hand. Because each virtual machine operates independently — it's actually operating as its own computer — failures don't spread: if one VM fails, it doesn't affect the rest of the network; you just reboot it or launch another one and you should be all good. And if one person gets hacked, then as long as the usernames and passwords across your network are strong, whatever ransomware or virus gets installed on that particular VM will not affect the rest of the computers, because that VM is literally isolated as its own machine. It's exactly like one physical computer getting hacked in a network of physical computers: the rest of the physical computers aren't automatically affected, and it's literally the same thing here, because each of these VMs is essentially a separate computer by itself, isolated in its own environment. If it gets hacked, if it crashes, if anything happens, it won't affect the rest of the computers on your network. So isolation and security go hand in hand: when you can isolate a compromised machine from the rest of your network, you're protecting the rest of your network.

Then there's the flexibility and agility of testing, deploying, and developing. You can deploy quickly: if somebody new comes in, you literally go through the same installation process on your hypervisor using the same ISO, deploy a new virtual machine, connect it to one of the terminals that already exists, and give that person their own username and password. If the employee who normally uses that terminal isn't working that day, or you're running separate schedules, different people can run their own VMs from the same exact terminal, as long as each has their own dedicated login credentials. Testing and development is another easy win, again because of the isolated environment: if you want to test a new product launch or a new software launch and make sure it doesn't affect the rest of your network, you do it on a virtual machine, test it, make sure everything is all good, do any extra configuration or development you need, and once all the t's are crossed and the i's are dotted, you deploy it to the rest of your environment and make sure all the other computers have it. It's very flexible and very agile: you can scale up or scale down as needed, very quickly, without having to buy new machines or sell the current ones, and if you add 10 new employees for your night shift, they can all use the same exact terminals with their own login credentials. It's very, very simple and easy to do.

And finally, there's the disaster recovery portion. Go back to the concept of having 25 physical machines: if you wanted to take a snapshot, a backup, of 25 physical machines, you'd have to plug each machine into an external hard drive and run whatever backup software you use on each one. Instead, you go to your hypervisor, select all of the machines, run the backup within the hypervisor, go to lunch, and come back to find the contents of all 25 virtual machines backed up to your external hard drive. And if you think about it — going back to the file system hierarchy we reviewed at the very beginning of this training series — every computer is technically just a massive file system: the root directory at the top, a bunch of primary directories inside it, those extending into more directories, and those containing files and folders, and so on. When you think of a whole machine as basically one large file tree, backing it up is essentially as simple as copy-pasting, and that's what it's like in a virtualized environment: you click one little line item inside the hypervisor that represents a computer, and all of the contents of that VM get backed up through the hypervisor. It's a much, much simpler process than backing up 25 physical machines, and as far as convenience is concerned, it's probably one of the biggest advantages of using virtual machines. If there's any disaster — you lose power, the building burns down — all of these things are stored inside one virtualized environment that can easily be accessed, especially if you've developed redundancies, which are very important in security and disaster recovery, with multiple locations connected to that same hypervisor, that same virtual environment. Then if one location goes through an earthquake, you're still all good: you can still launch all of those virtual machines, because the redundant sites are connected to the same virtual environment. A very, very powerful concept.

As a summary, what we really need to understand is that there are two types of hypervisors. If you know what virtualization is and you understand the concept of a virtual machine, you need to understand that there's Type 1 and Type 2: Type 1 sits on top of the physical infrastructure, and Type 2 sits on top of an already-running computer, which would be the host computer, and from there you deploy your virtual machines. But I do want to show you this visual, because I really believe visuals help embed concepts in your brain and drive the point home, so let me show you this real quick: a very simple visual representation of the two types of hypervisor.
On the left side is the Type 1 hypervisor. At the bottom is the hardware: the CPU, the motherboard, the RAM, the graphics card, the power supply, and everything else required for a computer to run. On top of the hardware sits the hypervisor — there is no operating system in between. That's the Type 1, bare-metal hypervisor, and from it you launch all of the various web applications, applications, operating systems, and terminal computers. It's fairly simple, and you can see why this is the more efficient, more performance-driven design: there's no operating system between the hardware and the hypervisor that's deploying those operating systems. The Type 2 setup would be my laptop, for example, or your computer: it has the hardware, then on top of the hardware a Windows or macOS operating system being used as the main computer, and then you've downloaded the hypervisor as a piece of software, and that hypervisor helps you launch the various operating systems, the various virtual machines. And it really doesn't go much deeper than that. The details from there are things like: which hypervisor are you using, and how do you use it? Or are you going to use a cloud service provider to act, technically, as your hypervisor — borrowing, or rather renting, infrastructure from their massive server rooms and data centers, using their interface to launch your virtual machines, and then giving your employees logins so they access them from their own computers? That's where the details come in, but the overall concept is that simple. If you rent services from a cloud service provider, you're technically in a hosted environment: from your current computer and its operating system you log into a web browser, go into Google Cloud, for example, and deploy a hundred virtual machines using that provider's hypervisor. For each of those virtual machines you get an IP address, or effectively a login link, and for that login link there's a username and password given to a person, and that person gets access to that particular virtual machine or web application from their own computer. That's really as deep as you need to go to understand how cloud environments and virtual machines work; once we get that, we can go into the nitty-gritty — this cloud service provider does this, that one does that — but it's all essentially the same concept, just with a bit more nuance. So that's it; that's the difference between a Type 1 and a Type 2 hypervisor.

If we want to look at specific Type 1 hypervisors: KVM, the Kernel-based Virtual Machine, acts as a Type 1 hypervisor and is integrated directly into the Linux kernel. It sits right on top of the physical hardware and transforms the Linux operating system into a very powerful and efficient virtualization host, capable of running multiple VMs with various guest operating systems, and this is typically done in a server type of environment. Because KVM is integrated with the Linux kernel — and as you already know, the kernel connects the user to the actual physical infrastructure — it's highly efficient and able to leverage the existing Linux infrastructure. That integration allows KVM to take advantage of Linux features like memory management, process scheduling, and input/output handling, giving robust performance and scalability, and it also makes allocating resources easy and efficient. It uses hardware-assisted virtualization, which is characteristic of a Type 1 hypervisor, supported by processors with Intel VT-x or AMD-V technology — you should recognize Intel and AMD just by name, since they make processors, computer chips, and really excellent graphics cards. This hardware support allows KVM to efficiently allocate resources like CPU, memory, and I/O to virtual machines, keeping performance and overhead balanced. So, basically: KVM is a Type 1 hypervisor that sits on top of the physical computer, and from there you launch a variety of virtual machines. It has support for many guest operating systems, meaning you can launch Windows, Linux, BSD, and other OS types from KVM, and each virtual machine runs its own OS, configured with its own hardware specification — how much CPU it will use, how much RAM it gets, how much storage, and so on. It's the same thing any Type 1 hypervisor does; this specific one is of note for us because it's the Linux virtual machine manager, the Linux hypervisor, while still being compatible with Windows, Linux, and any other operating system you'd want to install in your virtual machines.

virsh is the command-line tool, the command-line interface, that interacts with KVM; this is the interface you'd use to manage KVM-based VMs (KVM-based VMs — kind of a tongue twister). It's part of the libvirt virtualization toolkit, which provides the API to interact with hypervisors, including KVM as well as a variety of others. So virsh is the command-line interface for interacting with KVM to manage your virtual machines. If you want to start a virtual machine via the command line using virsh, you run virsh start followed by the name of the VM — whatever VM name you designated previously — and it starts that virtual machine up. It's very intuitive, a very easy-to-understand command line: you run the start command and give it the name of the VM so it actually starts the VM. For example, if the machine's name were my-virtual-machine, you would just say virsh start my-virtual-machine.
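Here's that virsh interaction sketched end to end; my-virtual-machine is just a placeholder for whatever name you gave the VM when it was defined, and the list and shutdown commands are the ones we cover next:

    virsh start my-virtual-machine      # boot a defined VM
    virsh list                          # show running VMs with their IDs, names, and states
    virsh shutdown my-virtual-machine   # ask the guest OS to shut down gracefully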
Then you can list the running virtual machines using virsh list — very, very simple to understand. It lists all the currently running virtual machines, displaying their IDs, their names, and the states they're currently in. In an example output you might see two virtual machines, my-virtual-machine and another-vm, both in the running state; very easy to read. If you want to shut something down, you use shutdown, the opposite of start (it reminds me of Kevin Hart: "hey, shut it down"). You'd run virsh shutdown followed by the VM name, and it shuts the machine down gracefully — I love that part; it gracefully shuts down that virtual machine, making sure everything is in order. The example would be virsh shutdown my-virtual-machine. A couple more examples, just to engrain this in your head: virsh start ubuntu-vm starts up the Ubuntu VM; virsh list displays all the currently running virtual machines on your screen; and virsh shutdown ubuntu-vm stops the Ubuntu VM from running. To recap: KVM is the Type 1 hypervisor integrated into the Linux kernel; it enables virtualization on Linux hosts, and you can run multiple VMs with a variety of different operating systems. Our basic commands are: start a VM, list the running VMs, and shut a VM down. Actually deploying new virtual machines with KVM is going to be outside the scope of this particular discussion; I just want you to know that the tool used to interact with KVM, the Kernel-based Virtual Machine, is the virsh command line, and from there the commands are as simple as starting something that's already been deployed under that hypervisor — the KVM side gives you the name of the virtual machine, and from there you start it, stop it, or list what you have available, and so on.

VirtualBox is another really commonly used, widely used Type 2 hypervisor, so as we reviewed with our hypervisor types, this one sits on top of a host machine. It's developed by Oracle, and it's very popular because it's compatible with different operating systems like Linux, Windows, and macOS, and it's commonly used for testing and development environments. It provides an easy-to-set-up, flexible platform — it's literally like installing any software, walking through the prompts and the installer wizard — for running multiple operating systems on a single machine. Some of the key features: it's compatible with a variety of platforms, so you can run it on many host operating systems, which makes it versatile, and you can run a bunch of guest operating systems, meaning Windows, Linux, macOS, Solaris, and others. It's cross-platform compatible: you can run it on Windows or Linux to launch Windows or Linux — it's compatible across the various platforms. It's very easy to use, as I mentioned: as simple as a couple of clicks to download the install files, installing it, and then using the GUI, the graphical user interface, to go through the prompts and click the various buttons needed to start up a new virtual machine. There's a lot of documentation available for VirtualBox, because it's one of the most commonly used tools for virtualization, one of the most commonly used hypervisors. And if you really want to be nerdy about it, you can use its command-line interface for management, automation, scripting, and a variety of different tasks. If you really want to get good at VirtualBox, and virtualization in general — which I recommend — I'd encourage you to look into it. I'm not going to go deep into the VirtualBox command line and its use cases here; there are a lot of tutorials available, and if enough people want to see it, I may create a future video on VirtualBox, since it's one of the most commonly used tools for virtualizing VMs.

A couple more key features: there's snapshot functionality, which allows you to take snapshots of the current state of a VM, meaning you can take backups of the computer very easily. As I mentioned earlier, it's as easy as clicking one of the virtual machines in your list of VMs and taking a snapshot of it, backing up its contents — very useful, and a very easy way to create virtual images, so to speak, snapshots of a virtual machine to keep as your backup. There are also Guest Additions, tools you can use to enhance the performance and usability of the guest operating system: you can improve the graphics of a guest OS, have shared folders between all of your operating systems, or get mouse integration. That last one's kind of a gimme, but the shared folders part is very important, and being able to improve a guest OS's graphics right from your virtualization hypervisor is also very cool. And for the nerds, VBoxManage is the command-line interface for managing VirtualBox VMs. This is the platform you'd use to virtualize machines via the command line, or to create scripts for virtualizing machines — which is where scalability comes in, in an easy, convenient format: once you learn how to script, you can launch a dozen machines with a single script, which makes virtualizing VMs even easier. So VBoxManage is the CLI for launching virtual machines. You'd run VBoxManage startvm followed by the VM name — a little more of a mouthful than virsh, but basically the same concept: you call the VBoxManage tool, give it the startvm subcommand, and then give it the name of the virtual machine. As an example, if ubuntu-vm were the name of the virtual machine, you'd just say VBoxManage startvm ubuntu-vm. If you want to list your VMs, it's the same idea — list vms, very simple — so instead of virsh list, it
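The VBoxManage equivalents look like this; ubuntu-vm is a placeholder VM name, and the last line is an aside for when you'd rather request a graceful shutdown than force power-off:

    VBoxManage startvm ubuntu-vm                    # start a VM (add --type headless to skip the GUI window)
    VBoxManage list vms                             # list registered VMs with their names and UUIDs
    VBoxManage controlvm ubuntu-vm poweroff         # force the VM off, like pulling the plug
    VBoxManage controlvm ubuntu-vm acpipowerbutton  # graceful shutdown via a simulated ACPI power button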
is vbuck manage list VMS so as it’s kind of as simple as just using verse and then it would just list all of the vmss that are registered with virtual box and it displays their names their uu IDs and so on and so forth if you wanted to look at the output as an example this is what it looks like so you have the Ubuntu VM and the Windows 10 VM so we have a Linux VM here as well as a Windows VM and Microsoft and these are The UU IDs that are associated with these virtual machines that we’ve launched using virtual box so if you box manage control VM VM name power off again more of a mouthful than using the Verge command but this forces the specified VM to power off so you would replace it with the name of the VM that you want to turn off vbox manage control VM VM name power off this would be the example so ubun to VM being powered off and this is what we would do to do that so A couple of examples just to kind of review these commands that we have so vbox manage start VM Deb and VM would start the DN VM list the VMS with list all of the registered VMS with their names and their uu IDs the stopping of a VM will be done with control VM give it the name of the virtual machine and then give it the power off command and in summary we have the vbox manage commands for managing VMS at the bottom right here and uh we have the reference of virtual box which is I would say probably one of the most HP popular if not the most popular type 2 hypervisor that’s super super easy to download install and then it’s very flexible because you can run multiple operating systems on a single computer and it’s used for testing development and a variety of different tasks because it’s so compatible with various hosts and guest operating system so again it has cross compa compatibility between the host operating system and the guest operating system which makes it again one of the most popular hypervisors that exists on the market doc ERS and containers are The Replacements that we have for file systems and uh partitions essentially so a doer is a popular popular containerization tool that enables us to package applications and their dependencies into uh portable containers essentially um these containers can run consistently across different environments ensuring that the application behaves the same regardless of where it’s being deployed so you have have a container a compartment so to speak that includes a variety of different applications and all of the dependencies for that application to run and then you can launch this Docker uh within a Windows computer a Linux computer or so on and so forth and this would be all done through the virtualized environment right so this is something that a virtual machine would have access to so a container versus the virtual machine itself if you have to think about the comparison between the two uh containers are the isolated environments that share the host operating systems kernel they’re lightweight fast to start and they don’t require a full operating system okay containers include everything needed to run the application like the code the runtime Etc but it’s not an actual virtual machine so the virtual machine is not the uh or excuse me the container is not the OS right so it’s an isolated environment that shares the OS and then it has the application and the dependencies that would need it to run right so the virtual machine is the OS the container is the environment that is using that OS okay so uh you can have uh and so as as the comparison here we also have the virtual 
Docker and containers are, in a sense, the lightweight alternative to full virtual machines with their own file systems and partitions. Docker is a popular containerization tool that enables us to package applications and their dependencies into portable containers. These containers run consistently across different environments, ensuring that the application behaves the same regardless of where it is deployed: you have a compartment, so to speak, that includes an application and all of the dependencies that application needs to run, and you can launch it on a Windows computer, a Linux computer, or inside a virtualized environment that a virtual machine provides.

Comparing a container to a virtual machine: containers are isolated environments that share the host operating system's kernel. They are lightweight, fast to start, and do not require a full operating system. A container includes everything needed to run the application, like the code and the runtime, but it is not an actual virtual machine and it is not the OS. The virtual machine is the OS; the container is the environment that uses that OS. A VM runs on the hypervisor and includes its own operating system, so a container runs either on top of a VM's OS or directly on the host OS, which is your computer. Each VM operates independently with its own OS, which increases resource usage, as we have already established; containers do not run their own OS, so they are more lightweight and faster to start.

Because Docker containers run on anything that supports Docker, they are consistent across development, testing, and production environments. They are isolated, so you can manage the file system and processes inside each one across the variety of operating systems they run on, which means multiple applications can run on the same host without interfering with each other. That is really just a fancy way of saying you can compartmentalize a group of applications: separate them from each other, run them independently of each other, yet on the same host operating system. They are also portable, meaning an image can essentially be transferred via email or a file share. The benefits are many: they are efficient and lightweight, as mentioned, using few resources from the hardware or OS, and starting one feels more like launching a piece of software than booting a machine, because that is essentially what it is. They are scalable, meaning they can easily be scaled up or down based on demand, they can be shared across a variety of operating systems, and they are ideal for microservices and cloud-native applications. That is really where they come into play: cloud providers offer container images with the applications pre-installed, and you can download and deploy one to a single virtual machine fairly easily, or to a hundred of your virtual machines just as easily, because they are so compatible and they scale in both directions.

To run one, you use the docker command; docker is the tool that runs a container from an image. The -it options allow you to interact with the container via the terminal, so docker run -it <image-name> starts the named image interactively; you replace <image-name> with the image you want to run. As an example, docker run -it ubuntu runs Ubuntu, but technically as an application rather than as an operating system, which is why it starts so quickly: it uses the resources of the host operating system rather than booting its own, and that is also part of why containers are so native to cloud environments, where shared compute resources launch workloads in much the same way.
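Here is a minimal sketch of that interactive run; the image name ubuntu is the example used above:

    # Start an Ubuntu container with an interactive terminal
    # (-i keeps stdin open, -t allocates a pseudo-terminal)
    docker run -it ubuntu

    # You land in a shell inside the container; typing `exit` leaves
    # the shell and stops the container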
Then you have docker ps to list the containers that are currently running, displaying their IDs, names, statuses, and more, and this is done from the command line. In the example output you can see the container ID; the image, meaning what is actually running; the command it was started with, which is a bash shell in this case; that it was created two hours ago and has been up for two hours; that there are no ports mapped; and the name that has been assigned to it, awesome_wing in the example. If you want to stop that exact container, you just run docker stop <container-id>, giving it the container ID shown in that output, and just like that it stops the container from running.

If you want to pull an image, meaning download the specified Docker image to your local machine, the host computer you are working on, you use docker pull followed by the image name: for example, docker pull ubuntu, or docker pull nginx, which pulls the latest nginx image from Docker Hub onto your computer. Docker Hub is the registry where all of this is listed: the various Ubuntu, nginx, and other container images are all stored there, and from there you can run them, download them onto your local computer, or install them on your virtual machines.

If you want to remove something, it is as simple as using the remove command. docker rm removes a stopped container; you actually have to stop the container first, and then you can remove it from your list of containers. It is fairly simple and very intuitive: you give it the ID, and as we already covered, the same ID you used to stop a container is the one you use to remove it. If you want to remove an image, you use docker rmi instead. The plain rm command needs the container ID, but for removing images the image name is far more convenient than an ID (unless you are copy-pasting), so you just run docker rmi followed by the image name and it removes the specified Docker image. The same example applies here: typing ubuntu is much easier than a long ID string, so docker rmi ubuntu removes the Ubuntu image from your local machine.
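Putting those pieces together, a minimal sketch of the basic image and container lifecycle looks like this; the image name nginx is the example from above, and the container ID shown is hypothetical (use whatever docker ps reports on your machine):

    # Download the latest nginx image from Docker Hub
    docker pull nginx

    # List running containers and note the CONTAINER ID column
    docker ps

    # Stop, then remove, the container by its ID (hypothetical ID shown)
    docker stop 3f2b1c9a8d7e
    docker rm 3f2b1c9a8d7e

    # Remove the image itself, by name
    docker rmi nginx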
In summary, Docker simplifies the creation, deployment, and running of applications in isolated containers; that is basically what it does. The application is stored inside a container, and Docker is the tool you use to deploy it, create it, download it to your local machine, run it, stop it, and so on. Inside those containers can be a variety of things, including what you saw as that running Ubuntu image, which is technically not its own operating system because it still runs on top of your host OS. Doing it this way is more lightweight and faster than a traditional virtual machine, because actually starting a VM takes more time and more resources; it is heavier duty than running a container that holds just the specific set of applications you want. So again: you are not launching the Ubuntu operating system, you are launching the applications stored inside a container that resembles what launching Ubuntu would be like, while your host OS acts as the actual operating system. The same goes for an nginx container: it holds the nginx server and the applications and dependencies connected to running it, you launch it, and your nginx server is up. The commands above serve as your cheat sheet for all of this, and we will recap them in the Docker management section further below.

Moving on: we have talked about virtualization, virtual machines, Docker, and containers, so now we can talk about cloud administration, because this is essentially what virtualization becomes at scale; you are dealing with the cloud, with virtual machines and virtual computers. Cloud computing is the technology that provides on-demand access to computing over the internet: on-demand access to virtual machines, containers, servers, storage, databases, networking, software, and a variety of other resources, all of which you can provision and manage with the click of a button. Cloud computing allows businesses and individuals to leverage these powerful resources without the need for physical hardware or extensive IT infrastructure, which is what makes it so popular.

Infrastructure as a Service (IaaS) is the first model, and as the name says, it is infrastructure provided as a service by a cloud service provider: virtualized hardware resources, such as virtual machines, storage, and networks, which allow the user to deploy and manage operating systems, applications, and development environments. The example is AWS EC2; AWS has a variety of different services under its umbrella, and EC2 is compute capacity in the cloud, enabling users to run applications on virtual servers. That is Amazon's version of IaaS.
The next is Microsoft Azure Virtual Machines, essentially Microsoft's equivalent of AWS EC2, which provides a range of virtual machine sizes and configurations and supports both Windows and Linux operating systems. Then there is Google Compute Engine, Google's version of the same thing, offering scalable, flexible compute resources for running large-scale workloads on Google's infrastructure.

Platform as a Service (PaaS) is the next level up. If we treat IaaS as the base level, because the provider is supplying the infrastructure, PaaS is a development and deployment environment in the cloud: it provides the tools and services to build, test, deploy, and manage applications without worrying about the underlying infrastructure, meaning the virtual machines and physical hardware that would otherwise be required. You are simply using the provider's platform to develop, deploy, and test your applications, which is the next step up from buying or renting raw infrastructure. AWS Elastic Beanstalk is the AWS version, which helps you develop, scale, and deploy web applications and services using popular languages. Google App Engine is the Google version, which helps you deploy applications on Google's infrastructure with automatic scaling and management. And the Microsoft version is Azure App Service, which helps developers build, deploy, and scale web apps and APIs quickly, with integrated support for various development languages. Basically, these three companies offer essentially the same thing across the board; the platforms have different interfaces, some more user-friendly than others, but for the most part they offer similar services.

Then you have Software as a Service (SaaS). If infrastructure is the base level and platform is the next level, software is the level above that: applications delivered over the internet on a subscription basis, which you access via a web browser without installing or maintaining anything; you just log in. Microsoft Office 365 is one of the most common examples: access to Microsoft Office applications like Word, Excel, and PowerPoint, connected to OneDrive, the storage portion of Office 365, and Teams, the sharing and collaboration portion. It is very similar to Google's offering, where Google Drive gives you access to Google Sheets and Google Docs; the software you are using is the word editor and the spreadsheet editor, running in the browser. Google Workspace is the SaaS bundle that combines it all, productivity and collaboration for working with your coworkers and everybody on your team, including Gmail, Google Drive, Docs, Sheets, and Google Meet for video conferencing. And then there is Salesforce, which is a really big one.
Salesforce is a customer relationship management (CRM) tool that helps businesses manage their customer relationships, streamline sales processes, and so on. Salesforce is software, it is available on most cloud service providers as well, and you can buy a license and have it installed on every employee's local computer, which is another version of software as a service.

As we have discussed at multiple points in this chapter, cloud computing helps you scale up or scale down based on the demands of your company. You can get a range of software deployed to all of your users and employees, launch a fleet of virtual machines using IaaS, and stand up a platform for your developers to write code and test their applications and cloud resources. And you can do this in both directions: as your company grows you add things with the click of a button, and as it shrinks and downsizes you remove things just as easily. It is very easily scalable either way.

It is also cost-efficient, and this is one of the main reasons companies move to cloud computing: for literally $20 or $30 a month you can start launching an environment with as much power and processing as you need, instead of buying a $5,000 computer up front. Depending on what you are looking for and what you need, the upfront cost is so much cheaper, and when you no longer need the resources, instead of worrying about what to do with that $5,000 computer, you just stop paying for it. Say you used it for six months and you are done; you stop paying and take it down. In the grand scheme of things it is so much more cost-efficient for a company, and maybe for an individual too, depending on who you are and what you need it for, but for companies it is pretty much a no-brainer to move into cloud computing and use these services.

Cloud services are also flexible and, obviously, accessible: they can be reached from anywhere with an internet connection. You just need your laptop to log in to your cloud service provider (CSP) and, from there, get access to whatever you were using from your home office, for example. They are reliable and available because of redundant locations: Amazon Web Services operates warehouses all over the globe that house the servers behind AWS, and if one of them goes down, service fails over to the next available facility in that region, so the person using it never has to worry about losing access to what they need. This is very important: the redundant server warehouses that exist literally all over the world are the main reason these services are so reliable and available all the time. Disaster recovery works along the same lines.
As long as you have enabled some kind of scheduled backup, and as long as you are paying your bill (that is the other part), you will never lose your data or your service; your web server or application server will always be on and running. And these were just the three major heavy hitters we have been discussing; there are a lot of other cloud service providers that are also very legitimate, with a lot of great resources available.

Then, of course, there are automatic updates. You do not need to update your computer or the service; it gets updated on your behalf, so you do not have to worry about keeping your IT team on top of it, and if you are the sole proprietor, the administrator, and the CEO all at once, you do not have to worry about updating anything, because it is done automatically, security included. It is a very convenient arrangement all around, and this one is especially significant: as penetration testing is done against these platforms and the providers upgrade their security infrastructure, you inherit those improvements as well. So cloud computing is a very flexible, scalable, cost-efficient, and frankly excellent way into virtualization, built on Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Here is what that looks like visually. With IaaS, what you are renting is essentially the racks of servers sitting inside those massive warehouses; that is what the infrastructure is. It can be used to deploy virtual machines and virtual computers for people to use, and those people can then install their own operating systems on top of them, or install their own applications and develop new ones, and so on. This is the most basic level of service; on top of it you can build essentially anything you want, but you would need a team of people to do it. Then you have PaaS: the provider gives you the servers, obviously, but also gives you the operating systems, so you can just launch a Windows machine or a Linux machine, instead of launching only a bare server and then having to use your own hypervisor to install the operating systems. You just launch the operating system and, from there, start developing your code or running your business, whatever it may be.
And with SaaS, the last one, all of those things are assumed to already be configured for you: you do no configuration at all and simply launch the application. Google Drive, for example; you just launch Google Drive. That is the most convenient member of this trio, because very little is required on your behalf; you log in to the application and start using the software, and that is basically it. So across the three models, IaaS requires the most configuration, because somebody needs to use a hypervisor, install the operating system (Linux, Windows, or whatever it is), and launch the virtual machines; SaaS requires the least configuration; and PaaS is the middle ground, where a couple of clicks get you a running Linux or Windows machine whose applications you then use. This is the trio that falls under "as a service": infrastructure, platform, and software.

It is probably also a good idea to talk about the cloud providers themselves. The big three, again, are Amazon (AWS), Microsoft, and Google. They offer comprehensive lists of services that are very comparable to each other; where someone chooses one over another largely comes down to personal taste and preference, and perhaps pricing, because for the most part they all offer a similar range of services.

AWS is the most popular. Believe it or not, Amazon is not just a marketplace; its biggest profit comes from its web services, because AWS largely resells access to the infrastructure Amazon already runs, at a variety of different tiers. It includes computing power, storage options, and networking capabilities, which makes it a fit for individual users, enterprises, government entities, and so on. (I host my websites through AWS, and I bought the Hackaholics Anonymous domain there as well.) Its key compute services are the Elastic Compute Cloud (EC2); Lambda, which is serverless computing, meaning you get access to computing power without needing an actual server, just using the platform; and Elastic Beanstalk, which is the platform-as-a-service offering. Storage is covered by the Simple Storage Service (S3), Elastic Block Store, and Glacier for long-term storage. AWS databases run across a variety of service levels as well: the Relational Database Service (RDS), DynamoDB, which is the NoSQL database, and Redshift for data warehousing, which edges into cold-storage territory.
For networking there is the Virtual Private Cloud (VPC), Route 53 for DNS services, and CloudFront as the content delivery network; that is the list of key AWS services. Then there are the management consoles and management tools: the AWS Management Console is the interface for managing AWS resources; the AWS CLI is the command-line interface for interacting with the services programmatically and in scripts; and the software development kits (SDKs) integrate AWS services into applications using programming languages. Python SDKs are a very common thing to reach for: when you want to interact with the API of almost any service, you look for a Python SDK, import it into your project file, and that SDK lets you interact with the service's API. AWS has one; you can import it into your Python project, and as long as you have your token and API key to confirm you actually have access, you can then interact with your AWS environment through its APIs. And just so you know, all of these providers have SDKs.

Microsoft Azure is the competitor to AWS, with seamless integration with Microsoft products and a wide range of cloud services covering compute, analytics, storage, and networking. On the compute and storage side there are Azure Virtual Machines, Azure Functions (the serverless computing offering), and Azure Kubernetes Service, which is a big one that a lot of job descriptions actually ask for, plus Azure Blob Storage, Azure Disk Storage, and Azure Files. For databases there are Azure SQL Database, Cosmos DB (the NoSQL database), and Azure Database for PostgreSQL and MySQL. Networking includes Azure Virtual Network, Azure Load Balancer, and Azure's Content Delivery Network (CDN). Its management tools are the Azure Portal, the Azure CLI, PowerShell (the scripting language, of course), and an SDK you can work with as well.

Then there is Google Cloud, known for its capabilities in data analytics and machine learning, with a robust set of cloud services that leverage Google's massive infrastructure for computing, storage, and application development. Google Compute Engine, Google Kubernetes Engine, and Cloud Functions all fall under compute; Cloud Storage, Persistent Disk, and Filestore under storage; Cloud SQL, Cloud Spanner (a relational database), and Firestore (the NoSQL document database) under databases; and Virtual Private Cloud (VPC), Cloud Load Balancing, and Cloud CDN under networking. Its management tools are the Cloud Console, which is the graphical user interface for managing all of your resources and assets; the gcloud CLI; and client libraries for integrating services into applications via programming languages, which are Google's equivalent of an SDK. As you can tell, every single one of these providers has a CLI, and every decent cloud computing provider should. Scalability and cost-efficiency we have obviously covered already, along with flexibility and security.
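As a concrete illustration of the CLI route, here is a minimal sketch using the AWS CLI, assuming you have already run aws configure to store your access key, secret key, and default region:

    # Verify which identity your stored credentials resolve to
    aws sts get-caller-identity

    # List your EC2 instances, including their IDs and states
    aws ec2 describe-instances

    # List your S3 buckets
    aws s3 ls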
On top of those come the advanced services: the AI, machine learning, and big data analytics offerings that enable you to innovate and stay competitive. It is becoming more and more of a requirement in the modern business world to have some kind of AI assistant or AI integration in your infrastructure, to make things more efficient and faster to respond; machine learning, to stay on top of your competition with the research being done and to improve your data sets, because machines learn faster, better, and more consistently than human beings do; and big data analytics, because it is staggering how much data actually exists in the world and how important it is to ingest all of it and make sense of it. These things are connected to each other: big data analytics feeds machine learning, which powers AI, and at this stage of the modern technology world, with how tightly technology is tied to business, they feel almost mandatory. If you want to be really good as a cybersecurity person, an administrator, or a tech person in general, you need to get comfortable with the concepts of machine learning, big data, and AI integrations, so that you can stay ahead of the curve against your competition in the IT, cybersecurity, pentesting, and system administration world.

In summary, these are the major cloud providers, but they are not the only ones; I would challenge you to Google who the major cloud providers are, see what you can find out there, and see how they rank in competitiveness against these big three. There are a lot of tools available, as we have discussed: infrastructure, platform, and software are all available as a service, and the providers ship a lot of great management tools, including consoles, command-line interfaces, and SDKs (software development kits) that integrate well with programming languages to provide further automation capabilities.

All right, so you have your VMs up, your containers, and all the things provided by your virtual environment; how do you manage them? That is what we talk about next. libvirt is the first command-line toolkit available to us: it exposes an API for interacting with VMs across a number of different platforms, such as KVM, QEMU, Xen, and VMware. It is consistent because it works across all of these platforms, and it is a very popular choice; we actually mentioned libvirt already, back in the virtualization portion, so hopefully you remember it. It offers a unified API for managing VMs across different hypervisors, which simplifies VM management as a result and means the same consistent set of commands and tools works basically everywhere, no matter what you are using as your virtualization technology. virsh is one of the key tools that comes with libvirt, and virt-install is the command-line tool for creating and installing a new virtual machine; those are the two command-line tools that come with the libvirt toolkit.
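To make the "works across hypervisors" point concrete, here is a minimal sketch: libvirt selects its hypervisor driver through a connection URI, and the URIs below are standard libvirt examples (adjust them to whichever drivers your system actually has installed):

    # Talk to the local KVM/QEMU hypervisor
    virsh --connect qemu:///system list --all

    # The same command pointed at a Xen host
    virsh --connect xen:///system list --all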
libvirt is compatible, as we have already discussed, with the various virtualization technologies, which makes it a versatile tool for all kinds of virtualization environments. Here is what a command looks like when using libvirt, specifically virt-install, to create a virtual machine. There are a few elements to go over: virt-install is the command-line tool that installs a new machine; --name is what you are going to call it; --memory is how much RAM should be allocated to it; --vcpus is how many virtual CPUs it gets; --disk takes the path where this particular disk image will live, along with its size; and --os-variant identifies the variant of the OS you are installing, which corresponds to the OS image you want to use to complete your installation. The full breakdown: virt-install creates and installs a new virtual machine under libvirt; --name is the name you give it, what you want the virtual machine to be called; --memory 2048 allocates 2048 megabytes of RAM, which is basically 2 GB, to this VM; --vcpus 2 is the processing power, assigning two virtual CPUs to the machine; --disk is the path for the VM's disk image with a size of 20 gigabytes, and that 20 GB is storage, not RAM or CPU, just storage for this particular machine; and --os-variant is the operating system, in this case ubuntu20.04. With real values filled in, the name would be my-ubuntu-vm, with the same RAM allocation, the same vCPU assignment, the full path to the disk image, 20 GB of storage allocated, and the ubuntu20.04 OS variant. And, look at that, there is a --cdrom image as well: in this case the OS installer is being pulled from a CD-ROM image, so --cdrom is the path to that particular ISO.

And this is how you destroy one (I love that word). virt-install is how you create a VM; to destroy it, you use virsh. You give virsh destroy the virtual machine's name, my-ubuntu-vm in this case, and it forcibly stops the specified VM; my-ubuntu-vm is, in this particular case, being destroyed. If you want to list your virtual machines, you run virsh list --all, which lists everything being managed by libvirt, showing their IDs, their names, and their current state: running, paused, or shut off.
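Assembled into runnable form, the example looks roughly like this; the disk path is a conventional libvirt location and the ISO path is a placeholder, so substitute your own:

    # Create and install an Ubuntu 20.04 VM: 2 GB RAM, 2 vCPUs, 20 GB disk
    virt-install --name my-ubuntu-vm --memory 2048 --vcpus 2 \
      --disk path=/var/lib/libvirt/images/my-ubuntu-vm.qcow2,size=20 \
      --os-variant ubuntu20.04 --cdrom /path/to/ubuntu-20.04.iso

    # Forcibly stop (destroy) the VM, then list all VMs and their states
    virsh destroy my-ubuntu-vm
    virsh list --all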
Only the VM that is currently running actually has an ID associated with it. The example output from virsh list --all shows three machines, my-ubuntu-vm, a test VM, and an old VM, and because the test VM is shut off and the old VM is paused, no IDs are assigned to them; the running machine is the only entry with an ID.

Here is the full flow as an example. Say we want to create a virtual machine called my-centos (CentOS being one of the Red Hat distributions of Linux): 4 GB of RAM assigned to it, four vCPUs assigned to it, the disk path for the VM's image with 40 GB of storage allocated, the OS variant set to CentOS, and the installer pulled from the CD-ROM image holding the CentOS ISO. That is how you install it. To destroy that same machine you use virsh; again, virt-install installs it, virsh destroy immediately stops it, and virsh list --all lists everything.

In summary, libvirt is the toolkit for managing virtual machines across a variety of virtualization platforms. It is consistent, meaning you can use those same exact commands across different hypervisors and virtualization environments and they all run exactly as they are, with virsh for management and virt-install for installation of the various virtual machines you have.
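That CentOS walkthrough, assembled into commands; again, the disk and ISO paths are placeholders, and the exact --os-variant string depends on the osinfo database installed on your system:

    # Create a CentOS VM: 4 GB RAM, 4 vCPUs, 40 GB disk
    virt-install --name my-centos --memory 4096 --vcpus 4 \
      --disk path=/var/lib/libvirt/images/my-centos.qcow2,size=40 \
      --os-variant centos-stream9 --cdrom /path/to/centos.iso

    # Tear it down and confirm
    virsh destroy my-centos
    virsh list --all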
Okay, so now we have to manage our Docker containers. To be precise, Docker is the tool, and the containers are the things we manage with it. Docker simplifies the deployment and management of applications stored inside containers, along with the microservice architectures around them, by packaging applications and all of their dependencies into isolated containers. So you have Docker, which is the tool, and the container, which houses the applications, the dependencies of those applications, and everything else used in what is called a microservice. Think of that Ubuntu example we saw earlier: it ran the various applications and dependencies that would be ingrained in an Ubuntu environment without actually launching an Ubuntu operating system.

First you pull an image, meaning you download it from Docker Hub (which is hosted alongside a variety of cloud service providers, or from any other container registry, really) onto your actual local machine. For example, we pull nginx and bring that image down to our computer. Then you run it: docker run -d <image-name> starts the new container in detached mode, meaning it runs in the background, and it returns the container ID for you; you replace <image-name> with the Docker image you want to use, nginx in the earlier case. Once you have that ID you can do a variety of things with the container, and when you are done with it and want to remove it, you provide the container ID to docker rm (you can also recover IDs at any time with the list commands below). To remove an image you downloaded by name, you use docker rmi instead; note the difference, as docker rm my-container removes the container, while docker rmi removes the image itself.

Then there are the logs of those containers. docker logs retrieves the logs of the specified container, meaning the authentication and authorization entries, the logs for any errors that may have taken place, and ordinary interactions with that particular container. (This is the raw log stream; for deeper analysis you would feed it into a viewer or a SIEM tool.) So the logs of my-container would be retrieved by running docker logs my-container.

docker ps lists all currently running containers, which is how you get the actual container names, their IDs, their statuses, and other details, and therefore how you get the ID you need in order to stop a container, remove it, or pull its logs. In the example output, small as it is, you can see the container ID; the image, which is nginx in this case; the command entry, which is the docker-entrypoint script (and it runs much longer than what fits on screen); that it was created two hours ago and has been running for two hours; that there are no ports associated with it, because it is running on the local machine; and the name serene_bassi that has been assigned to this nginx server. Then there is docker ps -a: not a public service announcement, but ps -a, the listing of all containers including the stopped ones. Where docker ps shows you what is currently running, docker ps -a shows everything else as well, the containers that are stopped or paused, along with their IDs and names and so on.

And then you can stop one that is currently running: docker stop followed by the container identifier. These are very easy commands to remember, and you have this as a cheat sheet as well. One clarification on the earlier slide, which had a small glitch: docker stop accepts either the container ID or the assigned name, so you can hand it the ID string or the nickname serene_bassi and both work. If you want to remove an image by its name, you use docker rmi followed by the image name, which in this example is nginx.
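A minimal sketch of that background workflow, using nginx as above; the container name serene_bassi is just the auto-generated example name from the output shown, so yours will differ:

    # Run nginx detached; Docker prints the new container's ID
    docker run -d nginx

    # Show running containers, then every container including stopped ones
    docker ps
    docker ps -a

    # Fetch the container's logs, stop it, and remove it, all by name
    docker logs serene_bassi
    docker stop serene_bassi
    docker rm serene_bassi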
And that is it: Docker is the tool that helps you deploy and manage applications stored inside containers, and containers house the applications along with all of the dependencies needed to run them. We have already been through all of the key commands, so here is the cheat sheet: docker pull downloads an image onto your local machine, docker run runs it, docker rm and docker rmi remove a container and an image respectively, docker ps lists the running containers, and docker ps -a lists all of them, including the ones that are not running. That is basically it as far as Docker itself is concerned.

You also need to learn how to orchestrate, or manage, your containers: to make sure, number one, that they are not past their life cycle, and that when they are done being used you get rid of them, especially in a large-scale environment, because leftover containers take up a lot of storage. Kubernetes is what gets used for that, the orchestration of the containers in an environment, and Docker Swarm is another tool that helps you automate the deployment, scaling up or down, and management of containerized applications, making sure they are running when they need to be and running efficiently, and that when they are done, you get rid of them.

Kubernetes is a very popular one, also abbreviated k8s. It is an open-source platform for automating the deployment, scaling, and management of containerized applications, and it is great for almost anything, especially in a large environment. It handles automatic deployment and scaling, working with a variety of script templates plus the rules and policies you assign to it: you can automatically deploy something and say, for this particular nginx image, I want you to create 100 different replicas and deploy them, and it is very, very good at automatically deploying something like that. It can load balance and route traffic, so based on the traffic volume of your environment and the company you work with, you can make sure the physical servers do not crash; load balancing is directly related to that, balancing the incoming load and routing that traffic efficiently so the containers run smoothly while the physical infrastructure stays up and available without anything crashing. Self-healing is an interesting one: if anything fails to start it gets replaced or automatically restarted, and anything that repeatedly is not working gets killed. I love that: it kills containers that do not respond to user-defined health checks, and it does not advertise them to clients until they are ready to serve. It does everything it needs to do in the background, restarting a container, healing it if needed, or killing it, wiping it, and bringing up a working copy, and only once it is actually ready does it present it to the client or user so they can use it.
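A quick way to see that self-healing behavior for yourself, assuming a deployment named nginx with multiple replicas already exists (the pod name below is hypothetical; copy a real one from your own kubectl get pods output):

    # Delete one pod out from under the deployment
    kubectl delete pod nginx-7c5ddbdf54-abcde

    # List the pods again: a freshly created replacement appears automatically
    kubectl get pods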
Kubernetes can also help you manage storage, making sure persistent storage is actually provisioned and mounted as needed, which it handles beautifully and more or less automatically. Then there is the security portion: if you have sensitive information, which a lot of people do, like passwords and API keys, Kubernetes helps you manage those things securely, which basically means it either will not display them or will display them only in encrypted form, values that look like a bunch of randomized text nobody can make sense of. They cannot be decrypted without the key that has been assigned to the administrator or the user; a strong algorithm handles the encryption, and anybody without the attached key simply cannot decrypt or decode those contents.

Docker Swarm is Docker's native clustering and orchestration tool. It essentially does what Kubernetes does, just a little more simply, and it helps you orchestrate and manage your containers, especially in environments that are already using Docker: setup and management are simplified, and it integrates seamlessly with the Docker tools, so it works very well if you are already running Docker in your environment. It can scale your services up or down, very similar to Kubernetes, by adjusting the number of replicas, as I already mentioned; you can just say, create 50 versions of this, or 100 versions, and so on. Load balancing is the same story: it helps with the network traffic so the services do not crash and do not overexert the physical infrastructure. And it is secure by default, using TLS encryption, the more advanced successor to SSL that typically secures HTTPS web traffic on port 443. TLS is a very powerful encryption standard for secure communication between the nodes in the swarm cluster, which is a fancy way of saying the various containers and tools trying to communicate with each other inside this large environment, and it provides that encrypted conversation between all of those nodes seamlessly and securely.

To do these things with Kubernetes, take deploying an nginx environment as the example. kubectl is the command that interacts with Kubernetes: kubectl create deployment nginx --image=nginx creates a deployment named nginx using the official nginx image, and it is very, very simple to do. Scaling it is where things get interesting, and a lot of this can also be embedded inside scripts, so you just run the script and it does this for you: kubectl scale deployment nginx --replicas=3 scales the nginx deployment to three replicas, just like that; how easy is that? That is crazy. Then kubectl get pods lists all the pods running in the cluster, which contain those three replicas, or however many replicas you have.

The Docker version starts with swarm initialization: you use docker swarm init to initialize the swarm first, and the resulting cluster is what you then manage and interact with. docker service create --name web --replicas 3 -p 80:80 nginx creates a service named web with three replicas using the nginx image, and maps port 80 on the host to port 80 in the container, which is exactly what web traffic uses; port 80 carries HTTP, so it is only appropriate that we use it for the web service created with Docker Swarm. If you want to list all of the services, you use docker service ls, which is intuitive given that ls is the command we use to list the contents of directories in Linux.
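Side by side, the two flows described above look like this; the deployment and service names are the examples used in the text:

    # Kubernetes: create a deployment, scale it to three replicas, inspect
    kubectl create deployment nginx --image=nginx
    kubectl scale deployment nginx --replicas=3
    kubectl get pods

    # Docker Swarm: initialize, create a three-replica web service on port 80, list services
    docker swarm init
    docker service create --name web --replicas 3 -p 80:80 nginx
    docker service ls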
And that is it for container orchestration. Both of these tools obviously go much deeper, and there is a lot of documentation and a lot of tutorials available for Docker Swarm as well as Kubernetes; the point here is for you to know about them, so that if you want to do more homework and self-teaching, you know where to go and what to look for. They are very powerful tools, because they both automate the deployment of multiple replicas of virtually anything: a Linux virtual machine, an nginx web server, anything that can be deployed inside a container, and a thousand times over if needed. It is as simple as saying replicas equals 1000, and all of a sudden you have a thousand replicas of that same container. Kubernetes and Docker Swarm are a very powerful pair of tools, and there are other container orchestration tools as well; these are just the most popular and the most relevant to the conversations we have had. I do encourage you to look into orchestrating containers with Docker, Docker Swarm, Kubernetes, or anything similar, because it will make you much more capable as a Linux administrator and, overall, as a system administrator.

This training series is sponsored by Hackaholics Anonymous. To get the supporting materials for this series, like the 900-page slideshow, the 200-page notes document, and all of the pre-made shell scripts, consider joining the Agent tier of Hackaholics Anonymous. You will also get monthly Python automations, exclusive content, and direct access to me via Discord. Join Hackaholics Anonymous today.

By Amjad Izhar
Contact: amjad.izhar@gmail.com
https://amjadizhar.blog
Affiliate Disclosure: This blog may contain affiliate links, which means I may earn a small commission if you click on the link and make a purchase. This comes at no additional cost to you. I only recommend products or services that I believe will add value to my readers. Your support helps keep this blog running and allows me to continue providing you with quality content. Thank you for your support!
