    Comprehensive Guide to GitLab CI/CD Pipelines and AWS Deployment

    This guide covers GitLab’s CI/CD features for software development and deployment. It begins by explaining the fundamentals of creating and troubleshooting GitLab pipelines using YAML configuration files, including common mistakes and account verification. It then introduces continuous integration (CI) principles and demonstrates how to automate build and test processes for a React website within GitLab. It continues with continuous delivery (CD), detailing the deployment of the website to AWS S3 and covering bucket policies, public access, and environment management. It culminates in more advanced deployment strategies using Docker and AWS Elastic Beanstalk, encompassing containerization, image management, and automated application updates.

    GitLab CI/CD and DevOps Study Guide

    Key Concepts Review

    This section outlines the core concepts covered in the source material. Review each point to solidify your understanding; a minimal pipeline sketch follows the list.

    • GitLab CI: Understand what GitLab CI is and its role in automating software development workflows.
    • CI/CD Pipelines: Describe the concept of Continuous Integration and Continuous Delivery/Deployment pipelines.
    • DevOps: Explain the fundamental principles of DevOps and how GitLab CI supports these principles.
    • Automation: Recognize the importance of automation in the software development lifecycle.
    • GitLab Projects: Understand how to create and manage projects within GitLab.
    • .gitlab-ci.yml: Explain the purpose and structure of the GitLab CI configuration file.
    • Jobs: Define what a job is in GitLab CI and how it’s configured.
    • Scripts: Understand how to define and execute shell commands within a GitLab CI job.
    • Stages: Explain how stages organize jobs in a pipeline and their sequential execution.
    • Docker: Understand the basics of Docker and its use in GitLab CI for consistent environments.
    • Docker Images: Explain how to specify and use Docker images in GitLab CI jobs.
    • YAML: Understand the basic syntax of YAML, including key-value pairs, lists (sequences), and indentation.
    • GitLab Runner: Understand the role of the GitLab Runner in executing pipelines.
    • Artifacts: Explain how to define and use job artifacts to pass data between jobs.
    • Testing: Recognize the importance of automated testing within CI/CD pipelines.
    • Variables: Understand how to define and use local and global variables in GitLab CI, including CI/CD variables and environment variables.
    • Merge Requests: Describe the purpose of merge requests and their integration with GitLab CI pipelines.
    • Rules: Explain how rules are used to control when jobs are executed based on conditions like branch names.
    • AWS (Amazon Web Services): Understand the basics of AWS and its role as a cloud provider for deployment.
    • AWS S3 (Simple Storage Service): Explain the purpose of S3 for storing and serving static website files.
    • AWS CLI (Command Line Interface): Understand how to interact with AWS services using the AWS CLI.
    • IAM (Identity and Access Management): Explain the purpose of IAM users and policies for managing access to AWS resources.
    • Environments: Understand how GitLab environments help manage and track deployments to different stages (e.g., staging, production).
    • Continuous Delivery vs. Continuous Deployment: Differentiate between these two concepts and how GitLab can support both.
    • Docker Registry: Understand the purpose of a Docker registry for storing and sharing Docker images, including the GitLab Container Registry.
    • Services (in GitLab CI): Explain how services, like Docker in Docker (DinD), can be used to provide necessary tools within a CI/CD job.
    • Dockerfile: Understand the purpose and basic syntax of a Dockerfile for defining Docker images.
    • Elastic Beanstalk: Understand the basics of AWS Elastic Beanstalk as a Platform-as-a-Service (PaaS) for deploying web applications.
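
    To make these building blocks concrete, here is a minimal sketch of a .gitlab-ci.yml pipeline (the job name, image tag, and echo command are illustrative, not taken from the course):

    ```yaml
    # A pipeline with a single job. With no stages: defined,
    # the job falls into the default test stage.
    say_hello:
      image: alpine:3.16        # Docker image the GitLab Runner starts for this job
      script:                   # shell commands executed inside the container
        - echo "Hello from GitLab CI"
    ```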

    Short-Answer Quiz

    Answer the following questions in 2-3 sentences each.

    1. What is the primary purpose of GitLab CI?
    2. Explain the role of the .gitlab-ci.yml file in a GitLab project.
    3. Describe the relationship between jobs and stages in a GitLab CI pipeline.
    4. Why is Docker often used in conjunction with GitLab CI?
    5. What are GitLab Runners responsible for in the CI/CD process?
    6. How can you pass data between different jobs in a GitLab CI pipeline?
    7. What is the benefit of using environment variables in GitLab CI?
    8. Explain the purpose of AWS S3 in the context of deploying a static website.
    9. How does IAM help secure your AWS resources when using GitLab CI for deployment?
    10. What is the key difference between Continuous Delivery and Continuous Deployment?

    Quiz Answer Key

    1. GitLab CI’s primary purpose is to automate the software development lifecycle, specifically the building, testing, and deployment of code changes. This automation ensures faster feedback, reduces manual errors, and enables more frequent releases.
    2. The .gitlab-ci.yml file is the configuration file for GitLab CI/CD pipelines. It defines the stages, jobs, and scripts that will be executed automatically whenever code is pushed to the repository or a merge request is created.
    3. Jobs are the individual units of work in a GitLab CI pipeline, defining specific actions like compiling code or running tests. Stages organize these jobs into logical phases that are executed sequentially, ensuring a defined workflow.
    4. Docker provides consistent and isolated environments for GitLab CI jobs. By specifying Docker images, you ensure that the build, test, and deployment processes have the necessary dependencies and configurations, regardless of the underlying infrastructure.
    5. GitLab Runners are the agents that execute the jobs defined in the .gitlab-ci.yml file. They pick up jobs from the GitLab server and run the specified scripts within the configured environment, reporting the results back to GitLab.
    6. Data can be passed between different jobs in a GitLab CI pipeline using artifacts. Artifacts are a collection of files and directories created by a job that can be downloaded by subsequent jobs in the same pipeline; see the sketch after this answer key.
    7. Environment variables in GitLab CI allow you to configure jobs dynamically without hardcoding values in the .gitlab-ci.yml file. This is useful for storing sensitive information like API keys, configuring different deployment environments, and making the pipeline more flexible and reusable.
    8. AWS S3 (Simple Storage Service) is used to store the static files (HTML, CSS, JavaScript, images) of a website in the cloud. GitLab CI can automate the process of uploading these files to an S3 bucket, which can then be configured to serve the website to users over the internet.
    9. IAM (Identity and Access Management) allows you to create users with specific permissions to access AWS resources. When using GitLab CI for deployment, you create an IAM user with limited permissions (e.g., only to upload to a specific S3 bucket) and use its credentials in GitLab CI variables, enhancing security by avoiding the use of root account credentials.
    10. Continuous Delivery is the practice of automating the release process so that code can be deployed to production at any time, but the final decision to deploy is made manually. Continuous Deployment goes a step further by automatically deploying every code change that passes the automated tests directly to production without manual intervention.
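
    As a concrete illustration of answers 3 and 6, the following sketch shows two stages and an artifact handed from one job to the next (job and file names are invented for illustration):

    ```yaml
    stages:
      - build
      - test

    build_job:
      stage: build
      image: alpine:3.16
      script:
        - mkdir build
        - echo "hello" > build/output.txt   # created inside the job's container
      artifacts:
        paths:
          - build/              # uploaded to the GitLab server when the job succeeds

    test_job:
      stage: test
      image: alpine:3.16
      script:
        - cat build/output.txt  # artifacts from earlier stages are downloaded automatically
    ```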

    Essay Format Questions

    1. Discuss the benefits of implementing a CI/CD pipeline using GitLab CI for a software development team. Consider aspects such as development speed, code quality, and deployment reliability.
    2. Explain the role of Docker in creating reproducible and consistent environments for CI/CD pipelines. Describe how specifying Docker images in GitLab CI helps achieve these benefits.
    3. Compare and contrast the use of global variables defined at the top level of .gitlab-ci.yml with CI/CD variables defined in the GitLab project settings. When might you choose one over the other?
    4. Describe the steps involved in deploying a static website to AWS S3 using GitLab CI. Include details about configuring the .gitlab-ci.yml file, setting up AWS credentials, and making the website publicly accessible.
    5. Discuss the transition from a Continuous Integration and Continuous Deployment pipeline to a Continuous Delivery pipeline. What changes would be necessary in your GitLab CI configuration and overall workflow to implement continuous delivery?

    Glossary of Key Terms

    • CI (Continuous Integration): A development practice where code changes are frequently integrated into a shared repository, followed by automated building and testing.
    • CD (Continuous Delivery): An extension of CI that ensures the software is always in a deployable state, allowing for manual deployment to production.
    • CD (Continuous Deployment): An extension of CI where every code change that passes automated tests is automatically deployed to production.
    • DevOps: A set of practices that integrates software development (Dev) and IT operations (Ops) to shorten the development lifecycle and provide continuous delivery with high software quality.
    • Pipeline: In GitLab CI, a pipeline represents the entire automated process from code commit to deployment, defined in the .gitlab-ci.yml file.
    • Job: A specific task or action executed by a GitLab Runner as part of a pipeline, defined under a stage in the .gitlab-ci.yml file.
    • Stage: A logical division within a GitLab CI pipeline that groups related jobs. Jobs within a stage run in parallel, and stages are executed sequentially.
    • Script: A set of commands (usually shell commands) defined within a GitLab CI job that the GitLab Runner executes.
    • Artifact: A collection of files and directories produced by a GitLab CI job that can be downloaded or used by subsequent jobs.
    • Docker Image: A lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, runtime, system tools, system libraries, and settings.
    • Docker Container: A running instance of a Docker image.
    • YAML (YAML Ain’t Markup Language): A human-readable data serialization language commonly used for configuration files, including GitLab’s .gitlab-ci.yml.
    • GitLab Runner: An application that works with the GitLab CI/CD coordinator to execute jobs in a pipeline.
    • Variable: A named storage location that holds a value, used in GitLab CI to configure pipelines and jobs dynamically.
    • Merge Request: A request to merge changes from one branch into another, often triggering a GitLab CI pipeline for validation.
    • Rule: A conditional statement in GitLab CI that determines if and when a job should be executed.
    • AWS (Amazon Web Services): A comprehensive and broadly adopted cloud platform, offering a wide range of services including computing, storage, and databases.
    • AWS S3 (Simple Storage Service): A scalable, high-performance object storage service offered by AWS, often used for storing static website files.
    • AWS CLI (Command Line Interface): A tool that allows you to interact with AWS services using commands in a terminal.
    • IAM (Identity and Access Management): An AWS service that enables you to manage access to AWS services and resources securely.
    • Environment: In GitLab, an environment represents a deployment target (e.g., staging, production) and provides a way to track deployments and related information.
    • Docker Registry: A service that stores and distributes Docker images, allowing users to pull and push images. GitLab has its own Container Registry.
    • Dockerfile: A text document that contains all the commands a user could call on the command line to assemble a Docker image.
    • Elastic Beanstalk: An easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, and IIS.
    • Service (GitLab CI): Allows you to define and run additional containers alongside your main job container, often used to provide dependencies or simulate external services (e.g., Docker in Docker).

    Briefing Document: GitLab CI/CD with AWS Course Review

    This document provides a detailed review of the main themes, important ideas, and facts presented in the excerpts from the GitLab CI/CD with AWS course taught by Valentin. The course focuses on automating software build, test, and deployment pipelines using GitLab CI and deploying to Amazon Web Services (AWS).

    Main Themes

    1. Introduction to GitLab CI and DevOps: The course aims to introduce beginners to the concepts of DevOps and how GitLab CI can be used to implement CI/CD pipelines. Valentin positions himself as a passionate software developer eager to share his knowledge in an accessible way.
    2. Automation of Software Delivery: A central theme is the automation of the software delivery process. The course emphasizes building pipelines that automatically build, containerize, test, and deploy applications to AWS, reducing manual effort and potential errors.
    3. Hands-on Learning: The course adopts a practical, hands-on approach, encouraging learners to build pipelines and deploy software to AWS through exercises and assignments. Valentin emphasizes “learning by doing.”
    4. GitLab CI Fundamentals: The course covers the basics of GitLab CI, including creating projects, understanding the .gitlab-ci.yaml file, defining jobs and stages, and understanding pipeline execution.
    5. Integration with AWS: A significant focus is on integrating GitLab CI with AWS to deploy applications to the cloud. The course introduces key AWS services like S3 and Elastic Beanstalk.
    6. Importance of Testing: The course highlights the role of automated testing in ensuring the quality and reliability of software deployments. It covers adding test jobs to the pipeline and validating deployments.
    7. Managing Environments: The course touches upon the concept of managing different environments (e.g., staging, production) within GitLab and deploying to them in a controlled manner.
    8. Docker and Containerization: Docker is introduced as a key technology for building and deploying applications consistently across different environments. The course covers building Docker images within GitLab CI.
    9. Version Control with Git and GitLab: The course implicitly relies on Git for version control and uses GitLab as the platform for hosting repositories and running CI/CD pipelines.
    10. Best Practices: Throughout the course, Valentin introduces various best practices related to CI/CD, such as using infrastructure as code (Dockerfiles), automating testing, and managing deployment environments.

    Most Important Ideas and Facts

    • GitLab CI as a Tool for DevOps: GitLab CI is presented as a crucial tool for implementing DevOps practices, particularly continuous integration and continuous delivery/deployment.
    • “in this course valentin will teach you how to use gitlab ci to build ci cd pipelines to build and deploy software to aws hello freecodecamp campers and welcome to this course which will introduce you to gitlab ci and devops”
    • CI/CD Pipeline Workflow: The course walks through creating a pipeline that takes a simple website, builds a container, tests it, and deploys it to AWS.
    • “during the course we’ll create a pipeline that takes a simple website builds a container tests it and deploys it to the amazon web services cloud also called aws”
    • Focus on Automation: The core purpose of GitLab CI is to automate the software delivery process.
    • “in other words we’ll be focusing on automation”
    • No Prior Knowledge Required (Mostly): The course is designed for beginners with no prior DevOps or GitLab CI experience, and minimal coding knowledge is needed initially.
    • “I’ve created this course for people new to devops who want to use gitlab to build test and deploy their software don’t worry if none of this makes sense right now if you’re a beginner that is totally fine you don’t need to install any tools or anything else also no coding knowledge is required”
    • Importance of Course Notes: The course notes contain valuable resources, troubleshooting tips, corrections, and updates.
    • “go right now to the video description and open the course notes there you will find important resources and troubleshooting tips if something goes wrong I will also be publishing there any corrections additions and modifications”
    • Learning DevOps Concepts: While focusing on GitLab CI and AWS, the underlying concepts are applicable to other technologies and cloud providers.
    • “well in this course we’ll be focusing on a specific technology and cloud provider what you’re actually learning are the concepts around devops”
    • Using gitlab.com: The course primarily uses the hosted version of GitLab at gitlab.com.
    • “I’ll be using gitlab.com throughout the course if you don’t have a gitlab.com account please go ahead and create one”
    • .gitlab-ci.yml File: The pipeline definition is stored in a file named .gitlab-ci.yml in the root of the GitLab repository. The exact naming and syntax are crucial.
    • “this file name must be called dot gitlab dash ci dot yaml if it’s not exactly this name if you’re writing it on your own this pipeline will not be recognized so the pipeline file will not be recognized by gitlab this is a very common mistake that beginners make”
    • Jobs and Stages: A pipeline consists of jobs organized into stages. Jobs are sets of commands to be executed.
    • “what is a pipeline it is a set of jobs organized in stages right now we have a single job that belongs to the test stage however in the upcoming lectures we’ll be expanding this pipeline”
    • YAML Syntax: GitLab CI pipelines are defined using YAML, a human-readable data serialization language based on key-value pairs, lists (sequences), and indentation for structure. Correct syntax and indentation are critical.
    • “the language that we’re using here to describe our pipeline is called yaml and yaml is essentially a way to represent key value pairs it may look a bit weird in the beginning but after a few examples I’m sure you will get used to it”
    • GitLab Runner: The GitLab server manages pipeline execution, and GitLab Runners are the agents that execute the jobs. The course uses the GitLab.com infrastructure, so managing runners is abstracted away for beginners.
    • “at the minimum the gitlab architecture for working with pipelines contains the gitlab server and at least one gitlab runner the gitlab server manages the execution of the pipeline and its jobs and stores the results”
    • Docker Images for Jobs: Each job in a GitLab CI pipeline typically runs within a Docker container specified by the image keyword. This ensures a consistent execution environment.
    • “in the previous execution we haven’t specified a docker image to use and by default it will download a ruby docker image which for our use case here doesn’t really make a lot of sense so we’re going to use a very simple docker image it’s a linux distribution that will be started with our job”
    • Artifacts: Jobs can produce artifacts, which are files or directories that can be passed between jobs or downloaded at the end of the pipeline.
    • “so we have managed to create this file and we want to pass it to the next job and the way we do that in gitlab is by using artifacts”
    • Testing in the Pipeline: Automated tests are crucial for verifying the correctness of the build and deployment. Test jobs execute commands to validate the application.
    • “tests also play a very particular role when we’re working on this pipeline when we’re building software there are many various levels of testing but essentially tests allow us to make changes to our software or to our build process and to ensure that everything works the same”
    • Variables: Variables can be defined at different levels (global, job, environment) to store configuration values and avoid hardcoding them in the .gitlab-ci.yml file. Environment variables are automatically available to the jobs.
    • “quite often when we have something that we are searching multiple times inside a file or inside the configuration we want to put that into a variable and that way if it’s inside a variable if we need to make changes to that particular value we only have to change it once and then it would be replaced all over”
    • Merge Requests: Merge requests in GitLab are used for code review and collaboration. Pipelines can be configured to run against the branch associated with a merge request.
    • “quite often when we are creating branches we tend to use some naming conventions now totally up to you what you want to use or how your organization uses that quite often you will have something like feature for example forward slash and then a name of a feature sometimes you may reference a ticket or something like that so for example you have your ticket number one two three four add linter”
    • Rules for Job Execution: The rules keyword allows defining conditions under which a job should be executed, such as only running on the main branch.
    • “in order to achieve this we still need to make a few changes to our pipeline now this is how the configuration looks like at this point and the changes that we need to make are here in the deploy to s3 job now how do we exclude this job from this pipeline which is running on the branch and to ensure that it only runs on the main branch gitlab has this feature which is called rules”
    • AWS S3 for Static Websites: AWS S3 can be used to host static websites by storing public files in buckets and serving them over HTTP.
    • “since our website is static and requires no computing power or database we will use aws s3 to store the public files and serve them to the world from there over http”
    • AWS CLI: The AWS Command Line Interface (CLI) is a tool for interacting with AWS services from the command line, which is essential for automating deployments.
    • “for the aws cloud we need a command line interface to be able to upload files for example and fortunately aws has already thought about that and there is an official aws command line interface that we can use”
    • IAM for AWS Credentials: AWS Identity and Access Management (IAM) allows creating users with specific permissions to access AWS services securely. Access keys (ID and secret) are used for programmatic access via the CLI.
    • “the thing is how is aws cli supposed to know who we are I mean if this would work as it is right now we should be able to upload files to any bucket that belong to other people or to delete objects from buckets that belong to other people so we need to tell aws cli who we are and this is what we’re going to do in the upcoming lectures”
    • Environment Variables for AWS Credentials in GitLab CI: AWS access key ID, secret access key, and region can be securely provided to the GitLab CI pipeline as environment variables. Specific variable names (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) are recognized by the AWS CLI.
    • “I’m going to start here writing aws and you will see there already some predefined things that pop up so one of them is aws access key id this has to be written exactly as you see it here if there’s something that’s different about this aws cli will not be able to pick up this variable it will look exactly for this variable name and will automatically pick it up without us doing anything else so it has to be exactly as it is”
    • S3 Bucket Policies for Public Access: To make a static website hosted on S3 publicly accessible, a bucket policy needs to be configured to allow public read access.
    • “in order to make our website that we have uploaded to s3 publicly accessible we need to do something additional and that is to change the bucket policy”
    • Continuous Delivery vs. Continuous Deployment: The course explains the difference, with continuous deployment involving automatic deployment to production upon successful builds and tests, while continuous delivery requires manual approval for production deployment. GitLab offers “manual jobs” to implement continuous delivery.
    • “what is a continuous delivery pipeline this is exactly what I wanted to show you essentially a continuous delivery pipeline is just a pipeline where we don’t automatically deploy to production essentially what we want to do is add here a button and only when we are sure that we really want to make that change to production we can click on that button and make that change so let me show you how to do this”
    • Dockerfiles for Building Docker Images: A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. It’s a declarative way to define a Docker image.
    • “in order to build our own docker image we need to create a file which is called docker file so inside our project i’m going to create a new file and the name of the file has to be exactly like this docker file with a capital d and no extension”
    • Docker in Docker (dind) Service in GitLab CI: To build Docker images within a GitLab CI job, a Docker in Docker service needs to be defined in the .gitlab-ci.yml file.
    • “in order to be able to run this docker command we also have to define the image that we’ll use and the image will be docker just using this docker image will not work we’re gonna get an error the reason for that is the docker architecture is composed of a client and a server essentially what we have here docker this is the client and the client sends some instructions to a server actually to the docker daemon which is the one who builds the image and in order to get access to a daemon inside gitlab ci we need to use a concept and that concept is of services we want to define here a tag services and this contains a list of services that we can start and what we’re starting here is actually service called docker in docker”
    • GitLab Container Registry: GitLab provides a private Docker container registry for storing and managing Docker images associated with projects.
    • “both aws and gitlab offer private docker registries and for this example we’ll be using the docker registry offered by gitlab and you’ll see here on the left hand side packages and registry and we’re going to use here the container registry because docker is all about working with containers”
    • Docker Login and Push: To save a Docker image to a registry, the GitLab CI job needs to log in to the registry using docker login and then push the image using docker push. GitLab provides predefined variables (CI_REGISTRY_USER, CI_REGISTRY_PASSWORD, CI_REGISTRY_IMAGE) for accessing its container registry.
    • “the command to login is docker login relatively easy and we’re going to specify username and password for this service we’re not actually using our username and password that we used to log into gitlab we’ll use some variables again that will give us some temporary user and passwords don’t really care so much about that but essentially this is what we’re going to do so if you’re looking here at the variables that are available we’ll find here ci registry ci registry user ci registry password”
    • AWS Elastic Beanstalk for PaaS: AWS Elastic Beanstalk is a Platform-as-a-Service (PaaS) offering that simplifies deploying and managing web applications. The course introduces deploying Docker containers to Elastic Beanstalk.
    • “aws elastic beanstalk is a platform as a service offering that we can use to easily deploy and manage applications in the aws cloud so if you don’t want to deal with the underlying infrastructure configuring servers load balancers auto scaling groups and so on elastic beanstalk is the perfect option for you”
    • Environment Management in GitLab: GitLab allows defining and managing different deployment environments (e.g., staging, production) and associating pipeline jobs with specific environments. This helps in tracking deployments and managing environment-specific variables.
    • “going inside the project you will find here on the left hand side deployments and inside deployments there’s this optional environment what is an environment staging is an environment production is an environment wherever we’re deploying something that is an environment and it really makes sense to have these environments defined somewhere and to work with the concept of environment instead of fiddling around with so many variables that’s really not the best way on how to do that”
    • Extending Job Configurations (.): GitLab CI allows defining reusable job configurations using a dot prefix (.), which can then be extended by other jobs using the extends keyword, reducing duplication in the pipeline definition (see the sketch after this list).
    • “if looking here at the deploy staging and we’re looking at deploy production well essentially at this point because we have used all these variables these jobs are almost identical the only thing that is different is the different stage different job name and we’re specifying the environment but the rest of the configuration is identical and yes the answer is yes we can simplify this even more so essentially we can reuse some job configurations so i can go ahead here and i’m going to copy this and i’m going to go here and i’m going to define essentially a new job i’m just going to call it deploy this will be a special kind of job it will be a job that has a dot in front of it and as you remember we have used the dot notation to disable jobs by having this we can essentially have a configuration here we don’t care about the stage right we don’t care about the environment so this is something that is not in common with the rest it doesn’t even have to be a valid job configuration we have just put here the parts that are really important for us and then in these other jobs we’re going to keep what we need so what we need we need a stage and we need the environment and of course we need the job name and this other part here we can simply write something like extends”
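
    A minimal sketch of this template pattern, assuming an S3 deployment like the one discussed in the course (the stage names, bucket variable, and entrypoint override are illustrative assumptions):

    ```yaml
    stages:
      - deploy_staging
      - deploy_production

    .deploy:                       # hidden template job: the leading dot disables it
      image:
        name: amazon/aws-cli
        entrypoint: [""]           # clear the image entrypoint so the runner can run a shell
      script:
        - aws s3 sync ./build "s3://$AWS_S3_BUCKET" --delete

    deploy_staging:
      stage: deploy_staging
      environment: staging         # environment-scoped variables resolve here
      extends: .deploy             # inherits image and script from the template

    deploy_production:
      stage: deploy_production
      environment: production
      extends: .deploy
    ```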

    Conclusion

    The excerpts indicate that this GitLab CI/CD with AWS course provides a comprehensive introduction to automating software delivery. It covers essential concepts of DevOps, GitLab CI, Docker, and AWS, emphasizing hands-on learning and practical application. The course progresses from basic pipeline creation to more advanced topics like environment management, Docker containerization, and deployment to AWS services like S3 and Elastic Beanstalk. It highlights best practices for building robust and efficient CI/CD pipelines.

    GitLab CI/CD Pipelines: An Overview

    1. What is GitLab CI and why is it useful?

    GitLab CI (Continuous Integration) is a tool within GitLab that allows users to automate the building, testing, and deployment of their software through CI/CD (Continuous Integration and Continuous Delivery/Deployment) pipelines. It’s useful because it automates repetitive tasks, ensuring code is frequently tested and deployed, leading to faster development cycles, fewer errors, and more reliable software releases.

    2. What are the basic components of a GitLab CI/CD pipeline?

    A GitLab CI/CD pipeline is defined in a .gitlab-ci.yml file and consists of:

    • Jobs: These are the fundamental units of work in a pipeline, representing a set of commands to be executed (e.g., building code, running tests, deploying). Each job runs in an isolated environment.
    • Stages: Jobs are organized into stages, which define the order of execution. Jobs within the same stage run in parallel, while stages run sequentially. Common stages include build, test, and deploy.
    • Scripts: Each job contains a script section where the actual commands to be executed are defined. These are typically shell commands.
    • Images: Jobs run inside Docker containers. The image keyword specifies which Docker image to use for a particular job, providing a consistent and isolated environment.

    3. How do you define a pipeline in GitLab?

    A pipeline is defined using a YAML file named .gitlab-ci.yml at the root of your GitLab repository. This file specifies the stages, jobs within each stage, the script to be executed in each job, and other configurations like the Docker image to use. The structure relies heavily on indentation to define the relationship between different parts of the configuration (e.g., which scripts belong to which job).
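
    For example, a skeletal layout might look like this (names and commands are illustrative); indentation ties each script to its job, and the two test-stage jobs run in parallel once build finishes:

    ```yaml
    stages:             # top-level key: stages execute in this order
      - build
      - test

    build_website:      # a job; everything indented beneath belongs to it
      stage: build
      script:
        - echo "Build step"

    unit_tests:         # these two jobs share the test stage,
      stage: test       # so they run in parallel
      script:
        - echo "Running unit tests"

    lint:
      stage: test
      script:
        - echo "Running the linter"
    ```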

    4. What is YAML and why is it used in GitLab CI?

    YAML (YAML Ain’t Markup Language) is a human-readable data serialization language. It is used in GitLab CI to define the pipeline configuration in the .gitlab-ci.yml file. YAML’s structure, based on indentation and key-value pairs, makes it relatively easy to read and write pipeline definitions, although proper syntax and indentation are crucial to avoid errors.

    5. How does GitLab CI integrate with Docker and AWS in this course?

    GitLab CI uses Docker to provide isolated and consistent execution environments for pipeline jobs. Each job runs inside a Docker container specified by the image keyword. This ensures that the build and test processes are reproducible. The course focuses on deploying a simple website to Amazon Web Services (AWS) S3 (Simple Storage Service) using the AWS Command Line Interface (CLI) within a GitLab CI pipeline. This involves using an AWS CLI Docker image and configuring credentials to interact with AWS services.

    6. How can you manage environment-specific configurations and credentials in GitLab CI?

    GitLab provides CI/CD variables that can be defined at the project level (Settings > CI/CD > Variables). These variables can store configuration values, API keys, and other sensitive information. Variables can be marked as “protected” (available only on protected branches) and “masked” (their values are hidden in job logs). Additionally, variables can be scoped to specific environments (e.g., staging, production), allowing different values for the same variable name based on the deployment environment. AWS credentials (access key ID and secret access key) are securely stored as GitLab CI/CD variables to allow the pipeline to interact with AWS.
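
    As a rough sketch of how such variables are consumed (this assumes the AWS_* credentials were created as masked variables under Settings > CI/CD > Variables; the job name and the sts check are illustrative):

    ```yaml
    variables:                     # a global variable defined in the file itself
      APP_VERSION: "1.0"

    check_credentials:
      stage: test
      image:
        name: amazon/aws-cli
        entrypoint: [""]           # clear the entrypoint so the runner can run a shell
      script:
        # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION are NOT
        # written here: the AWS CLI reads them from the environment, where GitLab
        # injects them from the project's CI/CD variables.
        - aws sts get-caller-identity        # confirms the credentials are picked up
        - echo "Pipeline configured for version $APP_VERSION"
    ```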

    7. What is the difference between Continuous Integration (CI), Continuous Delivery, and Continuous Deployment?

    • Continuous Integration (CI): Focuses on frequently merging code changes from multiple developers into a shared repository, followed by automated building and testing. The goal is to detect integration issues early and ensure the codebase is always in a working state.
    • Continuous Delivery: Extends CI by automating the process of preparing code for release to production. This may include additional testing, configuration management, and manual approval steps. The code is always in a deployable state, but the actual deployment is a manual decision.
    • Continuous Deployment: Takes Continuous Delivery a step further by automatically deploying every change that passes all stages of the pipeline directly to production without manual intervention. The course demonstrates Continuous Delivery (with staging and production environments) and sets up a mechanism for manual deployment to production in its later stages.

    8. How can you create a more sophisticated CI/CD pipeline with branching strategies and manual deployment?

    To create a more sophisticated pipeline:

    • Branching Strategies: Use feature branches for development, merging them into the main branch upon completion and review. The pipeline can be configured to run different sets of jobs based on the branch (e.g., run all tests on feature branches, but only deploy from the main branch).
    • Rules: GitLab CI allows defining rules for jobs to control when they are executed based on conditions like the branch name (if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH); see the sketch after this list.
    • Manual Deployment: You can configure a job with when: manual to require a manual trigger in the GitLab UI before it is executed. This is useful for production deployments, allowing for human oversight before releasing changes.
    • Environments: Define and utilize GitLab Environments to track deployments to different stages (e.g., staging, production), associate variables and URLs with these environments, and monitor the health and status of deployments.
    • Job Templates (.job_name): Reuse job configurations by defining a template job (prefixed with a .) and then extending it in other jobs using the extends keyword, reducing duplication and improving maintainability.
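
    For instance, a rule restricting a deployment job to the default branch might look like this (the job name and bucket variable are illustrative):

    ```yaml
    deploy_to_s3:
      stage: deploy
      image:
        name: amazon/aws-cli
        entrypoint: [""]
      rules:
        - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH   # run only on the default branch
      script:
        - aws s3 sync ./build "s3://$AWS_S3_BUCKET" --delete
    ```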

    GitLab CI/CD Pipelines for AWS Deployment

    GitLab CI pipelines are a powerful tool within GitLab that allows you to automate the processes of building, testing, and deploying your software. This automation is central to the practices of Continuous Integration (CI) and Continuous Delivery/Deployment (CD). Valentin introduces this course to teach how to use GitLab CI to build CI/CD pipelines for deploying software to AWS.

    Core Concepts of GitLab CI Pipelines:

    • .gitlab-ci.yml File: The entire pipeline is defined in a YAML file named .gitlab-ci.yml which must reside in the root of your project repository. The pipeline will not be recognized by GitLab if this file is not named exactly like this. This file contains the configuration for all the stages and jobs in your pipeline.
    • Jobs: A job is the fundamental building block of a pipeline and represents a set of commands that you want to execute. For example, a job might compile your code, run tests, or deploy your application. Jobs are defined with a name and contain a script section specifying the commands to be executed.
    • Stages: Stages define the order in which jobs are executed. Jobs within the same stage can run in parallel, while stages run sequentially. You can define custom stages or use predefined ones like build, test, and deploy. If no stage is specified for a job, it automatically belongs to the test stage by default.
    • Scripts: The script section within a job lists the commands that will be executed by a GitLab Runner. These are often Linux commands, and the course will explain the commands used.
    • Docker Images: GitLab CI jobs typically run inside Docker containers. You specify the Docker image to be used for a job, and the GitLab Runner will download and start a container from that image. This provides an isolated and consistent environment for your jobs. The container is destroyed once the job is finished.
    • GitLab Runner: The GitLab Runner is the agent that executes the jobs in your pipeline. It retrieves job instructions from the GitLab server, sets up the environment (including downloading the Docker image and getting project files), runs the commands in the script, and reports the results back to the server. GitLab.com provides shared runners that can be used by any project.
    • Artifacts: Artifacts are files or directories produced by a job that you want to save and potentially use in later jobs. You define artifacts using the artifacts keyword in a job’s configuration and specify the paths to the files or folders you want to save. Artifacts are uploaded to the GitLab server after a job completes successfully and can be downloaded by subsequent jobs.
    • Variables: You can define variables in your .gitlab-ci.yml file or in the GitLab project settings under CI/CD > Variables. Variables allow you to configure your pipeline dynamically, such as specifying AWS region, bucket names, or application URLs. Variables can also be scoped to specific environments.
    • Environments: Environments represent deployment targets, such as staging or production. You can associate jobs with specific environments in your pipeline configuration. GitLab provides a section under Deployments > Environments to track deployments and environment URLs.
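
    A short sketch of a job tied to an environment (the URL is a hypothetical placeholder; once an environment is defined, GitLab exposes its URL to the job as $CI_ENVIRONMENT_URL):

    ```yaml
    deploy_production:
      stage: deploy
      image:
        name: amazon/aws-cli
        entrypoint: [""]
      environment:
        name: production
        url: http://my-bucket.s3-website-us-east-1.amazonaws.com   # hypothetical URL
      script:
        - aws s3 sync ./build "s3://$AWS_S3_BUCKET" --delete
        - echo "Deployed to $CI_ENVIRONMENT_URL"
    ```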

    The Workflow of a GitLab CI Pipeline:

    Every time you push code to your GitLab repository, GitLab CI will automatically look for the .gitlab-ci.yml file and, if found, will trigger a new pipeline execution. The pipeline will proceed through the defined stages, and the jobs within each stage will be executed according to their configuration. You can monitor the progress and logs of your pipeline in the GitLab interface.

    Continuous Integration and GitLab CI:

    GitLab CI is a key enabler of Continuous Integration (CI). CI is a development practice where developers frequently integrate their code changes into a shared repository, and these changes are automatically built and tested. GitLab CI pipelines automate this integration and verification process, ensuring that any integration issues are detected early. Working with branches and merge requests is a common CI workflow. By protecting the main branch and requiring changes to go through merge requests with successful pipeline runs, you can ensure the stability of your main codebase.

    Continuous Delivery and Continuous Deployment with GitLab CI:

    GitLab CI also supports Continuous Delivery (CD) and Continuous Deployment.

    • Continuous Delivery involves automating the release process so that your software can be deployed to a production-like environment at any time. However, the final decision to deploy to production is typically a manual step. You can implement this in GitLab CI by having a deploy job that requires manual intervention using the when: manual condition.
    • Continuous Deployment takes this a step further by automatically deploying every code change that passes all stages of your pipeline directly to the production environment without manual intervention.

    The course covers deploying a static website to AWS S3 and deploying a Dockerized application to AWS Elastic Beanstalk, demonstrating how GitLab CI can be used for both continuous delivery and deployment scenarios.
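
    A minimal continuous-delivery sketch along these lines (stage names and echo placeholders are illustrative):

    ```yaml
    stages:
      - deploy_staging
      - deploy_production

    deploy_staging:
      stage: deploy_staging
      environment: staging
      script:
        - echo "Deployed to staging automatically"

    deploy_production:
      stage: deploy_production
      environment: production
      when: manual               # pipeline pauses; someone presses "play" in the UI
      script:
        - echo "Deployed to production after manual approval"
    ```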

    Important Considerations and Best Practices:

    • YAML Syntax: Pay close attention to the YAML syntax in your .gitlab-ci.yml file. Incorrect indentation, missing colons, or incorrect spacing are common causes of pipeline failures. Enabling the rendering of whitespace characters in the Web IDE preferences is recommended to help identify these issues.
    • Job Dependencies: Consider the dependencies between your jobs when structuring your pipeline into stages. Jobs that depend on the output of previous jobs should be in later stages, and you might need to use artifacts to pass data between them.
    • Failing Fast: It’s a good practice to order your pipeline stages so that fast jobs that are most likely to catch errors (such as linters and unit tests) run before longer or more complex jobs. This provides faster feedback on potential issues.
    • Pipeline Optimization: Optimizing pipeline execution time is important. You can explore running independent jobs in parallel within a stage and restructuring jobs to avoid unnecessary steps or waiting times.
    • Environment Management: GitLab’s environment feature helps you manage different deployment environments and associate variables and deployments with them. This simplifies managing configurations for staging and production environments.

    By understanding and effectively using GitLab CI pipelines, you can significantly streamline your software development lifecycle, improve the quality of your code, and automate the delivery of your applications.

    GitLab CI/CD YAML Configuration Essentials

    YAML configuration is fundamental to defining GitLab CI/CD pipelines. The entire configuration of a pipeline is written in a YAML file named .gitlab-ci.yml located at the root of your project repository. If the file is not named exactly this, GitLab will not recognize the pipeline.

    Here are the key aspects of YAML configuration in the context of GitLab CI/CD, drawing from the sources:

    • Key-Value Pairs (Mappings): YAML at its core represents data as key-value pairs. In .gitlab-ci.yml, you define jobs, stages, and their properties using this structure. A colon (:) is used to separate the key from the value, and it is crucial to have a space after the colon. For example:
    • name: john
    • age: 23
    • This is also referred to as a mapping, where a key is associated with a value. The order of these key-value pairs generally does not matter.
    • Lists (Sequences): YAML allows you to define lists of items, which are called sequences. Each item in a list is written on a new line and must start with a dash (-) followed by a space. For example, defining multiple commands in a job’s script:
    • script:
    • - echo "Building a laptop"
    • - mkdir build
    • - touch build/computer.txt
    • Indentation: Indentation is crucial in YAML for defining scope and structure. It allows you to understand which values belong to which keys. For instance, the script commands are indented under the build_laptop job definition. Typically, two or four spaces are used for indentation, and the GitLab Web IDE often defaults to four spaces. Inconsistent or incorrect indentation is a common source of errors.
    • Reserved Keywords: When configuring GitLab CI/CD pipelines, you use specific reserved keywords that GitLab understands. Examples include stage:, image:, script:, variables:, rules:, extends:, artifacts:, and environment:. You cannot arbitrarily rename these keywords; they must be used as defined by GitLab (job names, by contrast, are yours to choose).
    • Variables: YAML in .gitlab-ci.yml can include variables. These allow you to make your pipeline configuration more dynamic and configurable. Variables can be defined directly in the .gitlab-ci.yml file or in the GitLab project settings. In the YAML file, you typically access variables using a dollar sign ($) followed by the variable name (e.g., $AWS_S3_BUCKET). Depending on the variable’s content, you might need to enclose it in quotes.
    • Stages: Pipelines are organized into stages, which define the sequence of job execution. You define the order of stages using the stages: keyword at the top level of the .gitlab-ci.yml file. Jobs are then assigned to specific stages using the stage: keyword under the job definition.
    • Rules: The rules: keyword allows you to define conditions for when a job should be executed. This often involves checking predefined CI/CD variables like CI_COMMIT_REF_NAME (the current branch or tag name) and CI_DEFAULT_BRANCH (the default branch name) using an if: condition.
    • Extending Job Configurations: YAML allows you to reuse job configurations using the extends: keyword. You can define a template job (often with a name starting with a dot, like .deploy) containing common configurations, and then other jobs can extend from this template, overriding or adding specific configurations as needed.
    • Common Mistakes: Beginners often make mistakes related to the YAML syntax, such as:
    • Incorrect or inconsistent indentation.
    • Forgetting the space after the colon in key-value pairs.
    • Forgetting the space after the dash in list items.
    • Using an incorrect file name for the pipeline configuration (it must be exactly .gitlab-ci.yml).
    • Missing colons in key-value pairs.

    Understanding these YAML basics is crucial for effectively configuring GitLab CI/CD pipelines to automate your software development processes. While the syntax might seem a bit peculiar at first, with practice you will become more comfortable writing and debugging .gitlab-ci.yml files; a compact annotated example follows.
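
    For reference, here is a compact, valid snippet with the spacing rules annotated (the job itself is a toy example):

    ```yaml
    build_laptop:            # a key introducing a job
      stage: build           # space after the colon is mandatory ("stage:build" breaks)
      script:
        - echo "first step"  # space after the dash in every sequence item
        - echo "second step" # every item aligned at the same indentation
    ```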

    GitLab CI/CD: Understanding and Using Docker Images

    Docker images are a fundamental concept in GitLab CI/CD, as they define the isolated and reproducible environment in which your pipeline jobs are executed. Here’s a comprehensive discussion of Docker images based on the sources:

    • Job Execution Environment: GitLab CI jobs run inside Docker containers. You specify the Docker image that should be used for each job in your pipeline configuration file, .gitlab-ci.yml, using the image keyword.
    • Image Retrieval: When a job needs to be executed, the GitLab Runner will retrieve the specified Docker image from a container registry. The most common public registry is Docker Hub. GitLab also offers a private Container Registry associated with each project.
    • Specifying Images and Tags:
    • It is crucial to specify a version or tag for your Docker images instead of just using the latest version (e.g., node or alpine). Relying on the latest tag can lead to inconsistencies and potential pipeline failures if the upstream image introduces breaking changes.
    • You can specify a tag by appending a colon (:) followed by the tag name to the image name (e.g., node:16, nginx:1.21.6-alpine).
    • Using official Docker images from verified publishers on Docker Hub is generally recommended when available.
    • Base Images: When creating your own Docker images, you typically start with a base image that provides a foundational operating system and necessary tools. For example, when Dockerizing a Node.js application, you might start with an official Node.js image.
    • Image Size and Efficiency:
    • Consider using smaller Docker images, such as those based on Alpine Linux (e.g., node:16-alpine, nginx:1.21.6-alpine), as they can significantly reduce the time it takes for the runner to download the image, thus speeding up your pipeline execution. Larger images contain more tools and dependencies, which might not all be necessary for your specific job.
    • Isolation and Cleanliness: Each job in a GitLab CI pipeline runs in a new and isolated Docker container. This ensures that jobs do not interfere with each other’s dependencies or state. Once a job is finished, the Docker container is destroyed, providing a clean environment for the next job.
    • Accessing Project Files: During the execution of a job, the files from your project’s Git repository are automatically made available within the Docker container.
    • Persistence and Artifacts: Any files or changes created within the Docker container during a job’s execution are not automatically persisted back to the Git repository. If you need to use files created in one job in subsequent jobs, you must explicitly define them as artifacts in the .gitlab-ci.yml configuration.
    • Building Docker Images within GitLab CI:
    • You can build your own Docker images as part of your GitLab CI pipeline using the docker build command. This requires using a Docker image that has the Docker client installed (e.g., the docker image itself).
    • To enable docker build within a GitLab CI job, you need to utilize the Docker-in-Docker (DinD) service. You define services in your .gitlab-ci.yml to start a Docker daemon alongside your main job container.
    • When building Docker images, you should tag them appropriately using the -t (or --tag) flag. It’s common to use CI/CD environment variables like $CI_REGISTRY_IMAGE to tag images with your project’s container registry URL and a specific tag (e.g., latest or the commit/build version).
    • Pushing Docker Images to a Registry:
    • After building a Docker image, you can push it to a container registry (like the GitLab Container Registry) using the docker push command. The --all-tags option can be used to push all tagged images.
    • Pushing to a private registry requires authentication using docker login. You typically use the $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD predefined CI/CD variables for this purpose, along with the $CI_REGISTRY URL. It is recommended to pass the password securely using echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY" to avoid exposing the password in the logs (see the sketch after this list).
    • Testing Docker Images in GitLab CI:
    • You can test your built Docker images within the pipeline by running them as services using the services keyword in .gitlab-ci.yml. This allows you to define an alias (a friendly network name) for your running container, making it accessible from your test job.
    • Once the container is running as a service, you can use tools like curl in your test script to send requests to the application running inside the container and verify its behavior.
    • Docker Images and AWS Elastic Beanstalk: AWS Elastic Beanstalk supports deploying applications packaged as Docker containers. You typically provide a Dockerrun.aws.json file that specifies the Docker image to be pulled from a registry (like the GitLab Container Registry) and run by Elastic Beanstalk.
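
    Putting the build, login, and push pieces together, a sketch of such a job (the docker:20.10 image and service tags are plausible pinned versions, not prescribed by the course; the runner must support Docker-in-Docker):

    ```yaml
    build_and_push:
      stage: build
      image: docker:20.10              # provides the docker client
      services:
        - docker:20.10-dind            # Docker-in-Docker daemon the client talks to
      script:
        - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
        - docker build -t "$CI_REGISTRY_IMAGE:latest" .   # -t tags the image for the project registry
        - docker push --all-tags "$CI_REGISTRY_IMAGE"
    ```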

    In summary, Docker images provide the foundation for isolated, consistent, and automated execution of jobs within GitLab CI/CD pipelines. Effectively managing Docker images, including specifying versions, using smaller images, building and pushing your own images, and testing them within the pipeline, is crucial for efficient and reliable CI/CD workflows.

    GitLab CI/CD for AWS Deployment: S3 and Elastic Beanstalk

    Based on the sources, AWS deployment within GitLab CI/CD pipelines is primarily demonstrated through two key methods: deploying static websites to AWS S3 and deploying Docker containers to AWS Elastic Beanstalk.

    Here’s a breakdown of the concepts and processes involved in AWS deployment as discussed in the sources:

    • Interacting with AWS via AWS CLI: The sources heavily emphasize the use of the AWS Command Line Interface (CLI) within GitLab CI/CD jobs to interact with various AWS services. This allows for the automation of deployment tasks. To use the AWS CLI, you typically use a Docker image that has it pre-installed, such as amazon/aws-cli.
    • Authentication and Authorization: To allow GitLab CI/CD pipelines to interact with your AWS account, you need to configure AWS credentials. This is typically done by setting the following as secret CI/CD variables in your GitLab project settings:
    • AWS_ACCESS_KEY_ID: Your AWS access key.
    • AWS_SECRET_ACCESS_KEY: Your AWS secret access key.
    • AWS_DEFAULT_REGION: The AWS region where you want to deploy your resources (e.g., us-east-1). These variables are automatically recognized by the AWS CLI within the pipeline environment. It’s crucial to mask the AWS_SECRET_ACCESS_KEY variable for security. Additionally, the IAM user or role associated with these credentials needs to have the necessary permissions to perform the required actions on AWS services (e.g., AmazonS3FullAccess for S3 and administrator access for Elastic Beanstalk for the examples shown).
    • Deploying Static Websites to AWS S3:
    • Creating an S3 Bucket: The first step is to create an S3 bucket in your AWS account to store your website files. Bucket names must be globally unique.
    • Making the Bucket Public: To serve a static website, you need to disable “Block All Public Access” for the S3 bucket.
    • Enabling Static Website Hosting: You need to enable the static website hosting feature for your S3 bucket and specify the index document (e.g., index.html) and optionally an error document. This provides a public URL for your website.
    • Setting a Bucket Policy: You need to create an S3 bucket policy to grant public read (s3:GetObject) permission to the objects within the bucket.
    • Uploading Files: Within your GitLab CI/CD pipeline, you use the aws s3 cp command to copy individual files or the aws s3 sync command to synchronize an entire directory (like your build output) to the S3 bucket. The --delete flag with aws s3 sync ensures that files removed from your source are also removed from the S3 bucket (see the combined sketch after this list).
    • Testing: After deployment, you can test if the website is accessible via the S3-provided URL.
    • Deploying Docker Containers to AWS Elastic Beanstalk:
    • Understanding Elastic Beanstalk: AWS Elastic Beanstalk is a service that simplifies the deployment and management of applications in the AWS Cloud, handling the underlying infrastructure (like EC2 instances) for you. It supports various platforms, including Docker.
    • Creating an Elastic Beanstalk Application and Environment: You create an application in Elastic Beanstalk, which acts as a container for one or more environments. An environment represents your running application.
    • Dockerrun.aws.json: To deploy a Docker container, you typically provide a Dockerrun.aws.json file in your source bundle. This file specifies the Docker image to be used (including the registry and tag); an example file is sketched after this list. For private registries like the GitLab Container Registry, it also requires authentication information.
    • Automating Deployment with AWS CLI: In your GitLab CI/CD pipeline, you use the AWS CLI to automate the deployment process to Elastic Beanstalk (a complete job is sketched after this list). This involves two main steps:
    • Creating an Application Version: The aws elasticbeanstalk create-application-version command is used to create a new version of your application in Elastic Beanstalk, referencing the Dockerrun.aws.json file (which is typically uploaded to S3 first). You need to specify the application name, version label (often derived from CI/CD variables like $APP_VERSION), and the location of the source bundle in S3.
    • Updating the Environment: The aws elasticbeanstalk update-environment command is then used to deploy the newly created application version to your Elastic Beanstalk environment. You need to specify the application name, environment name (e.g., using an $APP_ENV_NAME variable), and the version label to deploy.
    • Authenticating with Private Registries: When using Docker images from a private registry such as the GitLab Container Registry, you need to provide authentication credentials to Elastic Beanstalk. This is often done by including an authentication file (e.g., auth.json) in your source bundle, containing a GitLab deploy token encoded in Base64; the Dockerrun.aws.json file then references this authentication file. You can generate a deploy token in GitLab with read_repository and read_registry permissions. The process in the GitLab CI/CD pipeline (sketched after this list) involves:
    • Creating a CI/CD variable (e.g., $GITLAB_DEPLOY_TOKEN) containing username:token.
    • Encoding this token in Base64 using echo "$GITLAB_DEPLOY_TOKEN" | tr -d '\n' | base64.
    • Creating the auth.json file and substituting its placeholder with this Base64-encoded token.
    • Uploading both Dockerrun.aws.json and auth.json to S3.
    • Waiting for Environment Updates: After triggering an environment update, it takes some time for AWS to deploy the new version. You can use the aws elasticbeanstalk wait environment-updated command in your pipeline to wait until the environment has finished updating before proceeding with further steps (like testing).
    • Testing Deployed Applications: After the Elastic Beanstalk environment is updated, you can use tools like curl to send HTTP requests to the application’s URL (provided by Elastic Beanstalk and accessible via the $CI_ENVIRONMENT_URL CI/CD variable when an environment is defined in GitLab) and verify if the deployment was successful.
    • Managing Environments in GitLab: GitLab provides a feature called Environments that allows you to define and track different deployment environments like staging and production. You can associate CI/CD variables with specific environments. This helps in managing configurations for different deployment targets. The $CI_ENVIRONMENT_URL predefined variable provides the URL of the currently active environment.
    • Continuous Delivery (CD) vs. Continuous Deployment: The sources touch upon the difference between continuous delivery (where deployments to production require manual approval) and continuous deployment (where every successful commit on the main branch is automatically deployed to production). You can implement continuous delivery by using the when: manual keyword for the production deployment job in your GitLab CI/CD pipeline, as shown in the Elastic Beanstalk sketch below.
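
    The following minimal sketch makes the S3 steps above concrete. The bucket name my-website-bucket and the build folder are illustrative assumptions, not names from the sources; substitute your own values. First, a bucket policy granting public read access to all objects:

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::my-website-bucket/*"
        }
      ]
    }
    ```

    And a deployment job using the amazon/aws-cli image; the AWS credentials are picked up automatically from the CI/CD variables described earlier:

    ```yaml
    deploy_s3:
      stage: deploy
      image:
        name: amazon/aws-cli
        entrypoint: [""]   # clear the image's default 'aws' entrypoint so GitLab can run the script
      script:
        # mirror the build output to the bucket; --delete removes files no longer in the source
        - aws s3 sync build s3://my-website-bucket --delete
    ```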
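
    For Elastic Beanstalk, a hedged sketch of the deployment job follows; $APP_NAME, $APP_ENV_NAME, $APP_VERSION, and the bucket my-deploy-bucket are illustrative variables and values, not fixed names from the sources:

    ```yaml
    deploy_production:
      stage: deploy
      image:
        name: amazon/aws-cli
        entrypoint: [""]
      when: manual   # continuous delivery: require a manual approval before deploying
      script:
        # the source bundle (Dockerrun.aws.json) must be in S3 before a version can reference it
        - aws s3 cp Dockerrun.aws.json s3://my-deploy-bucket/Dockerrun.aws.json
        - aws elasticbeanstalk create-application-version --application-name "$APP_NAME" --version-label "$APP_VERSION" --source-bundle S3Bucket=my-deploy-bucket,S3Key=Dockerrun.aws.json
        - aws elasticbeanstalk update-environment --application-name "$APP_NAME" --environment-name "$APP_ENV_NAME" --version-label "$APP_VERSION"
        # block until AWS reports the environment as updated before any follow-up testing jobs
        - aws elasticbeanstalk wait environment-updated --application-name "$APP_NAME" --environment-names "$APP_ENV_NAME" --version-label "$APP_VERSION"
    ```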
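
    A sketch of the two files involved in private-registry authentication, with a placeholder registry path and bucket name; the structure follows the Dockerrun version 1 format for a single container:

    ```json
    {
      "AWSEBDockerrunVersion": "1",
      "Authentication": {
        "Bucket": "my-deploy-bucket",
        "Key": "auth.json"
      },
      "Image": {
        "Name": "registry.gitlab.com/my-group/my-project:latest",
        "Update": "true"
      },
      "Ports": [
        { "ContainerPort": 80 }
      ]
    }
    ```

    The auth.json file uses the Docker configuration format, with TOKEN standing in for the Base64-encoded username:token pair generated in the pipeline:

    ```json
    {
      "auths": {
        "registry.gitlab.com": {
          "auth": "TOKEN"
        }
      }
    }
    ```

    In the pipeline, the placeholder can be filled in with, for example, sed -i "s|TOKEN|$DEPLOY_TOKEN|" auth.json, where $DEPLOY_TOKEN is a hypothetical variable holding the encoded value; the | delimiter avoids clashes with the / characters that Base64 output may contain.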

    In summary, the sources illustrate a comprehensive approach to deploying applications to AWS using GitLab CI/CD, emphasizing automation through the AWS CLI, secure management of AWS credentials and deploy tokens as CI/CD variables, and the utilization of services like S3 for static websites and Elastic Beanstalk for more complex containerized applications. The concept of managing different environments and the choice between continuous delivery and continuous deployment are also highlighted.

    Understanding Continuous Integration (CI)

    Based on the sources, Continuous Integration (CI) is a practice that is often considered the foundational first step when adopting DevOps. It addresses the challenges that arise when multiple developers work on the same codebase.

    Here’s a breakdown of key aspects of Continuous Integration as described in the sources:

    • Frequent Code Integration: At its core, CI involves developers integrating their code changes with the code created by other team members on a frequent basis. This means that every time a developer makes changes, their code is intended to be merged and combined with the shared repository.
    • Automated Testing and Verification: A crucial component of CI is the automatic testing and verification of code changes upon each integration. This typically involves running various types of tests, such as:
    • Unit tests: To ensure individual components of the software function correctly (yarn test is mentioned as a command for running unit tests).
    • Linting: Static code analysis to identify potential errors and stylistic issues (yarn lint is mentioned).
    • Basic integration tests: To verify if essential files are present after a build process (e.g., checking for index.html).
    • Comprehensive integration tests: Verifying the functionality of the built application (e.g., using curl to check for specific content on a running server).
    • Continuous Integration as an Ongoing Process: The term “continuous” emphasizes that integration should happen continuously, as changes occur, rather than in infrequent batches (like weekly or monthly). Waiting for longer periods to integrate increases the likelihood and cost of resolving integration conflicts and issues.
    • GitLab’s Role in CI: The sources highlight GitLab as a tool used to facilitate and automate CI. GitLab CI/CD pipelines are defined using a .gitlab-ci.yml file, which outlines the sequence of jobs and stages involved in building, testing, and integrating code.
    • CI Pipelines: A CI pipeline in GitLab automates the process of taking code changes, building the software, running automated tests, and producing a new version of the product. These pipelines can be triggered automatically upon code commits to a repository branch.
    • Importance of a Stable Main Branch: CI aims to ensure that the main branch (or a similar primary integration branch) remains stable and functional at all times, allowing for the delivery of new software versions whenever needed. Preventing broken code from being merged directly into the main branch is a key objective.
    • Feature Branch Workflow: To support CI and maintain a stable main branch, a feature branch workflow is often employed. This involves:
    • Developers creating dedicated branches for each new feature, bug fix, or change.
    • Committing their changes to these feature branches.
    • Allowing the CI pipeline to run on these branches to automatically test the changes.
    • Using merge requests to propose the integration of the feature branch into the main branch. Merge requests often involve code reviews and require the CI pipeline to pass before merging is allowed (a rules-based example appears after this list).
    • Job Artifacts: GitLab’s concept of artifacts plays a role in CI by allowing the output of one job (e.g., the built application in the build folder) to be saved and made available for subsequent jobs in the pipeline (e.g., testing jobs).
    • Stages in CI Pipelines: CI pipelines are typically organized into stages (e.g., build, test, package) to define the order of execution for different sets of jobs. This helps in structuring the CI process logically.
    • Fail Fast Principle: An important consideration when designing CI pipelines is the “fail fast” principle. Jobs that run quickly and are likely to catch problems (such as linting and unit tests) should be placed early in the pipeline so developers get rapid feedback; a pipeline sketch applying this principle follows this list.
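
    To tie these points together, here is a minimal sketch of a fail-fast CI pipeline for a yarn-based project. It assumes yarn lint and yarn test scripts exist (as mentioned above), runs the cheap checks first, and passes the build folder to later stages as an artifact; job and stage names are illustrative:

    ```yaml
    stages:
      - lint
      - test
      - build

    run_lint:
      stage: lint
      image: node:16-alpine
      script:
        - yarn install
        - yarn lint

    unit_tests:
      stage: test
      image: node:16-alpine
      script:
        - yarn install
        - yarn test

    build_website:
      stage: build
      image: node:16-alpine
      script:
        - yarn install
        - yarn build
      artifacts:
        paths:
          - build   # saved after the job and downloaded automatically by jobs in later stages
    ```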
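
    And a hedged example of using rules to restrict a job to the main branch, built on the predefined variables $CI_COMMIT_REF_NAME and $CI_DEFAULT_BRANCH (the job itself is a placeholder):

    ```yaml
    deploy_production:
      script:
        - echo "deploying to production"
      rules:
        # run this job only for commits on the default (main) branch
        - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
    ```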

    In essence, Continuous Integration is a set of practices that emphasizes frequent code integration, automated testing, and the use of tools like GitLab to ensure that software development teams can work efficiently, maintain a stable codebase, and deliver new features and fixes with confidence.

    DevOps with GitLab CI Course – Build Pipelines and Deploy to AWS

    The Original Text

    in this course valentine will teach you how to use gitlab ci to build ci cd pipelines to build and deploy software to aws hello frequent campers and welcome to this course which will introduce you to gitlab ci and devops my name is valentine i’m a software developer and i like to share my passion for technology with others in a way that is easy to understand when i’m not speaking at a conference or traveling the world i like to share what i know by being active in groups and forums and creating tutorials on youtube or online courses in this course we will understand what gitlab ci is and why we need this tool and start building cicd pipelines during the course we’ll create a pipeline that takes a simple website builds a container tests it and deploys it to the amazon web services cloud also called aws in other words we’ll be focusing on automation i’ve created this course for people new to devops who want to use gitlab to build test and deploy their software don’t worry if none of this makes sense right now if you’re a beginner that is totally fine you don’t need to install any tools or anything else also no coding knowledge is required but if you have some it is great it will help a bit i will explain to you everything you need to know step by step this course focuses on gitlab ci but the course notes are packed with resources i recommend exploring if unfamiliar with a specific topic go right now to the video description and open the course notes there you will find important resources and troubleshooting tips if something goes wrong i will also be publishing there any corrections additions and modifications yes this is a very dynamic industry and things change all the time so if something is not working first check the course notes i am a big fan of learning by doing and you will get hands-on experience building pipelines and deploying software to aws throughout the course i will give you assignments to practice what you have learned well in this course we’ll be focusing on a specific technology and cloud provider what you’re actually learning are the concepts around devops with the skills acquired in this course i’m sure you’ll be able to start using gitlab ci for whatever you need in no time at all this is an action-packed course which i’m sure will keep you busy at least for a few days as always here on freecodecamp please help us make such courses available to you by liking and subscribing also don’t forget to drop a comment below the video if you like this course i invite you to check out and subscribe to my youtube channel link in the video description which is packed with content around devops software development and testing also feel free to connect with me on social media i would really love to hear from you finally i would like to thank those who will support free code camp by clicking that thanks button and making a small donation i hope you’re excited to learn more about gitlab ci aws and devops and with that being said let’s get started i have designed this course to be as easy to follow along as possible you don’t need to install any software on your computer and you should be able to do everything just from a browser i’ll be using gitlab.com throughout the course if you don’t have a gitlab.com account please go ahead and create one by default you will get a free trial with your account which will be downgraded to a free one after 30 days it just takes a few steps to create an account if you don’t want to participate in the free trial that’s totally fine there’s also 
the possibility of skipping the trial altogether gitlab is a platform that offers git repositories where we store code and code pipelines which help us build software projects now that the registration process is completed let’s begin by creating our first project and we’re going to create a blank project and i’m going to call this project my first pipeline i have the option of providing a project subscription which is optional and i can also decide on the project visibility either private which means that only i or people who i explicitly grant access to can view this project or public which means that can be viewed by anyone without any authentication i’m not going to initialize this project with a readme file i’m going to simply go ahead and click create project this gitlab project allows us to store files and to use git to keep track of changes and also to collaborate with others on this project if the concepts around git are not clear check the course notes for a free course on getting started with git for gitlab since for this account i haven’t configured git you will also get here this warning in regards to adding the ssh key this is also covered in the material i have mentioned but for the moment we don’t need this so we can simply discard it the first thing that i like to do is to change a few settings in regards to how this interface looks like so from the user profile i will go to preferences and here from the syntax highlighting theme what i like to do is to select monokai so this is essentially a dark theme and as you probably know we like to use dark themes because light attracts bugs and we definitely don’t want any bugs now leaving the joker side some people like it some people don’t like it i prefer to use a dark theme when writing code but totally agree that depends on everyone’s preference on how to use this there are also some other settings i want you to do right now in the beginning i’m going to scroll here a bit further down and the first thing that i want you to enable is render white space characters in web ide this will show us any white space characters whenever editing files that’s super important okay we have everything that we need to do so i’m gonna go all the way to the bottom click on save changes and go back to gitlab you’ll see here your projects so currently you have only one project i’m going to click on it and currently we have absolutely no code inside this project the first thing that i want to do is to begin to create a file so from this new gitlab project we’re going to use the web ide to create the pipeline definition file i’m going to click here on new file this will open up the gitlab ide so i’m going to create a new file and we are already provided here with a template and this file name must be called dot gitlab dash ci dot yaml if it’s not exactly this name if you’re writing it on your own this pipeline will not be recognized so the pipeline find will not be recognized by gitlab this is a very common mistake that beginners make probably this is why if you click directly on this one you’ll be pretty sure that you don’t name this file like anything else now in this file we’re gonna define the pipelines and essentially gonna write here configuration to create pipelines in gitlab say it’s totally fine if you don’t know what that is right now just want to create a very simple example to make sure that we have everything we need in order to follow along with the rest of the course so what i’m going to do here i’m going to write something like test 
column and then i’m going to go to the next line and you will see here everything is already indented so with this indentation with this four spaces that you can see right now here i’m gonna write script column space and then i’m going to use the echo command to display a text this is going to be the text that we want to display the echo command is used to display a text and we’ll be able to see it later on and this dot gitlab ci that yaml file allows us to describe our pipeline as i said don’t worry if this does not make any sense right now this is just a test to ensure we have everything we need to get started so now what we’re gonna do is to actually commit these changes which means introducing this file inside the gitlab repository if we click again here on the project name we’ll exit this view and you will see here that the pipeline failed so every time we make changes to this project the pipeline will be executed based what we have defined in this yaml file but you will see here right on top there’s this indication that something is wrong and even here if you look inside the pipeline you will get this additional information so what is going on in order to run your pipeline using the gitlab.com infrastructure you need to verify your account now this is not the same as verifying your email address you already did that but you need to go an additional step of verification unfortunately some people decided to take advantage of this free service and have abused it for mining cryptocurrency for the time being you will be asked to verify your account using a credit card now your credit card will not be charged or stored by gitlab and it is used solely for verifying your account just to ensure that you are one of the good guys and i know that you’re one of the good guys and i know that this is a bit annoying in the beginning but this is how things are right now i also know that credit cards are not that widespread in some countries and this may be inconvenient maybe you can ask a friend to help out i hope that gitlab will introduce alternative verification options nevertheless verifying your gitlab.com account and using the gitlab.com infrastructure is the easiest way to follow along with the course so if you can invest five minutes now and get this done it will save you hours later you can use your own infrastructure to run gitlab but it is more complex and from my experience of training thousands of students is that people new to gitlab who use their own infrastructure have issues running their pipelines and waste a lot of time trying to get them to run properly you have been warned but if you want to go this path i’ve added some resources to the course notes which you can find in the video description now in order to get started with the verification process you have to click here on validate account you will be asked to enter your credit card information i hope that the validation has been okay and in order to see if everything is working properly i’m gonna go back to the project and open the web ide i’m gonna click on the get lab ci file to make a change to it and i’m gonna change here the message so this is gonna be now hello world 2 and going to go here commit so essentially making a new change to this file and commit it directly into the main branch and if i’m looking here at the bottom i should be able to see that something is happening so very soon a pipeline will be started and you will see here now pipeline with the specific number is running i can click on it and if you click 
here on this execution you’ll be able to see the job logs and what we’re interested in is firstly seeing this message here hello world and additionally what we’re also interested in seeing here is seeing this text here which says pulling docker image now this is very important not for this job itself but what we are going to do throughout the course we want to make sure that whatever we have here in terms of the execution that we are actually using docker if this is working it’s fantastic you can jump directly into the next lecture otherwise check the course notes for some troubleshooting ideas so what is a pipeline allow me to make an analogy to an assembly line used to manufacture a physical product any product goes through a series of steps let’s take a laptop for example to oversimplify the assembly line would have the following steps we’ll take an empty laptop body case we’ll add the main board and add a keyboard we would do some quality assurance to ensure it turns on and works properly then we would put it in a box and finally ship it to customers and we are not using gitlab to produce physical products we want to build software but producing software has similarities to what i have described before before we can ship anything we go through a series of steps let’s try to build the laptop assembly line in gitlab ci well kind of instead of real components we’ll use a file a folder and some text so let’s begin with the first task take an empty laptop case we’ll put this in a job in gitlab a job is a set of commands we want to execute so now let’s go back to our project and make some changes to it again i’m gonna open the web ide to be able to view the pipeline file now essentially we already have a job here but we’re gonna expand this and make it build a laptop if you’re facing any issues getting the following steps to run make sure to watch the next lesson where i’m going over some of the most common mistakes now we already have a job and this job is called test but probably we should go ahead and rename this to maybe build a laptop or we can also call it simply build laptop now in the script part this is where we can essentially write commands now so far we have used this echo command but the way we written this allows us only to write one command so what i’m going to do here i’m going to go on the next line and i’m going to start here with the dash and this will allow us to essentially write multiple commands one after the other so let’s rename this in something like building a laptop just to give us some information in regards to what we’re trying to do i want you to notice that after this dash i have added a space and you can see that space represented by a dot common mistake that people just getting started with gitlab and this language that you see here which is called yamo is that they don’t add the spaces or they don’t properly indent what they see so make sure that what you have inside your editor looks pretty much the same as what i have here and you’re doing that pretty sure this example will also work for you now as i said the language that we’re using here to describe our pipeline is called yaml and yaml is essentially a way to represent key value pairs it may look a bit weird in the beginning but after a few examples i’m sure you will get used to it now so far our problem doesn’t do anything and inside this job we actually said we want to build this laptop and we also said that we’re not going to use physical components we’re going to use folders we’re going to use files 
we’re going to use some text so let’s begin with the first command which will create a new folder now we want to put this into a folder which is called build so on the next line i’m gonna use a command that will create a folder this command is called make deer and i’m going to call this folder build so now we have a folder and let’s go to the next line and actually create a file so i want my file to be inside this build folder and in order to create this file we’re going to use the touch command now the touch command is generally used for modifying the timestamp of a file essentially you’re touching it you’re modifying it but it also has an interesting behavior which we will use here if the file does not exist it will create an empty file and initially for us that’s perfectly fine so we’re going to create this file inside the build folder which we created one step before and with the forward slash we go inside this build folder we can specify the file name which in this case will be computer.txt now the next question is how do we get some text inside this file and in order to do that we’re going to use a command we have already used before and this is the echo command now echo if we don’t do anything else like we did here it will just print this message and we’ll see it in our build logs but we can also kind of redirect this and send it directly to a file and for that we’re going to use an operator so let me show you what i mean by that i’m going to write here again echo and the first thing that we are going to do is we’re going to add here the main board now let’s see the file itself it’s uh just containing the the laptop and then we’re adding components to it so if you would keep this as it is right now it will just display this information just display mainboard but we actually want to get it inside this file that you see here so i’m simply going to go ahead and copy it and in order to get this text inside there we’re going to use this operator it’s actually greater than greater than and essentially this operator takes the output from one command so echo is one command and it will append it to a specified file so now we have specified here this computer.txt file which is inside the build folder so if you want to take a look and see exactly what is the contents of the file we can do that as well and for that we there are different commands that we can use one option would be to use the cat command and cat stands for concatenate and can be used for creating displaying or modifying the contents of files in this case we’re using this hat command to view the contents of a file and again we have to specify the path to that file so just to make sure i don’t make any mistakes and always go ahead and copy paste the value of the file the name of the file and of course we can also go ahead and add the other steps so which other steps did we had for example we wanted to add a keyboard i’m gonna add this again with a new command and i think that should be it’s a way we have the main board we have the keyboard and of course we can also try this cat command once again at the end and this will give us an idea of how the job itself is working in the previous execution we haven’t specified a docker image to use and by default it will download a ruby docker image which for our use case here doesn’t really make a lot of sense so we’re going to use a very simple docker image it’s a linux distribution that will be started with our job and we can do that by specifying here another keyword and you will see 
here it’s under the job itself which is build laptop but at the same level with the script so i’m going to write here image column and the image name will be alpine so alpine linux is a very lightweight linux distribution what’s most important for us that this distribution has these commands that we’re using so these are pretty standard commands that are available essentially in any linux distribution but having a very small linux distribution will make our job run much faster so let’s go ahead commit these changes and see how the pipeline runs i’m going to commit again to the main branch and if i’m in patient i can click directly here on the pipeline or i can also go directly on the project page and you will only see here the pipeline running pipeline is also available if you go here inside cicd pipelines it will display your list of all the pipelines that are running you can click here on this one which is the id of the pipeline you will see now our pipeline contains a job and this job has been assigned to the stage test by default if no stage is defined the job will use the test stage now it doesn’t really matter how the name of the stage has been called we just want to make sure that the commands that we have executed are actually working properly we’re getting here no errors so the job is running so the job has executed successfully we can click on it take a look at the execution logs see exactly what has happened here and you will see here in the logs the commands that we have executed they’re all visible here you will see here echo building laptop you will see the text being displayed after that we are creating a new folder we’re putting a file inside that folder we’re adding some text to that file then we’re checking the contents of the file to make sure everything is fine you see here the word mainboard being displayed and then we add the keyboard and then again you will see here the contents of the file containing both main board and keyboard so what is a pipeline it is a set of jobs organized in stages right now we have a single job that belongs to the test stage however in the upcoming lectures we’ll be expanding this pipeline it is quite common that when you are just getting started with defining these pipelines that you make some mistakes let me show you like some common mistakes that will lead to invalid configuration which will lead to git lab not running your pipeline i’m going to make this a bit bigger on the screen so that you can easily check and compare it with what you have it’s very important that in some places you have this column so for example here i’m defining the job and in order to add image and scripts as essentially properties of this job it’s important to have here this column if i’m removing that you will see here in the editor when you see these lines here it will indicate that something is wrong most of the time these messages are really not so easy to understand so it’s more important to double check what you have written here to make sure it is exactly as it should be it’s also important that you have some spaces where there are spaces expected so for example here with the commands and whenever you’re writing echo or make deer or touch there needs to be a space between the dash and the command this is why in the beginning i’ve asked you to enable this white spaces so that you can easily see them in your script so if i write something like this this again will make something weird so it will not show here an error because this is actually valid yamo but 
when gitlab will try to execute this pipeline and we’ll look at these commands we’ll think that you’re trying to run a command that starts with dash echo and we’ll say i cannot find this command so for that reason you need a space here to make a difference between this dash here which indicates a list and this command what’s also important is the indentation as you can notice here there are four spaces so everything that is under build laptop is indented with the level if i add something like this here it it will no longer belong to build laptop and will come out as something that’s really weird and most likely it will be invalid so always make sure that you have the right indentation uh two spaces are also fine by default this web ide will use four spaces just make sure you have that indentation there in place in the course notes you will find even more examples of why your pipeline may be failing including some common error codes that you may encounter so definitely check the course notes if you’re still having issues getting this pipeline to run in this lesson we’ll try to understand what yaml is if you already know yaml feel free to skip this as i will not be covering any advanced features the most important reason why you need to know some yaml basics is because you’ll be facing many errors while writing yaml and as i mentioned in the beginning this is normal trust me i also did the same mistakes that you did and sometimes i’m still making mistakes while writing yaml if i’m not paying attention if you’ve already been exposed to formats such as json or xml i’m sure you’ll be able to understand the basics around yaml very easily actually yaml is a superset of json and you can easily convert json to yaml both xml json and yaml are human readable data interchange formats while json is very easy to generate and parse typically yaml is considered easier to read as you will see not so easy to write but we’ll get to that quite often yaml is being used for storing configuration especially if you’re learning devops you probably face yaml a lot and this is exactly why we’re using it in gitlab as well to store a pipeline configuration at its core yaml follows key value storage principles and let me open up here inside the editor a new file and i’m gonna start writing some yaml basics so i’m gonna create here file i’m gonna call it test.yaml and let’s start with a few basics so for example how do we represent a key value pair so for example i have here a name typically would write something like the name is john right so this is something that anyone would understand now in yaml we use a column to separate that and so i’m going to add here name column john and what’s also more important that we have a space after this column so it will be like this and you will see here the color here will also change so now we have a key value pair the key is name and the value is john we also call this a mapping we’re mapping a key with a value of course on a new line we can start adding an additional key value pair for example let’s say here h 23 and what’s important to know is that the order of the properties does not matter so we can define them in any order so if i write first name john and after that h or the other way around that’s actually the same both are defined here in this yamo quite often we need to define lists which in yaml are called sequences and we do that by writing each value on a new line so for example let’s write here some hobbies call them sports youtube and hiking right and what you also 
do is we put them each of them on a new line but each line must start with a dash and a space so we have here a dash and a space a dash and a space dash and a space now we have a list now the way i’ve written this doesn’t really work like that so if we remove everything from here this will be probably a valid list but we cannot simply just combine this properties and now just add this list here in between what’s important to know about yaml is that it uses indentation for scope essentially it allows you to know which value belongs to which key because it has the proper indentation so for example now we know this is not valid here but if you write here for example hobbies as a key then this list here will belong to this key so essentially the mapping will contain a key and this list of values well this is valid we also like to indent this i’m going to select everything and click on tab and this will indent all these values here additionally we can have some nested structures so for example if i’m trying to write here an address so this will be the key and then on a new line i can write additional key value pairs which all belong to the address and you will see here i have the indentation so i can write something like street and on a new line i can go ahead and write a city and again on a new line i’m gonna write here the zip code and essentially by using this indentation we show that street city and zip they all belong to address if we didn’t have this indentation and it would all be a property of something else this is why in this case we’re using here name age hobbies and address these are like properties on the first level and street city and zip they are under address in terms of lists we can also build some more advanced lists so this is a very simple list that we’ve used for hobbies but let’s say for example that we want to add here a new key which is called experience and here uh for the experience we’re gonna create a list and we just wanna say like what is the professional experience of this person for example we can have here job title that could be junior developer and instead of creating here a new item in the list we just go to the same level as with title then we can start writing something like period so it will indicate in which period this person has been a junior developer let’s say from 2000 to 2005. and then we can go ahead and add a new item to the list again with title let’s say now this person is a senior developer and we can also define a different period so this will be since 2005. 
now additionally what we can do is we can take everything that we have here and how about we define here let’s say this is a person right and then we can take every everything that we have here and indent it once and then all these properties that we have here will belong to a person now you can see here that we are still have some mistakes here so we can we need to fix something because this editor will tell us this is a bad indentation and as you learn the indentation is very very important so i’m going to add here to spaces and add here again to spaces and now this indentation will be correct now when we are writing gitlab ci pipelines we don’t really get to choose which data structures to use we have the liberty of designing the pipeline but we need to do it in a way that gitlab will understand it so if we’re looking here at our pipeline we can just decide to use something else so for example instead of image we cannot just decide we’re going to call it docker image or docker or instead of scripts we cannot write here scripts because this is something that gitlab will not understand yes from a yaml perspective this is still valid but from the perspective of gitlab this is something that’s not according to what we agreed essentially in advance when we started writing these pipelines so it’s just important that you understand like what we’re doing here and how this indentation and how this key value pairs are being written but when we’re writing jobs we we decide how this job will be named we decide what exactly will happen inside it some keys will be reserved and simply cannot be renamed you just have to pay attention that what you’re writing here is really important and it has to be exactly at least exactly as the examples that i’m showing you otherwise gitlab will not be able to understand what you’re seeing throughout the course we’ll be using linux and linux commands we have already learned some commands such as echo touch make deer cat and so on we typically use these commands through a command line interface or cli what you see here for example i can go ahead and create here a new folder i can switch inside that folder we’ll see here for example with ls i will be able to list the contents of that folder let’s go ahead and write here touch this touch command and create here file name computer and again with ls i can see which files are inside the current folder so we typically type these commands that you have seen in the pipeline through this command line interface or cli sometimes we call it a console a terminal or a shell while technically speaking they are not the same thing you may notice me and others use them interchangeably a command line interface is the opposite of a graphical interface this is a graphical interface you will be able to see text colors you have buttons that you can click things are happening through your interaction you’re moving your mouse and you’re clicking or something or you’re using your keyboard and something changes on the screen this is a command line interface we will work with the command line interface to write commands computers we interact with have no user interface that we can use anyway automating graphical user interfaces is not as easy and as reliable as simply using commands the command line interface that computers have is called the shell the shell is simply the outer layer of the system the only thing that we can see from outside and interact with so we send these commands to the system and the system will run them this is how we can 
interact with it when using gitlab ci we essentially automate a set of commands which we execute in a particular order while i will explain every command that we’ll be using throughout the course if this is something new there’s absolutely no replacement for trying things on your own please see the resources in the course notes for this lesson on setting up a linux environment on your own computer and which linux commands you should know let’s talk for a minute about the gitlab architecture at the minimum the gitlab architecture for working with pipelines contains the gitlab server and at least one gitlab runner the gitlab server manages the execution of the pipeline and its jobs and stores the results the gitlab server knows what needs to be done but does not do this itself when a job needs to be executed it will find a runner to run the job a runner is a simple program that executes the job a working gitlab setup must have at least one runner but quite often there are more of them to help distribute the load a runner will retrieve a set of instructions from the gitlab server download and start the docker image specified get the files from the project git repository run all the commands specified in the job and report back the result of the execution of the gitlab server once the job has finished the docker container will be destroyed if docker is something new to you check the course notes for a quick introduction what’s important to know is that the git repository of the project will not contain any of the files created during the job execution let’s go back to our project you will see here that inside the git repository there’s no build folder there’s no computer.txt so what exactly is happening let’s go inside one of the jobs and quickly go through the log so that you can understand what is going on so right here on top you’ll have information about the runner that is executing the job and a runner will also have an executor in this case the executor is docker machine and here on line four you will see here which image is being downloaded and you’ll see here pulling docker image alpine because this is what we have specified and then essentially the environment is being prepared and then in the upcoming steps you get the files from the git repository so essentially all the files that you have inside the git repository will also be available here inside the runner so after this point the docker container has been started you have all the project files and then you start executing the commands that you have specified inside the pipeline so in this case we’re creating the folder we’re creating these files we’re putting some text inside there and then at the end we’re not doing anything else so the job succeeds because there are no errors and the container is destroyed so this docker container that has been created in the beginning just maybe a few seconds ago is then destroyed you cannot log into it you cannot see it anymore it doesn’t exist anymore so it has done its job has executed this command that we have specified and then it has been destroyed since every job runs in a container this allows for isolation and flexibility by having our job configuration stored in the yaml file we describe how the environment where the job is running should look like in practice we don’t know or care which machine has actually executed the job also this architecture ensures that we can add or remove runners as needed while the gitlab server is a complex piece of software composed of multiple services 
the gitlab runner has a relatively simple installation and can run on a dedicated server or even on your laptop now right here on top of the job you have seen some information in regards to the runner so the question is where is this job actually running and to be able to understand this we’ll have to go here to the project settings cicd and here we’re going to expand runners and you’ll see here two categories there are specific runners and they are shared runners now for this job we have used shared runners any project can use shared runners within a gitlab installation these shares runners are offered by gitlab.com and are shared between all users now as you’ve seen there are multiple runners here and honestly we don’t really care which of these runners picks up our job because we have defined exactly what our job should do which docker image we want to use which command should be executed so as long as that runner knows how to deal with a docker image and was able to execute these commands then we’re fine with any runner now this is an oversimplified version of the gitlab architecture the main idea i want you to get from this is that we are using docker containers in our jobs and every time for every job the gitlab runner will start a new docker container will execute any commands that we have and then when the job execution is done that docker container will be destroyed so now we have created this laptop and have defined all the steps inside the build laptop job and of course with this cad command we have also visually verified that indeed this computer.txt file contains everything that we expect but we really want to automate this process we don’t want to go inside the job itself and to look and see if everything was successful so let’s expand our pipeline and add a test job we want to make sure that our laptop contains all components so i’m going to go ahead here and create a new job i’m going to call it test laptop and we’re going to use the same image the alpine linux image and we’re going to also define the script and this time we need to find a way to actually check for example if this file has indeed been created so this would be like a very basic test and what we can do here is to use the test command so the test command will allow us to check if this file has been created and this command also has a flag which we’ll gonna write with dash f and then we’ll specify the path to the file so essentially this command will ensure that this file really exists and if it doesn’t exist it will fail the job so let’s go ahead commit this and see how the pipeline looks like if we’re looking at the pipeline we’ll notice something unusual and that is we’re building the laptop and at the same time we’re testing it and essentially what we wanted to do is to have this in two separate stages right we first build and then we test now currently what we did is we assigned by default both of these jobs to the test stage they both are running in parallel but of course these are not the kind of jobs that we can execute in parallel because they depend on one another particularly the test job depends on first the build job completely so i’m not even going to look at why the test build failed because by the way this pipeline looks like doesn’t make a lot of sense and we have to go back to the concept of stages so what we want to do is to have two different stages and we’re going to change our pipeline configuration and define these stages we want to have a build stage and we want to have a test stage in 
gitlab we can go ahead and define another configuration which is called stages and here as a list we can define the stages that we want to have so we can see here we want to have the build stage and we want to have the test stage and in order to specify which job belongs to which stage in the job configuration we will also say to which stage this job belongs so for example we want the build laptop job to belong to the build stage so i’m going to write here stage and the stage will be built the same goes for the test laptop now if you don’t specify a stage here it will automatically belong to the test stage by default but actually you want to make it as explicit as possible so we’re going to write here stage test so we here we have defined the stages we have assigned the build laptop job to the build stage and we have assigned the test laptop job to the test stage so let’s commit these changes and take a look again at the pipeline if we’re looking at the pipeline we’ll be able to see now the two stages that we have build and test so these stages will run one after the other this stage does not start until the build stage is over so first we have to build a laptop and then we can start with a test unfortunately this is still failing so in this case we really have to take a look inside this job and understand what is going on and we’ll see here that the last command that we have executed is this test command here so up to this point everything seems to be working fine this has been the last command that we have executed and somehow this job has failed in the next lecture we’ll try to understand why do pipelines fail so let’s jump into that and continue debugging this problem so let’s try to understand why did this job fail looking here inside the job blocks we’ll be able to see here this error says job failed exit code 1. exit codes are a way to communicate if the execution of a program has been successful or not an exit code 0 will indicate that a program has executed successfully any other exit code which can be a number from 1 to 255 will indicate failure so in this case exit code 1 is not a 0 so it means something has failed this exit code is issued by one of the commands that we have in our script as soon as one of these commands will execute an exit code execution of the job will stop in this case we have only one command so it’s relatively easy to figure out which command has issued this exit code most likely it is the last command that you see here inside the logs so in this case what test is trying us to tell is that it has tested for the existence of this file and couldn’t find it if the file would have been there would have gotten an exit code 0 and the execution would have continued in this case the file is for some reason not there and we have retrieved an exit code 1. 
this tells gitlab that this job has failed and then the entire execution will stop if a job in the pipeline fails by default the entire pipeline will be marked as failed as well let me show you an example let’s go back to our pipeline definition and we already know that the build laptop job works just fine so let’s see here somewhere in the middle we’re going to add a new command and this command will be the exit command and we’re going to exit with a code one so what we should observe is that commands echo make tier and touch are still being executed but for example here where we’re putting the main board we’re using cat and again we’re putting the keyboard inside this file this shouldn’t be executed anymore so now i’ve executed a pipeline again you will see here that a build job failed and because this job failed the rest of the pipeline was not executed so it was interrupted as soon as something has failed it actually doesn’t really make a lot of sense to continue with the rest of the stages if one of the jobs before has failed so let’s go inside it and take a look at the execution logs to see what exactly has happened here and you’ll be able to see as i said we have here the this echo command that’s being executed we have created this directory we have created this file and then we have exit one and then we says here job failed exit code one now we have forced this failure here but i just wanted to demonstrate like where in the execution you can notice that something went wrong and actually going through these commands that are executed looking at your job configuration trying to figure out what has happened what is the last command that did something sometimes inside the logs you will find hints in regards to what went wrong in this case there are not a lot of hints in terms of what went wrong but these are also very simple commands but it’s very important to read the logs from from the beginning as high as possible to understand like which docker image has been used what is happening before are there maybe any any warnings or any hints that something went wrong this case that’s not the case and then to locate okay what was the last command that i executed why did this command fail so i’m gonna remove this exit code and we’re gonna continue with the rest of the course and try to understand how we can get the simple pipeline to run let’s go back to our pipeline configuration and understand what are we doing wrong as you probably remember from the architecture discussion every job runs independently which means here the build laptop job will start the docker container will create this folder and this file will put this text as instructed and then at the end we’ll destroy the container meaning the file that we had created inside this docker container will be gone as well and test laptop will start a completely new container that doesn’t have this file there and for that reason it cannot pass this test file will always tell us that there is no file there because from where could this file come now does this mean that we can only use a single job meaning that we need to move this command here to test inside the build laptop well that would be very inconvenient because then would have a big job that does everything and would kind of lose the overview of what our pipeline steps really are now there is a way to save the output of the job this case the output of the job what we are actually interested in this job is this file including this folder and gitlab has this concept of artifacts now 
an artifact is essentially a job output it’s something that’s coming out of the job that we really want to save it’s not something that we want to throw away for example we may have used any other files or any other commands within the job but we’re only interested in the final output that we have here so in order to tell gitlab hey i really want to keep this file in this folder we need to define the artifacts so the artifacts are an additional configuration an additional keyword that we add to our pipeline as the name tells its artifacts don’t write it artifact because gitlab will not recognize that and as a property of artifacts we’re going to use paths so notice it is indented it’s not a list and then below paths we can add a folder or a file that we want to save now in this case we’re going to tell gitlab save everything that is inside this build folder so now let’s give it a run and see how the pipeline performs this time now if we’re looking at the pipeline execution we now see that both building the laptop and testing the laptop are successful so what exactly has happened behind the scenes did we reuse the same docker container or what has happened there well to understand exactly what has happened and how these jobs now work we have to go inside the logs and we try to understand what exactly did this build job do differently this time and what i want you to notice here is that towards the end if you compare it to the logs of the previous job we also have this indication that something is happening and it will tell you here uploading artifacts for a successful job and will tell you here which artifacts are being uploaded we’ll reference here the build folder and says that two files and directories were found and these are being uploaded to the coordinator not a coordinator to put it very simply essentially the gitlab server so in this case the runner has finished this job has noticed inside the configuration oh i need to do something with these files and we’ll essentially archive these files and we’ll give them back to the gitlab server and tell them hey i’m finished with this job i’m just gonna destroy this docker image you wanted to keep these files so you know here we go just handle these files i’m i’m done with my job here so essentially the runner is not saving these files they’re being saved somewhere else in the storage now when the next job is being executed something pretty similar is happening i’m gonna go here to the test laptop job and what you see here in the beginning there’s also a new indication inside the logs that something different is happening and we’ll see here downloading artifacts and says again downloading artifacts from coordinator which essentially means we now are downloading this build folder inside the new docker container that we have just created we have managed to copy this files from one job to the next one and this is why now this test command is able to find the build folder and the computer.txt file inside it and the job is passing if this job is still failing for some reason it’s always a good idea to take a look at the job that has generated the artifacts so in order to do that again we’re gonna go and visit the pipeline and if we go inside the build laptop job here on the right hand side you should see some information in regards to the job artifacts and in order for these job artifacts to exist they are saved even after the job has terminated and you have the possibility of inspecting them so if you’re not really sure like what is the 
contents of the file you don’t really need to go inside the build pipeline and make some debugging there you could do that but essentially what has been saved here is the final version of the artifacts that are being used so you can go here inside browse you’ll find here the build folder that we have specified and we’ll find here the computer.txt and of course you can download this file and you can take a look at it after you download it on your own computer and see if it has the right content make sure that when you’re testing this you’re actually giving the correct path and not some other path that’s a very important way on how you can actually inspect the outputs of the job at this point i wouldn’t say that we are really done with testing the build output yes this file exists and we have now tested it so this is a very basic test that we have written here but how about checking the contents of the file to see if it contains the main board and the keyboard as we expect again just using a command like cat and displaying this into logs doesn’t really help us automate this we need a tool that when it doesn’t find the respective text in the file it will issue an exit code and tell us that that text is not really present there and for that purpose we’re going to use the grep command so this is the grep command and grep is a cli tool that allows us to search for a specific string in a file so we’re going to specify here for example the string to be main board and i’m going to copy it from above just to make sure i don’t make any mistakes there and we also can specify in which file we’re looking for this so we know that this file already exists so this is an additional test here that we’re adding on top of this and now we actually know that the file exists but now we want to check if this word main board is inside the file and of course we can also duplicate this and write an additional test here gonna check for the keyboard as well we’re checking for the main board and the keyboard now grep is really complex command and can support regular expressions and many other advanced features but i won’t get into those i’m gonna commit these changes and in a few seconds take a look at the pipeline all right so now if we’re taking a look at the pipeline we’ll see the build job is still successful the test job is still successful so if we’re looking inside here what we’ll be able to see we’ll be able to see some log outputs and then grab here is looking for the word main board inside this file and is able to find it will be displayed here and it’s also looking for the word keyboard inside this file and is able to find it what’s important about writing tests is to also ensure that your pipeline will fail if one of these tests doesn’t work and sometimes you may think that the command does what it’s supposed to do but the best way to make sure that you’re really mastering that command to actually introduce an error inside your build job essentially and to check if the test job will fail so let’s go ahead and try that out so here inside our build job what can we do well for example we could try and make a change here for example i’m going to remove the m from main board and i’m gonna simply commit these changes and to see if this job is now failing we’ve been looking now inside the tests we’ll be able to see here that the last command that was executed was grep you will see it in comparison to the previous one there’s no text being outputted below this is the last command that was executed we’re getting an 
What's important about writing tests is to also ensure that your pipeline will fail when one of the tests doesn't pass. Sometimes you may think a command does what it's supposed to do, but the best way to make sure you've really mastered it is to deliberately introduce an error into the build job and check that the test job fails. So let's try that out. Inside the build job, for example, I'll remove the "m" from "main board" and simply commit the change to see if the job now fails. Looking inside the test job, the last command that was executed was grep; compared to the previous run, there's no text output below it. We get exit code 1, which essentially means grep looked inside the file and couldn't find the words "main board". So let's go back and fix our pipeline. This has been a very important test, because if we never check that the pipeline can actually fail at some point, then whatever testing we do inside it is not very useful. I'll add "main board" back to the configuration, and as expected the pipeline works again.

Tests also play a particular role when we're working on the pipeline itself. When building software there are many levels of testing, but essentially tests allow us to make changes to our software or to our build process and ensure that everything still works the same. For example, I heard that when using the redirection operator to put text into a file, this approach doesn't require the file to already exist via the touch command. How about we try that out and rely on the tests to confirm it? I'll simply remove the touch command and commit the configuration. Looking at the pipeline: the test job is successful, and if we really want to check manually once again, we can go to the build laptop job and look at the artifacts. The build folder is there, the computer.txt file is there, so apparently we didn't need the touch command in our configuration. By having the tests, we've gained an additional level of confidence: whatever changes we make, if the pipeline passes, we know that in the end we'll have the computer.txt file with the proper content.
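For context on that redirection behavior, this is a minimal shell sketch; note that redirection creates the file but not the directory, so mkdir is still needed:

```sh
mkdir build
echo "main board" >> build/computer.txt   # '>>' creates the file if it doesn't exist, then appends
echo "keyboard" >> build/computer.txt     # appends a second line to the now-existing file
```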
Let's take another look at our pipeline configuration. What if we need to change the name of the file? So far we've used the name computer.txt, but what if we need to call it, for example, laptop.txt? We generally don't like it when something like a file name that could change is spread across the entire pipeline, because making a change means identifying all the occurrences. This is a very simple example and relatively easy to handle, but quite often, going into a large file and doing something like a replace-all can lead to undesirable errors. So when we have a value that appears multiple times in a configuration, we want to put it into a variable; then, if we need to change that value, we only change it once and it is replaced everywhere.

How do we define a variable? One way is inside the script block. For example, we can define a variable called build_file_name (notice I'm writing it in lowercase, with underscores separating the words), then an equals sign and a value like laptop.txt. To reference the variable, we write a dollar sign followed by the variable name wherever we need the value. This is a local variable, available only inside this script and therefore only inside this job; the other job would need the same definition. As you've noticed, we typically write local variables in lowercase, which helps avoid conflicts with other existing variables.

There's also the possibility of defining a variables block in the job configuration. We write variables, and importantly this is a map, not a list: build_file_name, a colon, and the value laptop.txt. This is almost the same as defining it inside the script, but we move it out of the script and let GitLab handle it for us. Of course, if we do it this way, we'd have to duplicate the block in the other job as well, so there is some duplication in the configuration.

We can also define a global variable, available to all jobs. To define something globally, we take the variables block and, adjusting the indentation, move it outside the job to the root of the document; everything at that level applies globally. We can then remove it from the test job, because it's available there anyway. This is essentially like defining an environment variable that is available to the entire system. While there's no hard rule, we typically write environment variables in all caps, still using underscores to separate the words. Since we're inside an IDE, I'll select the text, press F1, and use the transform-to-uppercase command, so it becomes BUILD_FILE_NAME. Then, wherever I want to use it: a dollar sign and the name. Everywhere we had computer.txt, we now reference this environment variable, and whenever we need to change the value, we change it in a single place. You can easily imagine that adding other variables to our pipelines will make managing these details much easier. One thing to keep in mind: depending on which characters you include in the variable value, you may need to put it between quotes; the simple text we have here won't cause any conflicts with the YAML syntax. Let's commit these changes and see if the pipeline still works as it should. The pipeline runs successfully, and if we go inside the build job and take a look at the artifacts, we'll indeed see that we're now using the file name laptop.txt and no longer computer.txt.
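Putting it all together, the pipeline with a global variable might look roughly like this:

```yaml
stages:
  - build
  - test

variables:
  BUILD_FILE_NAME: laptop.txt   # available in every job of this pipeline

build laptop:
  image: alpine
  stage: build
  script:
    - mkdir build
    - echo "main board" >> build/$BUILD_FILE_NAME
    - echo "keyboard" >> build/$BUILD_FILE_NAME
  artifacts:
    paths:
      - build

test laptop:
  image: alpine
  stage: test
  script:
    - test -f build/$BUILD_FILE_NAME
    - grep "main board" build/$BUILD_FILE_NAME
    - grep "keyboard" build/$BUILD_FILE_NAME
```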
I have mentioned DevOps quite a few times, and by now you've heard of DevOps as well. So what is DevOps? Let me first tell you what DevOps is not. DevOps is not a standard or a specification; different organizations may have a different understanding of it. DevOps is not a tool or a particular piece of software, nor is it something you automatically do by using a particular tool or set of tools. DevOps is a cultural thing; it represents a change in mindset.

Consider the following example. A customer wants a new feature. A business person, let's call them a project manager, tries to understand what the customer wants, writes some specifications, and hands them over to the developers. The developers build the feature and pass it on to the testers, who test it. Once it's ready, the project manager reviews the work and, if all looks good, asks the developers to pass the software package to the sysadmins to deploy it. As you can see, there's a lot of passing stuff around, and in the end, if something goes wrong (and things often go wrong in such situations), everyone is unhappy and everyone else is to blame. Why does this happen? Because every group has a different perspective on the work; there is no real collaboration and understanding between these groups.

Let's zoom in on the relationship between the developers and the sysadmins. The developers are responsible for building software, ensuring that all the cool features the customers want make it into the product. The IT operations team is responsible for building and maintaining the IT infrastructure, ensuring the IT systems run smoothly, securely, and with as little downtime as possible. Do these groups have something in common? Yes, of course: the software, the product. The problem is that the IT operations team knows very little about the software they need to operate, and the developers know very little about the infrastructure where the software runs. DevOps is a set of practices that tries to address this problem, but to say that DevOps is just a combination of development and operations would be an understatement. Everyone mentioned before works on the software, just in a different capacity, and since the final outcome impacts everyone, it makes sense for all these groups to collaborate.

The cultural shift that DevOps brings is tightly connected to the agile movement. In an ever more complex environment, where business conditions and requirements change all the time and where we juggle tons of tools and technologies every day, the best culture is not one of blaming and finger-pointing but one of experimentation and learning from past mistakes. We want everyone to collaborate instead of working in silos and stages. Instead of finger-pointing, everyone takes responsibility for the final outcome. If the final product works and the customers or users of the product are happy, everyone wins: the customers, the project managers, the developers, the testers, the sysadmins, and anyone else I did not mention.

However, DevOps is more than just culture. To succeed, organizations adopting DevOps also focus on automating their tasks. Manual, repetitive work is a productivity killer, and this is what we are going to address in this course: automatically building and deploying software, which falls under a practice called CI/CD. We want to automate as much as possible, to save time and give us the chance to put that time to good use instead of manually repeating the same tasks over and over again. But to automate things, we need to get good at using the shell, working with CLI tools, reading documentation, and writing scripts.

Quite often you may see DevOps represented by the well-known infinite-loop image. While it does not give a complete picture of what DevOps really is, it does show the series of steps a typical software product goes through, from planning all the way to operating and monitoring. The most important thing I want you to notice in this representation is that the process never stops; it goes on in an endless loop, meaning we keep going through these steps with each iteration or new version of the software. What is not represented there is the feedback that flows back into the product. DevOps goes hand in hand with the agile movement.
If agile and Scrum are new to you, make sure to add them to your to-do list. Nowadays many organizations go through an agile transformation and value individuals who know what agile and Scrum are, regardless of their role. I've added some resources you may want to look into in the course notes. If you have some free time while commuting or doing things around the house, I highly recommend listening to The Phoenix Project as an audiobook. It is an accurate description of what companies that are not adopting DevOps go through on a day-to-day basis, and it realistically portrays such a transition. It is by no means a technical book, and I'm sure it will be a fun listen. So, DevOps is a set of practices that helps us build successful products; to do that, we need a shift in thinking and new tools that support automation. However, I must warn you: you can use tools that have DevOps written all over them and still not do DevOps. DevOps is so much more than adopting a particular tool. With that being said, let's continue diving into GitLab CI.

In this unit we will start working on a simple project. We want to automate the manual steps required for integrating the changes of multiple developers, and create a pipeline that will build and test the software we are creating; in other words, we will do continuous integration. Continuous integration is a practice, and the first step when doing DevOps. Usually we're not the only ones working on a project, and when we do continuous integration, we integrate our code with the code other developers created. It means that every time we make changes to the code, that code is tested and integrated with the work someone else did. It's called continuous integration because we integrate work continuously, as it happens; we don't wait. We don't want to integrate work once per week or once per month, as it can already be too late or too costly to resolve some issues: the longer we wait, the higher the chances we run into integration issues. In this unit we will use GitLab to verify any changes and integrate them into the project. And I'll be honest with you: as we build more advanced pipelines, you will most likely encounter some issues. If you haven't done it yet, go right now to the video description and open the course notes; there you will find important resources and troubleshooting tips. Finally, a quick recap: when multiple developers work against the same code repository, CI is a pipeline that allows us to add and integrate our changes, even multiple times per day, and what comes out is a new version of the product. If you're still unsure about continuous integration at this point, don't worry; we'll implement CI in our development process in the upcoming lessons.

For the rest of the course we'll be using this project: a simple website built with React, a JavaScript technology developed by Facebook. We don't want to get too deep into the technical details, because they don't matter so much at this point. The first step to be able to make changes to this repository is to make a copy of it. For example, if you try to open the Web IDE in this project, you'll get the option to fork the project (by the way, you'll find a link to this project in the course notes, which are linked in the video description). Click Fork, and a copy of the project is created under your account. Now that we've made a copy, we can open the Web IDE and start making changes, and in particular create the pipeline.

Let me give you an overview of the tasks we're trying to automate. In this project we have a couple of files; one of them is package.json, which documents the requirements this project has. To actually run the project, we first need to install those requirements. Locally, I already have a copy of the project, and the command to install the requirements is yarn install. Once all the requirements are installed, the next step is to create a build, using the command yarn build. During this process, a build folder is created, containing multiple files that are required for the website. To give you an idea of how the website looks and what we did here, I'll run the command serve -s and specify the build folder; essentially we've started an HTTP server that serves the files available there.
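So the local workflow, in short, is the following. Note that serve is not part of the project's dependencies; if you don't have it, it can typically be installed globally, for example with npm install -g serve:

```sh
yarn install    # install the dependencies listed in package.json
yarn build      # create the optimized production files in ./build
serve -s build  # serve the build folder over HTTP (port 3000 in my case)
```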
so i’m going to open this address in a new tab and this is how the website looks like so essentially what we’re trying to do in this section is to automate these steps so we want to install the dependencies we want to create a build we want to test the bill to see if the website is working and i’ve shown you these tools because it is always a good idea to be familiar with the cli tools that we’ll be using in gitlab ci now in gitlab we try to automate any manual steps but before we do that we must know and understand these steps we cannot jump into automation before we understand what the commands that we want to do are actually doing now i’m not really referring in particular to the commands that i’ve shown you here because they are specific to this project you may be using python or java or anything else so you don’t need to be familiar with these tools in particular i will explain to you what they do and how they work however what is important to understand is the concepts the concepts remain the same and this is what we are actually focusing on in this course we are focusing on understanding the concepts around automation so let’s begin creating the ci pipeline for this project so i’m going to go ahead and create here a new file and of course the definition file for the pipeline will be dot gitlab.ci dot yaml the first job that we want to add here is build website and what we are trying to do here where we’re trying to build this website and why do we need to build the website well essentially or most of their projects do have a build step in this case we’re essentially creating some production files production ready files which are smaller optimized for production from some source files so we have here in the source files you will see here an app.js and any other file so essentially the build process will take all these files and will make them smaller and will put them together in a way other programming languages may need to have a compilation step or any other steps so typically something happens in the build process where we actually are putting our project together now of course we don’t want to do that manually from our computer we want to let gitlab do this for us so let’s go ahead and write here the script for this the first step is to essentially run yarn and yarn is a tool that is helping us build this project this is specific for javascript projects essentially node.js projects and yarn is a tool that can be used for getting dependencies they can also be used for building the software so the command that we are running here is build locally as you remember the first thing that i did was to do a yarn install to install dependencies and this is something that needs to happen before the build so every time when we are building this website we need to get the dependencies to make sure that we have all the dependencies that we need and to ensure that all of them are up to date and because they don’t remain anywhere in the container we need to do that all the time and normally locally would do that only when we need to do that so for example when we know that we need a newer dependency of a software package but gitlab doesn’t have this information so we kind of like need to run this all the time you also need to specify your docker image so let’s try for example alpine which we have used before so essentially what i’m trying to do is to replicate the commands that i’ve executed on my computer installing dependencies and building the project so let’s commit these changes and see 
I'll commit to the main branch and click Commit. Looking at what the job is doing, we see that we get an exit code, and remember: any exit code that is not zero leads to a job failure. It says "Job failed". Why did this job fail? We have to look at the command we tried to execute, and we find something saying "yarn: not found". Essentially, this means the Docker image we used does not have yarn. So how do we install yarn, or how is this normally done? The thing is, we don't have to use the alpine image you've seen here. For most technologies, and this includes Node.js (which is actually what we're using here and what I already have installed locally, which is why it worked on my machine), there are official Docker images we can use. The central repository for such images is Docker Hub, a public Docker registry. For example, if I type node there, I can find the official image for Node.js, so instead of alpine I can simply use node.

Let me go back to the project and write node. Now, the thing is, when we write just alpine or node, what actually happens is that we always get the latest version of that Docker image. Sometimes that works, but sometimes the latest version contains breaking changes, which may lead to things not working anymore in our pipeline: if one day we get one version and the next day something else, without us making any changes, things may break. For that reason, it's generally not a good idea to just use node, or just alpine as we did before; it's better to specify a version, to pin down which version we need, written as a tag. How do we know which version we need? For Node.js, head to nodejs.org, where you'll see two versions currently available for download. We want the LTS version; the latest LTS version I'm seeing right now is 16.13.2, and what's important here is the major version, which is 16. When you're watching this, come to this page and look at the number; it will most likely have increased over time, so take the latest LTS version you see there. Back in our pipeline: we have node as the base image, and we specify a tag by writing a colon followed by 16. Most likely this version is already available, but you can always double-check by going to the image's Tags page on Docker Hub; these major-version tags are typically available, and there should be no issues downloading them. Let's commit the changes (again pushing to the main branch), and while this executes, you'll see which image is being downloaded: node, with the tag for version 16.
This job now takes much, much longer to run; the duration in this case was 1 minute and 35 seconds. That's because we have to download a relatively large Docker image, this node image, and then we install the dependencies. You'll see that the dependencies take 44 seconds to install: figuring out which ones are required and downloading them from the internet takes a while. Then the actual build also takes a few seconds to complete. Fortunately, everything succeeds in the end, and that's the most important thing at this point. Still, we don't want to waste a lot of time executing these builds, and I know for a fact that this node image is a few hundred megabytes in size, because it contains a lot of tools and dependencies we may not need. For these larger images, it's always a good idea to open the specific image on Docker Hub, click on the Tags tab, and take a look at the sizes, because some of them are quite big. If we search for version 16, the closest tag to what we're using is about 332 megabytes, so every time we select this image, we have to download 300-something megabytes: that's a lot of downloading and a lot of wasted time just to start the image. What I typically do is search the tags for alpine (sometimes slim or other variants can also be a good idea), and what I'm interested in is something like 16-alpine. Looking at the size: 38 megabytes. So we simply take this tag and, in our pipeline, replace 16 with 16-alpine. I'll commit these changes and take another look at the pipeline to see how long the build takes now. The job still needed quite a bit of time, 1 minute and 26 seconds, and you may see this duration vary, but generally it is a very good practice to use images that are as small as possible, because it saves time; this job may even go below one minute, depending on how fast a runner picks it up and starts the image. Just to recap: before, we tried the alpine image, and it didn't work because it doesn't have Node.js installed. What we're using now is essentially the same Alpine Linux image, but with Node.js installed, so it has the dependency we need, and Node.js includes yarn. That's why yarn didn't work before and now works just fine and builds the project.
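For reference, the build job with the pinned, slim tag looks roughly like this:

```yaml
build website:
  image: node:16-alpine   # Alpine Linux + Node.js 16 (which ships with yarn), ~38 MB
  script:
    - yarn install
    - yarn build
```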
The most important thing you can do when learning something new is practice, and I want to give you the opportunity in this course to practice along: not just following what I'm doing, but also doing something on your own. For the next assignment, you already know everything necessary. We want to create two new additional jobs in this pipeline, and it's your job now to write them. What are these jobs about? The first job should test the website: the website currently lives inside the build folder, and if you go inside it you'll see a list of files. Your task is to ensure that the index.html file is available inside the build folder. The second job I want you to create is about running unit tests: the command we use for that is yarn test, and all you need to do is create a job that runs this command, then take a look at the logs and verify the tests have indeed been executed. The upcoming lesson contains the solution, but please, it's super important that you pause this video and try this on your own. Try as much as you can, because this is the best way to learn and to ensure that what I'm showing you in this course is something you'll be able to use in your own projects.

I hope you tried to solve the assignment on your own; either way, this is how I would approach the problem. I'm inside the editor for the pipeline, and let's begin with the skeleton. We have two new jobs: test website, where we essentially test the output of the build website job, and unit tests. First of all, we have to think about stages. The test website job definitely needs to happen after the build; the unit tests don't necessarily need to happen after the build, but we'll put them in a later stage as well. So let's define the stages: we'll have build, and another stage called test, and we need to assign each job to a stage. build website is assigned to the build stage, of course, and test website is assigned to the test stage, and the same goes for the unit tests. To test the website, let's write the script: we're testing whether we have an index.html file, and as you probably remember, the command is test -f, testing for the existence of a file; it needs to be inside the build folder, and the name of the file is index.html. What we haven't done so far is declare artifacts in the build website job, so as it stands this command would fail. We have to define the artifacts and their paths, and the only path we're interested in is build; with that in place, the test will be able to check for this file. The next thing to think about is which image we need for test website. Theoretically we could use node:16, but the test command doesn't require anything like that; test is a pretty general command, so we don't have to worry about specifying a version or anything like that, and just going with alpine is fine. As for the unit tests, we essentially need the node image, because we'll be using yarn.
So I'll simply copy the image we used in build website, and the script will also be similar: we still need to install the dependencies, so yarn install is necessary before running the unit tests, and the command we want to run is yarn test. Let's double-check that we have everything in place: we've defined two stages, build and test; we've assigned build website to the build stage; and we've assigned test website and unit tests both to the test stage, which means these two jobs will run in parallel. In one we use the test command to verify the file exists; in the other we install the dependencies and run the tests. I'll commit these changes, and we'll inspect the pipeline together. After a minute or two the pipeline succeeds, and you'll notice the two stages, build and test: we build the website, and then we test what we built. The unit tests aren't strictly dependent on the build itself, but we've grouped them together in the test stage. Looking at test website, we see the command test -f, checking whether the file actually exists as part of the job artifacts. And if you look inside build website, you can inspect the artifacts: the build folder contains multiple files, not only index.html but also some images and additional files we don't care about so much at this point; the important thing is that they're available there. As for the unit tests: they are part of the project, and generally, when writing code, we also write unit tests to verify that what we've built works properly. The logs show the tests were triggered by yarn test; there's only one file here, containing only one test, and all the tests were executed. Because the tests pass, the job succeeds; if any of these tests didn't pass, the job would fail, and the entire pipeline would fail as well.
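The full pipeline at this point might look roughly like this:

```yaml
stages:
  - build
  - test

build website:
  image: node:16-alpine
  stage: build
  script:
    - yarn install
    - yarn build
  artifacts:
    paths:
      - build   # make the build output available to later jobs

test website:
  image: alpine   # 'test' is a general command, no Node.js required
  stage: test
  script:
    - test -f build/index.html

unit tests:
  image: node:16-alpine
  stage: test
  script:
    - yarn install
    - yarn test
```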
The first tool we need for integrating work is already in place: we are using Git for versioning our code, and we use GitLab as a central Git repository. We take Git for granted nowadays, but it solves a critical issue and is one of the first DevOps tools you need to know. While this course does not go into Git, I highly recommend checking the course notes for a comprehensive Git tutorial. In our current model, every developer pushes changes to the main branch; whatever version of the code was pushed last to main is the latest version, and anyone wanting to add new changes has to make them compatible with the main branch. However, this model has a serious flaw: nothing is safeguarding the main branch. What if the latest change introduced an issue and we can't build the project anymore, or the tests are failing? This is a massive problem, as nobody on the team can continue working and we can't deliver new versions of the software. A broken main branch is like a production halt in a factory; it's not good for business. We must ensure that the main branch never contains broken code: it should always work and allow us to deliver new versions of the software anytime we need to.

So how do we solve this? The idea is simple: we just don't push untested work to the main branch. Since we can't trust that developers remember to run all tests locally before pushing a change, we want to take advantage of automation and have GitLab automatically run the tests before any changes are added to main. The idea is to work with other branches, which are later integrated into the main branch; this way, every developer can work independently of the others. There are various Git workflows, but we'll use one that is simple to understand: the Git feature branch workflow. For each new feature, bug, idea, experiment, or change we want to make, we create a new branch, push our changes there, let the pipeline run on the branch, and if everything looks okay, we integrate the changes into the main branch. In other words, we simulate running the main branch before we actually touch the main branch. If our branch pipeline fails, no worries: all other developers are unaffected, and we can focus on fixing it. Or, if we don't like how things turned out, we can just abandon the branch and delete it; no hard feelings there either. As part of working with branches, we create a merge request, which allows other developers to review our code before it gets merged. A merge request is essentially one developer asking for their changes to be added to the main branch; the changes are reviewed, and if nobody has objections, we can merge them. So this is the plan for the upcoming lessons: start working with Git branches and merge requests. Let's get to work.

How can we create merge requests in GitLab? First of all, to keep the chances of breaking the main branch as small as possible, we need to tweak a few settings. Go to Settings > General, and in the middle you should see Merge requests; expand that. What I like to use when working with merge requests is the fast-forward merge: essentially, no merge commits are created, and the history generally stays much cleaner. There's also the possibility of squashing commits directly from GitLab, and you can encourage it here. Squashing is when you've pushed multiple commits to a branch and, instead of bringing all of them into the main branch, you squash them together into a single commit; again, this makes the history much easier to read. Further down, under merge checks, we want to ensure that pipelines must succeed before anything is merged; this is a super important setting. Scroll to the bottom and click Save changes. Additionally, again from Settings, go to Repository, then expand Protected branches. What we want here is to protect the main branch: we no longer want anyone to commit changes directly to main, so nobody will be allowed to push to it directly. To do that, under "Allowed to push", instead of a role, select "No one". Now nothing reaches the protected branch except through a merge request. These are the initial settings we need. Now let's try to make some changes: open the Web IDE and open the pipeline file.
Let's say I want to add a new stage where I do some checks; for example, there is a linter I can use. A linter is a static code analysis tool used to identify potential programming errors, stylistic issues, and sometimes, you know, questionable code constructs. Since it is static, it does not actually run the application; it just looks at the source code. Most projects tend to have such a linter, and just for the sake of completeness, I want to add it here as well. So I'll write a new job named linter; the image is still node, and the reason is that the command we run in the script is yarn lint, and of course we also need to install all the dependencies first. Additionally, we have the possibility of assigning this job to a stage. By default, GitLab comes with some predefined stages: .pre, build, test, deploy, and .post. These are all already defined; to be honest, listing stages ourselves mainly makes it clear which stages we're defining and using, but build and test are essentially a redefinition of something that already exists. For this linter, I'll use the stage .pre, since the linter has absolutely no dependency on the build itself. The same goes for the unit tests: they're not dependent on the build either, so I'll move them to the same stage, just to make this clear. Notice the dot in .pre; because it's a predefined stage, we don't need to declare it in the stages list (we could write it there, but it's not really needed).
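The two pre-stage jobs would then look roughly like this:

```yaml
linter:
  image: node:16-alpine
  stage: .pre   # predefined stage, runs before all user-defined stages
  script:
    - yarn install
    - yarn lint

unit tests:
  image: node:16-alpine
  stage: .pre   # no dependency on the build output, so it can run early
  script:
    - yarn install
    - yarn test
```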
Now, back to committing. The editor I opened is on the main branch (you can see main selected), and when I try to commit, I'm no longer allowed to commit to the main branch: the settings we made ensure that direct changes to main are disabled. So we have to create a new branch. When creating branches, we often use naming conventions; it's totally up to you or your organization what to use. Quite often you'll have something like feature/ followed by the name of the feature, and sometimes a ticket reference, for example feature/1234-add-linter. Notice I'm adding no spaces or anything like that; how you use forward slashes and hyphens as separators is up to you. You'll also see the possibility of starting a new merge request. So these changes are not yet committed: I'll create a new branch named feature/ plus some name and click Commit. This opens an additional window: the branch has been created, and now we're also opening a new merge request. In the merge request there are a few things to fill out. The title, for example, is not very suggestive, so I'll write something like "Add linter to the project". You can also provide a description; this is useful for the people looking at the merge request, so they know why it's important, what the feature brings, or, if it's a bug fix, which issues it fixes. There are also additional labels and options you can set here; I won't go over them, as they're relatively self-explanatory. I'll click "Create merge request", and now a new merge request has been created, and additionally a pipeline is running against this branch.

What's happening here is that the changes we made (you can review them: we added a new stage, added the linter, and made some changes to the unit tests) are going through all the stages: the .pre stage, the build stage, and the test stage. We're essentially simulating the main branch pipeline to see if it fails. If it fails, we have the opportunity to fix things; if it doesn't, we have the option of merging these changes. The next important step in the life of a merge request is the review process, especially the code review, where somebody else from the team looks at our changes and gives us feedback. In case you don't know where your merge request is: on the left-hand side there's a panel that says Merge requests and indicates how many are open at this time; we have only one, with the title about adding the linter. There you'll find the most important information: this is a request to merge this feature branch into the main branch; if you need to make changes, you can open the branch in the IDE and continue working on it; and you have the pipeline information, so we know the pipeline has passed. Someone else looking at this can see which commits were made and get an overview of the changes; there are different views for comparing files, and reviewers can leave comments. For example, I could click on a line and ask "Why did you use the .pre stage?" and add that as a comment.

So where does this merge request go from here? Typically, when someone has reviewed it, they can approve it, so you can see who has reviewed the changes; you may have internal rules, such as requiring two people to review any changes before they're merged. Comments can appear (for example, the question I added), so maybe some discussion needs to happen; feedback is gathered from the different people involved, and it may need to be integrated. As I mentioned, you can still make changes: the merge request always reflects the latest state of the branch, so we can push more commits. It's also possible that, for some reason, the changes are no longer needed, are wrong, or take an approach that doesn't make sense; in that case there's the possibility of closing the merge request (you can view closed merge requests later), and no changes get merged. But typically, after the review and after the feedback has been integrated, we use the Merge button. What happens is that we merge this branch into the main branch, and there are two options enabled by default: the source branch we created will be deleted, which makes sense because we don't need it anymore, and if we have multiple commits, we can also squash them. In this case we have only one commit; we could modify the commit message if we wanted, but that's not needed here, so I'll simply click the Merge button.
This Merge button is available only if the pipeline succeeded, and ours did, which is why we can see it. GitLab now merges these changes into the main branch, and again a new pipeline is generated: the pipeline for the main branch. You'll see exactly the changes we made, with the .pre stage, the build, and the test, exactly as they ran in the merge request, now running again against main. This is needed to ensure that everything is indeed working fine and there are no integration or other issues. It's good practice; it may seem like a duplicate, but it's actually important to confirm that the main pipeline works properly, because that's what we care about most. That's also the reason we reviewed the merge request: to ensure that whatever we change on the main branch is as good as it gets and that the chances of breaking something are drastically reduced.

At the beginning of this section, I showed you the steps needed to build this project locally; I also took the build folder and opened the website, and you saw what it looks like. How can we replicate that in GitLab? Essentially, we want to make sure we really have a website that can be started by a server, and that, for example, the text "Learn GitLab CI" shows up. How can we do that inside the pipeline? Currently we're only checking that we have a file called index.html, and that doesn't tell us the entire story. In this lecture we'll look at how to do what's sometimes called an integration test: really testing that the final product works. I'll open the Web IDE and focus on the test website job. The current test isn't that helpful; we want to do something else. You've seen me run the command serve -s build, which takes the build folder and starts a server so we can take a look at what's going on. So I'll remove the existing check, since we don't need it anymore, and start the server here instead. And because I want this to run as fast as possible, so we're not wasting a lot of time, I'm going to temporarily disable some jobs: the linter, which isn't really needed at the moment, and also the unit tests. To disable a job, you simply add a dot in front of the job name. Let's commit these changes; again we have to create a new branch, which I'll call feature/integration-test, and start a new merge request called "Add integration tests". Clicking "Create merge request" and looking at the pipeline, the build succeeds, but there seems to be an issue with testing the website. Let's look at the logs and understand what's going on: we get an exit code, the last command executed was serve, and the log says "serve: not found". Once again we're trying to run a command that worked locally for me, but the Docker image we're using here doesn't have it: we're using alpine, and alpine doesn't include this command.
So we need to find a way to install this command as well. Let's go back to the editor: from Merge requests, click on the merge request, and you'll find "Open in Web IDE". What you see first is a list of changes (this is the review view, showing what we changed), but to modify the code, we select the pipeline file again and edit it. Now, what do we need to do to get this job to run? First of all, we'll use the same node image, because serve is actually a dependency that runs on Node.js, so we can use yarn to add it. I'll write yarn add serve, and actually we'll install it as a global dependency, so: yarn global add serve. This ensures we now have serve, and we can use yarn because we now have the node image. This time we're already in a merge request, so I'll commit directly to the existing branch, and the pipeline executes again.

Be sure to watch what I explain next, because we have introduced an issue in this pipeline, and it's a big one; I'll explain in a second what it is. I'll open the pipeline, go directly to the test job, and wait for it to start. Notice what happens with this job: we start the Docker container that contains node, we add the serve dependency, and then we execute serve. Which folder is it serving? The build folder. Then the log says "INFO: Accepting connections at http://localhost:3000". So what's going on? We've started an HTTP server that is serving these files inside this GitLab job. And when does a server end? It never ends; that's the purpose of a server, it serves these files indefinitely. Of course, this approach is not good, because we'd be waiting here forever. Well, not actually forever: jobs do have a specific timeout, which you can see here is one hour, so if we don't stop this job, it will run for one hour. This is an important concept: when you start a server inside a job, that job will never end on its own; it will run into the timeout. Be very careful when doing things like this. I'm going to manually stop the job using the Cancel button; we can't wait forever.

Now, there is a way to start a server and still check the text, but we need a few more changes. Going back to the pipeline file: we've managed to start a server, which would run forever, but how do we test something? To test, we need to execute a follow-up command, and to fetch the page we essentially need an HTTP client: another tool that will go to this address (localhost as the host name, port 3000, protocol HTTP). We don't have a browser we can open here, so we need something as close as possible to a browser. That tool is curl.
curl is a tool that can go to this address and download the website, and then we can do something with the output. I'll use a pipe, which sends the output of one tool to the next, and as you remember from working with files, we have the possibility of using grep, which searches for a specific string. What string are we looking for? I think the string is "Learn GitLab CI"; I'll check against the website to make sure I have the exact text, because this is what we're trying to find. So: we start a server, then curl fetches the website, and grep searches the result for this string. But we still haven't solved the problem of the server running forever. For that we'll use a small trick: the ampersand (&), which starts the process in the background. The server still starts, but it no longer prevents the job from stopping: once the curl command has executed, the job can end, because the commands run one right after the other. However, curl will not wait for the server to actually start; the server needs a few seconds, and we don't know exactly when it will be ready. So we'll use another command, sleep, which takes a parameter in seconds; we can decide how long to wait, and just to be sure, I'll use 10 seconds. Additionally, curl is another command we don't have, and this one is a system dependency of this Alpine-based image, so it has nothing to do with yarn or npm the way serve does. For this exception, we have to use the package manager from Alpine, apk: apk add curl. That's how we add curl as a dependency. Let's give it another try and see if this works.

Looking at the logs, this is still failing; maybe this is not our day. But let's go through the process again and try to understand what's going on; generally, we have to get used to errors, and the more we understand, the better we get at solving this kind of problem. We added serve, we added curl, and we started the server (we can still see "Accepting connections at..."); we waited 10 seconds; curl downloaded the page and piped it to grep; and we get exit code 1. Since we're not seeing any errors from curl, which means curl did download the page, we have to wonder what's going on with grep: why isn't this text in the response? We could debug this a bit more, but essentially, when curl downloads a website, it doesn't download what you see rendered in the browser right now. If you right-click a page and choose "View page source" on any website, that is what curl actually downloads: the index.html file. That file then loads other things in the background, scripts among them, so the text we're looking for doesn't come from the page curl fetches. curl has no ability to render JavaScript, images, and so on; it's a very basic tool. What we do have in the raw index.html, for example, is "React App" in the title. So how about we change the assertion to look for the title instead?
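Putting everything together, the integration-test job we're building toward looks roughly like this (node:16-alpine is Alpine-based, so apk is available):

```yaml
test website:
  image: node:16-alpine
  stage: test
  script:
    - yarn global add serve   # serve is a Node.js tool, installed via yarn
    - apk add curl            # curl is a system package, installed via apk
    - serve -s build &        # '&' starts the server in the background
    - sleep 10                # give the server a few seconds to start
    - curl http://localhost:3000 | grep "React App"   # exit code 1 if the text is missing
```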
Let's find where "React App" appears in the application and change our assertion: instead of "Learn GitLab CI", we'll simply look for "React App", because we know it's available in the raw HTML. From the project, I open the Web IDE (if I'm not on the right branch, I can simply switch branches), open the file I want to edit, and replace the string. We'll also re-enable the rest of the jobs, the unit tests and the linter, which had been disabled. Then we put all these changes together and commit them to the branch; we're not starting a new merge request, because we already have one, so I'll make sure that option is disabled and commit again. And finally, the tests are working. Let's inspect the job logs: everything we're interested in is there, and in particular the last part, which used to fail, is now able to locate the string; you can see the entire response coming back, with "React App" in it. curl and grep now work together, and the job succeeds. It's been a bit of work, but I'm happy we have this test: it ensures our pipeline really works a bit better and makes us more confident in what we're building. In the merge request, you'll see we have four commits, and the changes show that we reverted all the jobs we had disabled and only adapted the test website job; from that point of view, everything is good. We'll squash the commits, and we can also adjust the commit message: "Add integration tests" looks good. I'll click Merge, and these changes are merged into the main branch.

So currently, our continuous integration pipeline is divided into three separate stages. The way I've structured this is just an example; there is no hard rule, and different programming languages and technologies may require a different order. You're smart people, and I'm sure you'll be able to apply these ideas to whatever you're doing. However, I want to mention two principles or guidelines you should consider. One of the most important aspects is failing fast: we want the most common reasons a pipeline would fail to be detected early, for example linting and unit tests. Quite often, developers forget to check their linting, make a mistake, or forget to run the unit tests, so if we notice that happening a lot, it really makes sense to put these checks in the first stage of the pipeline. They typically don't need much time, and the faster they fail, the better: we get fast feedback and use fewer resources. Also, when running jobs in parallel, grouping jobs of similar duration is a relatively good approach. You have to be careful here: say the linter finishes in 15 seconds and the unit tests need five minutes. If the linter fails after just five seconds, the unit tests will still run for the full five minutes; there's no stopping them. So it makes sense to run jobs of similar size in parallel.
The other aspect you need to consider is the dependencies between jobs. What do I mean by dependencies? For example, we cannot test the website until we have actually built it. The linter and the unit tests don't depend on the build output, so we could run them before or after the build; there is no dependency there. The "test website" job, however, depends on the job artifacts, because it uses the build output, so the build job always has to come first. If jobs have dependencies between them, they need to be in different stages. And I have to say there is no replacement for experimentation: if you're unsure whether something works, just give it a try and see how far you get and why the pipeline fails.

As you've noticed, our pipeline does require a bit of time, and seriously optimizing pipelines is a lot of work. What I want to show you is a very simple way to save some time throughout this course, so we don't wait so long for each stage. I'll open the Web IDE again and restructure these jobs. Notice that "build website", the linter, and the unit tests are all pretty similar: they use the same image, they all need to install the yarn dependencies, and they each run a single command afterwards. So one idea is to take the linter command, yarn lint, and put it right after we install the dependencies in the build job; then we don't need the linter job anymore. The same goes for the unit tests: move them right after yarn lint and delete that job and its stage as well. That reduces the pipeline to just two jobs. The "test website" job does use the same image, but it has no yarn dependencies; it just installs serve and curl, so I don't want to merge it in as well. We could theoretically put everything into a single job, but I'm restructuring purely for performance, so we'll stick with two stages: build and test. Even though the build stage now also does a bit of testing, that's an acceptable trade-off to save time. As I said, it's totally up to you how you structure your pipeline; I just wanted to show you how the expanded version looks, but for the rest of the course we don't want to spend so much time waiting for these stages to complete. I'm going to create a merge request and merge these changes into the main branch.
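For reference, the consolidated pipeline might look roughly like this. This is a minimal sketch: the image tag, the serve port, and the script names are assumptions based on a typical React setup, so adapt them to your project:

```yaml
stages:
  - build
  - test

build website:
  image: node:16-alpine        # assumed tag; any recent Node image works
  stage: build
  script:
    - yarn install
    - yarn lint                # linter folded into the build job
    - yarn test                # unit tests folded in as well
    - yarn build
  artifacts:
    paths:
      - build

test website:
  image: node:16-alpine
  stage: test
  script:
    - yarn global add serve
    - apk add curl             # node:16-alpine is Alpine-based, so apk is available
    - serve -s build &         # serve the build folder in the background
    - sleep 10                 # give the server a moment to start
    - curl http://localhost:3000 | grep "React App"   # port depends on your serve version
```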
If you're binge-watching this course, make sure to take a break after a few lessons. There is a lot to take in, but you need to take care of your body as well: get up from time to time, drink some water, eat, or go outside and relax your mind. Trust me, taking a break is much more productive than staying all day in front of your computer. If your body does not feel well, it impacts your productivity and energy levels. I'm going to take a break as well, and I'll see you in a bit.

[Music]

In this unit we'll learn about deployments: we'll take our website project and deploy it to the AWS cloud. Along the way we'll learn about other DevOps practices such as continuous delivery and continuous deployment. By the end of this section we'll have a complete CI/CD pipeline that takes our website, builds it, tests it, and lets it float in the cloud. Just a quick reminder in case you get stuck or something does not work: check the video description for the course notes, which include troubleshooting tips. This sounds exciting, so let's begin.

Amazon Web Services, or simply AWS, is a cloud platform provider offering over 200 products and services available in data centers all over the world. AWS offers a pay-as-you-go model for renting cloud infrastructure, mostly for computation or data storage: instead of buying physical hardware and managing it in a data center, you can use a cloud provider like AWS. Such services include virtual servers, managed databases, file storage, content delivery, and many others. AWS started in the early 2000s and its adoption has continued to increase over time. Maybe you've heard the story of when The New York Times wanted to make over one million old articles available to the public: a software architect there used AWS to convert the original article scans into PDFs in less than 24 hours, at a fraction of the cost a more traditional approach would have required, and that was back in 2007. Today many organizations can't even imagine their IT infrastructure without some cloud components. While this course focuses on AWS, the principles presented here apply to any other provider.

If you don't already have an AWS account, don't worry: it's quite easy to create one. Enter your personal details and click on continue. Since we're using AWS for learning and experimenting, I recommend choosing a personal account. AWS is a paid service, and even though the free tier has some generous limits and is ideal for learning and experimenting, you are still required to provide a credit or debit card for the eventuality that you go over the free limits. I'll go with the free basic support plan, which is fine for individuals. Now that we have an AWS account that is verified and has a payment method, we need to sign in to the console, because from the AWS console we can access all the services AWS offers. If you're seeing the AWS Management Console, it means you have set up everything correctly and can continue with the rest of the course and start using AWS right away.

Let's do a bit of orientation. This is the main page from where you navigate to all AWS services. AWS services are distributed across multiple data centers, and you can select the region you'd like to use at the top right of the menu. I'll go with us-east-1 (North Virginia); you can use whichever region you like, just remember the one you selected, as this will be relevant later on. Typically the data centers in the US have a lower cost than others spread around the globe. You also have a list of all available services here, and you can search for services in the search bar. Finally, I highly recommend that you go to your account profile and enable multi-factor authentication; this will significantly raise the security of your account. That's about it; now you can start using AWS.

The first service we want to interact with is AWS S3, which stands for Simple Storage Service. You can find a service by simply searching for it: type "S3", click on it, and there you go. S3 is like Dropbox, but much better suited for DevOps; actually, for a long while Dropbox used AWS S3 behind the scenes for storing files. But back to the course: since our website is static and requires no computing power or database, we will use AWS S3 to store the public files and serve them to the world from there over HTTP.
On AWS S3, files (which AWS calls objects) are stored in buckets, which you can think of as a kind of super container folder. You may notice that your AWS interface looks a bit different from mine: AWS is constantly improving the UI and things change over time, but the principles I'm showing here will stay the same. So let's go ahead and create our first bucket. First we have to give the bucket a name, and bucket names need to be unique, so I'll pick something that won't conflict with anyone else who may have created a bucket with the same name: my name, a dash, and a date. You'll also see the AWS region here; in my case I'll keep it exactly as it is. There are also a bunch of settings we won't look into right now, so at the end I'll click on "Create bucket".

The bucket has been successfully created, and we can go inside it, but at this point there are absolutely no files in it. We could of course manually upload or download files through the web interface, but this is not what we're trying to accomplish with DevOps; we want to automate this process. So I'll leave the bucket as it is, and in the upcoming lectures we'll find a way to interact with AWS S3 from GitLab.

So how can we interact with the AWS cloud, and particularly with AWS S3, in order to upload our website? If you remember, in the beginning we talked about how we typically interact with computers, and that is through a CLI, a command line interface. For the AWS cloud we also need a command line interface to be able to upload files, and fortunately AWS has already thought about that: there is an official AWS CLI we can use, along with very in-depth documentation for the different services. Throughout the course we'll navigate this documentation, because I want you to understand the process of interacting with such services and the importance of actually using the documentation, not just replicating what I do on screen.

To be able to use the AWS CLI inside our GitLab pipeline, we need to find a Docker image, and the best place to search for Docker images is Docker Hub. On Docker Hub I'll search for "aws cli" and find the amazon/aws-cli image under verified content; whenever possible, we always try to use verified, official Docker images. All I have to do is copy the image name, go back to the pipeline in the Web IDE, and create a completely new job. I'll call this job "deploy to s3", and we'll add an additional stage, because the deployment should only happen after the CI part is done: once everything has been tested and passes, we move on to the next stage, which is the deploy stage. So the job gets stage: deploy, and the image is the one I just copied, amazon/aws-cli. Now, by default this image will not work the way the node or alpine images did. This image has something called an entrypoint.
When the container starts, there's already a program configured to run inside the image, and that conflicts a bit with the way we use images in GitLab, so we need to override the entrypoint. For that reason I'll move the image to its own name property (note that this goes under image and is not a list), and below it write entrypoint, overriding it with square brackets and quotes.

Then comes the script part, and to start we'll just test that the tool works. Almost every tool can print its version, so the command is aws followed by --version. What we're interested in is AWS CLI version 2, the current major version (there's also a version 1, but we definitely want version 2). On Docker Hub you can go to the tags tab and pick a specific tag; if we don't specify one, we always get the latest version, but it's generally recommended, a best practice really, to pin a tag. To specify a tag, copy its name and append it to the image name after a colon. Of course, by the time you watch this there will be newer versions; typically anything that starts with 2 should be good enough.

Let's commit these changes to a branch and see how the pipeline looks. Right now I'm getting an error, and looking inside the pipeline, the problem is that the stage I've chosen, deploy, doesn't exist, so the pipeline wasn't executed. We still need to add deploy to the stages list, so that we have build, test, and deploy. I'll commit again, and now the entire pipeline has been successful. Looking at the logs of the deploy job, we used the aws-cli image and printed the version, so we can see exactly which version we're running. Since I pinned a tag, I'll always get this version; but even if you don't want to pin a tag, it's still good practice to log the version. If from one day to the next, or a month later, your pipeline suddenly stops working, you can at least compare previous logs with current ones and spot any differences in the tool versions you used.

I'll tell you right from the start that getting this upload to work will require a bit of learning and a few steps. We won't finish it in this lecture, and I'll get some errors, just as a heads-up, but we'll keep making progress toward that goal. So how do we upload a file? Let's continue editing the pipeline. Because we're going to need a few tries, go ahead and disable the "build website" and "test website" jobs; this ensures we only run the "deploy to s3" job for the moment, until we're ready and have tested that the upload works. I always like to make things as simple as possible: whenever I'm working with a new tool, I want to make sure I don't make any mistakes and can easily understand what's going on.
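At this point the new job might look roughly like this. A sketch only; the exact tag is an assumption, so pick any current 2.x tag from Docker Hub:

```yaml
stages:
  - build
  - test
  - deploy

deploy to s3:
  stage: deploy
  image:
    name: amazon/aws-cli:2.4.11   # pin a 2.x tag; the exact version will differ
    entrypoint: [""]              # override the image's entrypoint so GitLab can run our script
  script:
    - aws --version
```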
So for that reason I'll leave aside the complexity of building and testing the website and focus only on "deploy to s3". We already have aws --version; next we use aws (the name of the CLI tool), then the service name, s3, and then the subcommand cp, because we are copying something to S3. Now, what are we copying? Let's create a new file from scratch, just to make sure everything works properly. How do we create a file? I'll use echo, write "Hello S3", and redirect it into a file called test.txt. Now we know exactly what we're uploading. And where are we uploading it? To the bucket we created, so we need the bucket name. If you still have your bucket open, you can see the destination there; this is my destination, and you will have a different bucket name, of course. The format is s3:// followed by the bucket name, and then we specify the destination file name; I'll keep the same name, test.txt. So we're taking the file we created inside GitLab and uploading it to the bucket under the same name. Let's give it a try. Because the build and test stages are disabled, the pipeline only runs "deploy to s3", which makes the whole execution much faster. But it failed, so let's look at why. We're still printing the AWS version, and the last command executed is the copy. I cannot stress enough how important it is to go through the logs and try to understand what happened. It says: upload failed, trying to upload this file to this location, and the error is "Unable to locate credentials". The thing is, how is the AWS CLI supposed to know who we are? If this worked as it is right now, anyone could upload files to buckets belonging to other people, or delete objects from buckets belonging to other people. We need to tell the AWS CLI who we are, and that's what we'll do in the upcoming lectures, after a few stages of preparation.

Before that, there's something about this pipeline that I don't like: the bucket name is hard-coded, it may change later on, and I'd really like to make it configurable rather than keeping that information inline. At the beginning of the course we looked at defining variables, and I showed you how to define them within the pipeline; there's also another place where you can define variables. I'll copy the bucket name and go into the project settings: from your project, go to Settings on the left-hand side, select CI/CD, and in the middle you'll find Variables. I'll expand that section so we can add variables. Typically we store passwords or secret keys here, but I'm going to use the bucket name as an example, because there are some important features you need to know about. Click on "Add variable"; I'll name my variable AWS_S3_BUCKET, and the value will be exactly what I copied from the pipeline.

There are some additional settings here I want you to pay attention to: two flags. The first, "Protect variable", is typically enabled by default, and this flag alone is the single cause of a lot of confusion and wasted time for people just getting started with GitLab. If this flag is enabled, the variable is only available to protected branches, for example the main branch, which is protected. I'm currently on a feature branch I created for working with the AWS CLI, so if I leave this flag on and try to use the variable there, it will not work. The idea behind the flag is this: say you have a test environment and a production environment, with different credentials for each, for security reasons. You typically want protected variables for the main/master branch, which deploys to production; that way nobody has the credentials to accidentally deploy to production from a different branch. It's a security measure. Since we're working on a branch right now, we'll disable this flag; otherwise the variable would not be available in our pipeline runs.

The second flag, "Mask variable", is disabled by default. Masking is particularly useful for passwords: if you accidentally print a masked variable in your logs, it will not appear there; it will simply show up masked out with stars. It's fine to leave things like usernames unmasked, and our bucket name isn't really a secret, so we don't need to mask it; if we had a password here, masking would make sense. So for this variable I'll disable the protect flag and leave the mask flag disabled. Many people think that without the protect flag the variable is somehow public or unprotected; that's not the case, it's simply available to all branches inside the project, and at this stage that's totally fine. I'll change it a bit later on. I'll add the variable, you'll see it listed here, and of course if you need to change it you can do that from this page.

Going back to the pipeline: instead of the hard-coded value I'll write a dollar sign followed by AWS_S3_BUCKET. The name has to be exactly as you defined it, and don't forget the dollar sign in front, because that's what makes it a variable. We can run the pipeline again and see what changes.
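The deploy job's script at this point might look roughly like this (still a sketch; the job and variable names follow what we set up above):

```yaml
deploy to s3:
  stage: deploy
  image:
    name: amazon/aws-cli:2.4.11
    entrypoint: [""]
  script:
    - aws --version
    - echo "Hello S3" > test.txt
    # the bucket name comes from the CI/CD variable defined in Settings > CI/CD > Variables
    - aws s3 cp test.txt s3://$AWS_S3_BUCKET/test.txt
```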
Looking at the logs, we should notice something interesting. We have the same command, but now it contains the variable. The command that gets logged doesn't change; it still shows the variable name, so it may seem unresolved, but if you look at the error we're getting, you can see the variable has been resolved to the bucket name. An indication that a variable has not been resolved would be not seeing that text at all. I invite you to play around with the protected flag and run the pipeline again: you should see three forward slashes after s3:, with no bucket name in between.

Back to what we wanted to achieve, actually uploading the file: we're still getting "Unable to locate credentials". So what should we do? Put our AWS email and password somewhere in these variables so the CLI can locate them? You're getting pretty warm with that. Yes, we essentially have to provide credentials, but they won't be our regular email and password, because that would be highly insecure. Whenever we use a service, we try to grant only limited access, and in this case we only need S3. There's a service that manages exactly that: back in AWS, search for IAM, the identity and access management service AWS offers. Let's go inside and see what we can do. First of all, if you didn't activate multi-factor authentication, you'll see a warning here, and I definitely recommend you do it; this is just a short-lived testing account for me, which is why I haven't at this point.

From this service we can create new users; essentially we want a user that can connect to S3. From the left-hand side, go to Users and add a new user. We have to give the user a username; it can be anything, but I typically pick something that tells me why I created it, so I'll call this one "gitlab". Next we select an access type, and what we're interested in is programmatic access. Checking this enables an access key ID and secret for the AWS API, CLI, and so on, and we want exactly that for the AWS CLI. We don't need a password, and we don't need this user to have access to the management console like we do, so that's sufficient.

Next come the permissions, which define what the user is allowed to do. Permissions in AWS are, let's say, not an easy topic, so I'll go the most straightforward way and attach an existing policy. In the search bar I'll search for S3, and we get some predefined policies; a policy is essentially a set of rules we can apply to a user. The one we'll use is AmazonS3FullAccess: it gives this user access to everything that is AWS S3, so it can create buckets, delete files, and so on. That's a bit more than we actually need for this use case, but it keeps things simple; permissions are a topic of their own.
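For reference only: the same user setup could also be scripted with the AWS CLI instead of clicking through the console. This is a hypothetical equivalent of the steps above, reusing the "gitlab" user name we chose:

```bash
# Create the IAM user intended for GitLab
aws iam create-user --user-name gitlab

# Attach the predefined AmazonS3FullAccess managed policy
aws iam attach-user-policy \
  --user-name gitlab \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate programmatic credentials (prints the access key ID and secret access key)
aws iam create-access-key --user-name gitlab
```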
Let's go to the next page: we see the tags, and we don't need to add any. Then we review what we're trying to do, and we can create the user. We have now successfully created a new user, and AWS has created something for us: an access key ID and a secret access key. To put it in plain English, this is like a username and a password; notice that the secret isn't even displayed by default. So what do we do with this information? First, copy the access key ID, go back into GitLab, to Settings > CI/CD, expand Variables, and add a new variable. As soon as I start typing "AWS", some predefined names pop up, and one of them is AWS_ACCESS_KEY_ID. It has to be written exactly as you see it, and there's something different about this variable: the AWS CLI looks for exactly this name and picks it up automatically, without us doing anything else. I'll paste the value (this is per my account, so yours will be different), disable the protect flag because we want to use it on a branch as well, and add the variable.

Back in AWS I'll also reveal the secret access key. Of course I'll delete it right after recording, but I'm showing it so you know exactly what it looks like. It's also important that you don't include any extra spaces. I'll copy the secret, add another variable named AWS_SECRET_ACCESS_KEY, paste the value, disable the protect flag, and click "Add variable".

Finally, there's one more thing to configure, and that is our region. Add another variable, start typing "AWS", and you'll find AWS_DEFAULT_REGION. With this set, all the services we use will be in this region, and we won't need to specify it all the time; AWS will know exactly where this bucket or any other service is located. We can't simply write anything here; it has to be exactly the code AWS uses internally. What do I mean by that? Go back to the AWS console: under S3's recently visited entries (which makes grabbing this information easier), my bucket is in US East (N. Virginia), and what we're actually interested in is the region code next to it. I'll copy that code, paste it as the value, leave the variable unprotected, and add it.

One thing I forgot for the secret access key: I haven't masked it. It would be a good idea to go back, click edit, and enable the mask flag, because this is essentially a password, then update it. The variables overview then shows which variables are protected and which are masked. The values are hidden in the list, but you can click into a variable to inspect its value, copy a value without revealing it (which can be useful), or click "Reveal values" to display them all.
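The reason no pipeline change is needed: the AWS CLI reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION straight from the environment, and GitLab injects CI/CD variables as environment variables. If you ever want to double-check which identity a job is using, one optional sanity check (not part of the course pipeline, just an assumption-free CLI command you could add) is:

```yaml
  script:
    - aws --version
    - aws sts get-caller-identity   # prints the AWS account and IAM user these credentials belong to
```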
It seems we have everything in place now; at least we have the credentials, and because we named them the way we did, the AWS CLI will pick them up automatically. There's actually nothing we need to change in the pipeline, so we can simply rerun it: go to CI/CD > Pipelines, open the job that failed, and click "Retry" to start the exact same job once again. This already looks so much better: the job has succeeded, we don't see any errors, and the AWS CLI tells us which files have been uploaded. We can jump into S3 and look inside our bucket: there's our test.txt file, along with when it was modified and so on; we can download it if we need to. This has been the first major step in interacting with the AWS cloud.

So we've made progress and finally managed to upload this file, but if you remember, we have an entire website with a lot of files, and going file by file is not really the way to go. What I want is to make sure that whatever is in the build folder gets synced to our bucket. If you remember, in the beginning I mentioned the documentation, the reference for the AWS CLI; there's documentation for every service. We used aws s3 as the service name, so in the list of services you'll find s3 and can click on it. I'll be honest with you: in the beginning this documentation will look very scary, but if you take a bit of time and patience and go through it, that is the way to really master the CLI commands you're using. And this doesn't only apply to the AWS CLI; it applies to any tool out there. Reading the documentation and looking at all the parameters, everything you can do, is really super helpful. For s3 there's a lot of documentation, and at the end you'll see a list of available commands. We used cp to copy a single file, and it's definitely possible to copy an entire folder as well, but I want to show you a different command: sync. Syncing typically means that whatever we have on one side, we have on the other side as well. To ensure that with cp, we would first have to remove the remote folder: if we added or removed files locally, we don't want stale files lingering on S3 anymore. So using something like sync, whose description says it "syncs directories", makes a lot of sense: it ensures that whatever exists on one side is also available on the other.
So how do we use this? Let me demystify it and apply it to our existing pipeline. First of all, we'll give up on the test file we uploaded; that was just to verify our setup. We'll use aws s3 with the sync command. What are we syncing? The build folder: an entire folder instead of a single file. And where are we syncing it to? The bucket; this time we remove the file-name part of the destination, because we're putting the files directly into the bucket root. Additionally, we'll add a flag: looking through the available options, one of them is --delete. It ensures that files which existed in previous builds but were removed during our build process are also deleted during the sync. If I had a file in the build folder, synced it in the last build, and later removed it, I want it removed from S3 as well, so I'll add --delete. This should essentially be enough to upload our build folder. To get it to run, of course, we have to re-enable our previous jobs, so build and test are enabled again; I'll commit the changes and see whether this command worked.

The pipeline is now successful, and we're most interested in the last job we executed. Looking at the logs, we see exactly which files we're uploading: uploading this, uploading that, all the files inside the build folder. What's also important to notice is the "delete" entry: test.txt is deleted because it no longer exists in the build folder (it actually never existed there), and sync detects that and says, okay, if it doesn't exist in the build folder, there's no reason for it to stay on S3, and removes it. Let's refresh the S3 page and take a look: we can now see all the files we're actually interested in, all uploaded here. We've made a very important step toward hosting our website in the AWS cloud.
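The deploy job's script is now essentially one line. A sketch, with the same assumed image tag as before:

```yaml
deploy to s3:
  stage: deploy
  image:
    name: amazon/aws-cli:2.4.11
    entrypoint: [""]
  script:
    # mirror the build folder into the bucket root; --delete removes remote
    # files that no longer exist locally
    - aws s3 sync build s3://$AWS_S3_BUCKET --delete
```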
Currently, though, the files we've uploaded are not publicly available: whenever we create a bucket, it is private to us by default, and nobody external has access to it. That's actually the normal use case; usually you don't want the files you put there to be available to everyone. But of course there are use cases where you do, and quite a lot of companies use S3 for storing files they offer for download, because it's so much cheaper than hosting them on their own servers. To make this bucket publicly accessible, we need to change a few things, starting with Properties: what we are definitely interested in is enabling static website hosting. So, in my bucket, under Properties, right at the end, there's "Static website hosting". I'll click edit, enable it, and set the hosting type to "Host a static website". There are some settings to fill in: websites typically have an index page, which is a start page, and an error page. For our application both are the same file, index.html; if you look inside the bucket, you should see it there, so that's exactly what we enter for both the index and the error document. I'll save the changes, and back under Properties, in the static hosting section, you'll see that we now have an address: our website is hosted on AWS. We could also get a domain and point it at this address, but for what we're trying to do right now, this is good enough. Clicking the address initially shows an error; don't worry about it, there are still a few things to configure, I just wanted to point out the address and where you can view it.

Let's see which other settings we need to make. Under Permissions, the first thing you'll see is "Access: Bucket and objects not public": whatever we have in the bucket is not public, which is why we can't reach it even with static website hosting enabled. To enable public access, click edit, disable "Block all public access", and save the changes. Because this is a major thing, AWS really wants to make sure we know what we're doing, so we type "confirm". Going back to the website and refreshing, it still displays the same error page, so something is still not working. What we need in addition to the public-access change is a bucket policy: essentially a policy for the objects inside the bucket. I don't want to get too deep into the details, but we need to write this policy, so click edit (there's also a policy generator). What you see here is JSON, the format the policy is written in, and there are a few things we need to change. We add an action: search for S3, select it, and the permission we're looking for is GetObject, so I'll pick that. There are also a few other things to change, and I'll make these edits directly in the text; you'll also find the template in the course notes, just in case something doesn't go well. I'll give the statement a name, "PublicRead", just an identifier for this policy. The principal needs to go between double quotes, and I'll use a star. The action is s3:GetObject, the effect is Allow, and we also need to specify the resource, which is our bucket: I'll copy the bucket's ARN, put it in quotes as well, and, very important, append a forward slash and a star, which means that everything in this bucket may be retrieved via GetObject. That's our public-read policy. I agree it's not so easy for beginners, but let's hope we haven't made any errors and that it works as expected. I'll click "Save changes" to apply the policy, and you'll see warnings all over the place: the bucket is marked in red as "Publicly accessible". AWS is really trying to tell you: hey, watch out, if this is not what you intended, fix it, so really make sure you know what you're doing. For us it's fine; this is exactly what we wanted. And now, by refreshing the page, we get the website we created.
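For reference, the finished policy looks roughly like this; "my-bucket-name" is a placeholder, so substitute your own bucket name in the ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
```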
Congratulations, really: this is absolutely amazing. We created a pipeline that takes every commit we make and deploys it to the AWS cloud. It goes through all the stages, and we only have to make the change; the entire process is automated. If you managed to follow along up to this point: wow, I'm sure you have learned so much. And trust me, we're just getting started; there's still so much more to learn, and I'm super excited about the upcoming steps.

Now let's go back to the pipeline we've designed. I'm still in an open merge request, and looking at the pipeline, it has all three stages: build, test, and deploy. If you think about it, this really doesn't make a lot of sense, and what doesn't make sense is deploying to S3 from a merge request, from a branch. If we think of S3 as our production server, our production environment, it means that anyone who opens a merge request to try out some small change will, if build and test pass, automatically deploy to production. That's not what we want. Inside a merge request, or as long as we are on a branch, we just want to run build and test, essentially simulating the execution on the main branch; only when we merge do we want to run the deploy, the deploy to production. To achieve this, we still need to make a few changes to the pipeline, specifically in the "deploy to s3" job. How do we exclude this job from pipelines running on a branch and ensure it only runs on the main branch? GitLab has a feature called rules. Rules allow you to create really complex conditions, and I won't get into those, but we'll use a relatively simple condition that checks whether we're on the main branch or not. In the "deploy to s3" job I'll write rules, a completely new keyword, under which we can define a list of rules. We'll add only one, so I start a list item with if: followed by a condition. In a condition we typically check whether something equals or does not equal something else; here we want to check whether the branch we're currently on equals (==, two equal signs) the main branch: if we're on main, and only then, run this job. We don't want to hard-code the current branch name: we can't know in advance on which branch this condition will be evaluated (in my case it's something like feature/deploy-to-s3), so checking against a fixed name makes no sense; it needs to be dynamic, which means variables. Luckily, GitLab comes with some predefined variables, and one of them is CI_COMMIT_REF_NAME: searching the docs for "ref name", you'll see it gives us dynamically the branch or tag name for which the project is built. So I'll write $CI_COMMIT_REF_NAME as the left-hand side of the condition, and it will evaluate to our current branch.
With any of these predefined variables, if you're not sure what they contain or what they look like, simply echo them and keep that in your pipeline for debugging purposes until you're familiar with the values. I'll remove mine because I don't need it, but I definitely recommend using echo to inspect the different values you're working with. On a different branch this would be something else; if it equals main, the job runs, otherwise it is excluded from the pipeline. Still, hard-coding "main" on the right-hand side is also not something we like doing. Yes, we could keep it, but there's a variable for this too: CI_DEFAULT_BRANCH, which dynamically gives us the name of the default branch. If later on, for whatever reason, we decide to switch from master to main, or from main to something else, we won't need to worry about these pipelines: the variable updates automatically, and we can use it directly. So I'll use $CI_DEFAULT_BRANCH on the right-hand side, and now everything is dynamic: this rule ensures the job is part of the pipeline only when the current branch equals the default branch, in our case main; in all other pipelines the job is excluded.
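The resulting rule, a short sketch of just the relevant part of the job:

```yaml
deploy to s3:
  stage: deploy
  rules:
    # run this job only on the default branch (main); in any other
    # branch or merge request pipeline, the job is excluded entirely
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
```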
I'll commit this and take a look at the pipelines afterwards. Going to CI/CD > Pipelines, our branch pipeline now contains only two stages, build and test; the deploy job was completely removed. Inspecting the merge request, I see that its pipeline has been updated as well and contains only build and test. At this point I'd say I'm pretty confident this works properly, so I'll click merge and let the main pipeline run. Under CI/CD > Pipelines you'll now see the main-branch pipeline starting, and this one contains the three jobs we expect: build, test, and deploy. So the main pipeline is also working, and we've deployed to S3 again.

But how do we know our website is still working? We could go to the address and hit refresh, but again, that's not the point of automation; we want the entire process automated. If "is the deployed website still working?" is a worry we have, it's probably time to add another stage and test, after the deployment, that everything works on the website, or at least as much as possible. So let's open the IDE and add another stage. We already have build, test, and deploy, so the next stage should be, let's call it, "post deploy". Then it's time to add another job, which I'll call "production tests", making sure its stage is post deploy. What I'm proposing is simply running a curl command, pretty similar to what we did before, so I'll copy that; it has to go under the script block, of course.

There are a few things we need. First, a Docker image that has curl. One option would be to search Docker Hub again, but essentially any image with curl would be sufficient. Searching for curl on Docker Hub, what I've used in the past is curlimages/curl, though one of the verified-content images is probably just as good. The address of the image is what we need, so I'll set it as the image (removing the "docker pull" prefix). Because curl is such a generic tool and what we're doing is pretty basic, I don't need to specify a particular version; I'm pretty sure it's not going to change very soon. The other thing is the address where we deployed, which is available in S3: back in the bucket, under Properties, right at the end, there's the website address. I could paste it directly into the job, but we just had a discussion about not scattering things that could change later all over the pipeline, so let's define a variable again. This time I'll define the variable within the pipeline itself, which is also totally fine and another way of doing things: I add a variables block and call the variable APP_BASE_URL (you're free to name it as you wish), with a colon and then the full address as its value. Then, instead of a hard-coded URL, the script becomes curl $APP_BASE_URL, piped into a grep for "React App", because that's what we actually have in the body of the index.html we're fetching. This should be enough to test that what we have on S3 is still reachable, and that at least this text is still served. Additionally, we need one more configuration, because as it stands this job would also run on branches: I'll copy the same rule we put on "deploy to s3", since we want these jobs only in the main pipeline. I'll commit the changes and merge them into the main branch; I'm pretty confident this pipeline will work without issues, so no additional review is needed at this point. I'll use "merge when pipeline succeeds" and grab a coffee until it's merged.

A few minutes later, in the merge request, we see a few things. First, the pipeline for our branch with two stages, build and test; that's it. After the merge, the pipeline for the main branch starts, with four stages: build, test, deploy, and the newly added post deploy, which contains the production tests. Let's look at what happened there: curl downloaded the website and passed the output to grep, grep searched for "React App" in the text, and it was found, so everything worked out successfully. We know the website is still there and still working; perfect.
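Put together, the new pieces might look like this; the APP_BASE_URL value is a placeholder, so use the website endpoint from your own bucket's Properties tab:

```yaml
variables:
  # hypothetical value; copy the endpoint from Properties > Static website hosting
  APP_BASE_URL: http://my-bucket-name.s3-website-us-east-1.amazonaws.com

stages:
  - build
  - test
  - deploy
  - post deploy

production tests:
  image: curlimages/curl
  stage: post deploy
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - curl $APP_BASE_URL | grep "React App"
```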
So let's take a minute and look at what we did so far. This is our main pipeline, the one that now deploys to AWS S3. The build and the test parts are what we essentially call continuous integration; they also run in merge requests, and here on main we recheck that the build still works and the tests still pass. The second part is the CD in CI/CD. Now, CD can mean continuous deployment or continuous delivery, and I'll explain in a second what that means. Right now we're effectively doing continuous deployment, in the sense that every commit that lands on the main branch is actually deployed to production: we've automated the deployment to S3, and we said that our S3 bucket hosting the website is essentially our production environment, which the whole world can see.

This is a very simplistic view of CI/CD, though. Quite often pipelines also have a staging or pre-production environment. If we make changes to our deployment to S3 and something stops working, we're not testing that anywhere first, and the main branch may start producing errors; we come back to the same problems we had before CI, when nobody else can work on the project anymore. So it makes sense to add another environment before production: if there are issues with the deployment and the main pipeline breaks there, at least the production environment is unaffected. Right now we just deploy everything and test afterwards.

What is a staging environment? A staging environment is essentially a non-production, usually non-public environment that is very close to the actual production environment; we want to keep things as similar to production as possible. Quite often automation is used to create these environments and to ensure they're really identical. We haven't done that; we created the S3 bucket manually and added automation afterwards, but ideally we would create the entire infrastructure automatically. The idea is to add staging as a pre-production environment: we try out our deployment on pre-production before we go to production, so in case it fails, the production environment is unaffected.

The two main concepts I want you to be aware of are continuous deployment and continuous delivery. With continuous deployment, as I said, every commit that goes through the pipeline also lands on the production server, in the production environment. With continuous delivery, we still use automation, and every successful merge request again triggers the main pipeline, which leads to an automatic deployment to the staging environment; however, we don't go directly to production. There's a button you click in order to promote a build from the pre-production environment to the production environment. Those are the differences between continuous delivery and continuous deployment; in upcoming lectures we'll get a bit deeper into them and you'll better understand what they really mean.

Now it's time for you to practice a bit more, and at this point I have an assignment for you: build a staging environment, following essentially the same process we used when we created the production environment. I also want you to make sure you write some tests that ensure the environment is working as expected. Please pause the video and take this as an opportunity to practice more, both with AWS and with building the pipeline.

I hope you had a good assignment and that everything is working properly. In this video I want to show you how I would do something similar. First of all, I'll start with creating the bucket: I'll copy the name I already have, click "Create bucket", and use essentially the same name with "staging" appended to it.
In the beginning I'll leave all settings as they are, and in the upcoming steps I'll make this bucket public: the same steps as before, so I'm not going to go over them again. After a few clicks, the staging bucket is also public and has website hosting enabled, which means you can go inside the bucket, to Properties, and right at the bottom you'll see the address, which we'll need later on.

First of all, the bucket name itself: I'll copy it and save it as a variable in GitLab, under Settings > CI/CD > Variables. What you'll notice here is that we already have an AWS_S3_BUCKET variable, which is of course a bit inconvenient at this point; we'll need to figure out exactly how to manage this. For now I'll add a new variable and call it AWS_S3_BUCKET_STAGING, adding the suffix to make it different. This time we can protect the variable, because we don't want to deploy from a branch anymore, so it definitely makes sense to have it protected; masking it doesn't make sense at this point. All right, I'll copy the variable name so I don't forget it, then go into the project, open the Web IDE, and start making changes to the pipeline.

First of all, because this is a pre-production environment, we need to define another stage before we deploy to production; let's call it "deploy staging". And of course we need to define a new job. Most of this job is pretty similar to what we already have in "deploy to s3", and also to the production tests, so it makes sense to copy them and adapt. I'll call the new job "deploy to staging", and we can rename the old one "deploy to production", just for more clarity about where we're deploying and what we're doing. The new job needs the right stage, deploy staging; apart from that we use the same image and we still want to run only on the main branch, and the only thing to adapt is the bucket: AWS_S3_BUCKET_STAGING. The copied tests can be called "staging tests"; you'll see the editor complaining about the duplicate key, which is a good thing because it shows exactly what we still need to fix. The staging tests also need their own base URL, so that's something else to consider: I'll add APP_BASE_URL_STAGING and paste the staging URL from AWS. You can notice that we now have two different URLs.

Let's double-check everything. We added a new stage, "deploy staging", before the deploy stage, which we can even rename "deploy production" for even more clarity; we just need to make sure we adapt it everywhere. We also want testing after the staging deploy, so we need something like a "test staging" stage, and the post-deploy stage can become "test production". So: we deploy to staging, we test staging, and if both succeed, we deploy to production and then test production again. Let's make sure we have the right stage names everywhere, otherwise we're going to get errors.
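The pipeline at this point, sketched with the bucket variables and URLs defined above; only the staging half is shown in full, since the production jobs are identical except for the variables:

```yaml
stages:
  - build
  - test
  - deploy staging
  - test staging
  - deploy production
  - test production

deploy to staging:
  stage: deploy staging
  image:
    name: amazon/aws-cli:2.4.11
    entrypoint: [""]
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - aws s3 sync build s3://$AWS_S3_BUCKET_STAGING --delete

staging tests:
  image: curlimages/curl
  stage: test staging
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - curl $APP_BASE_URL_STAGING | grep "React App"

# "deploy to production" and "production tests" mirror these two jobs,
# using $AWS_S3_BUCKET and $APP_BASE_URL instead
```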
All of a sudden our pipeline got a bit bigger, but no worries. One last check: deploy staging uses the deploy staging stage and the staging bucket; the staging tests use the staging URL; then come deploy production and test production. We had the right bucket name but not yet the right URL for staging, so let's grab that as well. All right, I'll commit these changes and see how they work on the main branch.

A few minutes later, after the main pipeline has completed, we see a whole series of stages: after the tests, we deploy to staging, test staging, deploy to production, and test production. Of course we could combine some of these: our tests are so simple that nothing prohibits us from putting the curl check inside the deploy job itself, which would save us some stages. But I wanted to demonstrate what a longer pipeline looks like and what the stages mean. Most importantly: if something happens in the staging deploy, the pipeline breaks there, the production environment is unaffected, and we have time to fix any problems in our deployment.

Looking at the pipeline again, though, the staging environment has really created a lot of duplication, and all these variables: we now have AWS_S3_BUCKET and its STAGING counterpart, plus APP_BASE_URL and the staging URL. We obviously have two environments, but we haven't really figured out how to manage them properly. Luckily, GitLab offers functionality exactly for managing environments. Inside the project, on the left-hand side, you'll find Deployments, and under it the Environments option. What is an environment? Staging is an environment; production is an environment; wherever we're deploying something, that is an environment. It really makes sense to define these environments somewhere and to work with the concept of an environment, instead of fiddling around with so many variables; that's really not the best way to do it. So I'll create a new environment and call it "production", and I'll set its URL, which I'll grab from the pipeline (let's remember, this is the production URL). Going back to Environments, I'll create a second one named "staging", again copying its URL from the pipeline. What GitLab will also do is keep track of deployments to the different environments. Currently we don't have any deployments recorded, but we can already open the environments directly from here: in case we forget where our environments are, we can easily click the "Open environment" link. Especially for non-technical people working with GitLab, it's easier to see exactly where the staging environment is and what its URL is; they don't have to ask someone, they can go directly to the respective environment and see that respective version. A very, very useful feature.

But what does this do for our pipeline? Well, I'll tell you what it does: we get to remove things. First of all, the URL variables are out; I'm not using them anymore.
But what does it do for our pipeline? I'll tell you what: we can start removing things. First of all, the staging base-URL variable is out — I'm not using it anymore. Second, I'll go back to Settings > CI/CD, expand the variables, and edit AWS_S3_BUCKET. This one is essentially our production bucket, and we can now give it a scope: we tell GitLab this variable is associated with the production environment and nothing else. And because we only use it on main, we'll also protect it and save. The same goes for the other variable: we don't need the STAGING suffix anymore, so we rename it to plain AWS_S3_BUCKET and select the staging environment scope. Now we have two variables that share the same name but belong to different environments. The idea is to adapt the pipeline accordingly, starting with the staging deployment: the bucket reference loses its STAGING suffix, and additionally we tell GitLab that "deploy to staging" is associated with the staging environment by adding `environment: staging`. There are still a few things I'd like to change. The whole pipeline is too long: we still have the staging tests and production tests as separate jobs, and since each is just one curl command, we can move them. I'll remove the test stage entirely and instead add the curl command to "deploy to staging", right after the deployment — pretty similar to what we'll do for production. But as you've probably noticed, $APP_BASE_URL doesn't exist anymore; we removed those variables, so we need another way to get the environment URL. Luckily, GitLab to the rescue again: among the predefined variables, searching for "environment" turns up CI_ENVIRONMENT_NAME, CI_ENVIRONMENT_SLUG, and CI_ENVIRONMENT_URL. The URL is what we're interested in, so I'll copy it into the curl command and send the request directly to $CI_ENVIRONMENT_URL. The same goes for "deploy to production": I'll add the curl there, the separate production tests no longer make sense, and the extra "test staging" and "test production" stages aren't needed either. So now we have a much simpler pipeline that still achieves the same thing — and, most importantly, it uses environments. There's still an error in here somewhere, but I'll just commit these changes, let the branch pipeline run, merge into main, and look at the main pipeline to see which errors remain. And indeed, the main pipeline shows something interesting: "deploy to staging" has passed and works perfectly — you can see `curl $CI_ENVIRONMENT_URL` fetching the page — but "deploy to production" is suddenly complaining: "Parameter validation failed: Invalid bucket name" for $AWS_S3_BUCKET. What's going on? Somehow we did not associate this job with an environment, or the environment isn't correct, so we need to go back and see why this job doesn't have access to that variable.
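For reference, here's roughly where the staging job ends up after these changes — and note that this sketch gives only the staging job an `environment:`, which is exactly the asymmetry that breaks the production job (deployment details assumed as before):

```yaml
deploy to staging:
  stage: deploy staging
  environment: staging              # exposes the staging-scoped variables to this job
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  script:
    - aws s3 sync ./build s3://$AWS_S3_BUCKET --delete   # same variable name, scoped per environment
    - curl $CI_ENVIRONMENT_URL | grep "React App"        # URL comes from the environment definition
```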
Taking another look at the job configuration: we have "deploy to production", but as you can notice, I haven't defined an environment for it — I defined one for "deploy to staging" only. Looking at the variables, AWS_S3_BUCKET is now scoped to production, and because this job says nothing about production, that variable is not exposed to it. To solve this, we also add `environment: production` to the production job. Unfortunately, this time we have to go through all the stages again — the merge request, the commit to main — but a few minutes later the entire pipeline succeeds: we've deployed to staging and to production, everything seems to work, and the job logs confirm there were no issues. We can also go to Deployments > Environments and see the staging and production environments, open them easily, and see what was deployed and when. GitLab really keeps track of the different environments and the deployments that took place: production shows one deployment, staging shows the two we committed, so we're keeping a record of what's going on in each environment. You know what, though — I'm still not super happy with this pipeline. Generally, when I see a lot of repetition, it makes me wonder if there's a better way. Looking at "deploy to staging" and "deploy to production", now that we use all these variables, the jobs are almost identical: the only differences are the stage, the job name, and the environment; the rest of the configuration is the same. And yes, we can simplify this even more by reusing job configuration. I'll define a new job and call it ".deploy" — a special kind of job with a dot in front of it. As you may remember, we've used the dot notation before to disable jobs; here it lets us keep a shared configuration that never runs on its own. It holds only the parts the jobs have in common — no stage, no environment — and it doesn't even have to be a valid job configuration by itself. Then, in the real jobs, we keep only what differs: the job name, the stage, and the environment, and for the rest we write `extends: .deploy`. The `extends` keyword pulls in the template — that leading dot is very important, so don't miss it. The same goes for "deploy to production": remove everything except the stage and environment, keep the `extends`, and make sure it's properly indented. So "deploy to staging" and "deploy to production" are now two simple jobs, and the deployment logic exists in exactly one place. This also gives us peace of mind: if we change the deployment part, we change it only in the template, and if we make a mistake there, chances are we'll catch it before it reaches production.
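As a sketch, the refactored configuration with the hidden template looks something like this (the script and rules details are assumptions carried over from earlier):

```yaml
.deploy:                             # the leading dot makes this a hidden job: it never runs on its own
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  script:
    - aws s3 sync ./build s3://$AWS_S3_BUCKET --delete
    - curl $CI_ENVIRONMENT_URL | grep "React App"

deploy to staging:
  stage: deploy staging
  environment: staging
  extends: .deploy                   # pulls in everything defined under .deploy

deploy to production:
  stage: deploy production
  environment: production
  extends: .deploy
```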
Let's commit this, give it a try, and see how it works — it should behave exactly as before. And after a few minutes, the main pipeline confirms it: it looks just like before, which is exactly what we expected. We didn't want to see anything new; we just wanted the pipeline working, but now with a much simpler configuration. And now, I feel, it's time for another assignment. Here's my idea. We have this pipeline, and we're testing with a curl command that a specific text is on the website — that's fine, it ensures the site is up. But we haven't actually changed the website itself; we haven't added or removed any text. How can we be sure that what we actually built lands on the staging and production environments, and that we're not looking at a cache or something old? Maybe the deployment isn't even working, we're getting the older version, and we just think everything is fine. So here's the idea: what if we add another file to our build — let's call it version.html — and inside it we put a build number, something different with every build? It would increment: one, two, three, four, and so on, so with every build we could identify the build number and check which version has been deployed to staging and production. How about that? To get you started, I'll show you how to get this dynamic build information — that's the part you may not know how to do yet — but the rest of the assignment you'll have to figure out on your own. Trust me, you already know all the concepts needed to implement something this simple. I'll define a variable and call it APP_VERSION — let's say this is our application version, something like 12, 13, 14, and so on. When we think about something dynamic, we have to think back to the list of predefined variables that GitLab offers. On that list, searching for "pipeline ID" turns up a few pipeline IDs we can use — variables that change all the time, injected by GitLab, usable in our jobs. There's an instance-level ID, which is typically a very large number because there are a lot of pipelines on a GitLab instance, but there's also a project-level ID, which is something we can easily relate to, and we'll see it increment every time. I'll take that variable and assign it to APP_VERSION. I'm redefining it to give it some meaning — I could have used the predefined variable directly, and that's a perfectly valid solution, but with APP_VERSION it's very clear that this is the application version we're using, not just some random ID. That's it — I'll let you do the rest of the assignment.
Just to recap: somewhere in the build, add a new file called version.html, put $APP_VERSION inside it, and then, when testing the deployments on staging and production, add a curl command that checks that this application version is actually live on that environment. Okay — I hope you managed to solve this on your own; I feel I gave you quite a lot of hints, but just to be sure, here's how I would solve it. In the "build website" job, apart from everything we're already doing, I want to create the file. First, the path: it goes inside the build folder, since that's what we deploy, and the name is version.html. To get the application version into the file, I'll use echo: print $APP_VERSION and redirect the output into build/version.html. That's enough to create the file inside the build folder, which already exists because `yarn build` created it — and because the value is dynamic, the file will be different with every build. The next step belongs to the deploy template, so I'm working directly in the template: we can duplicate the curl command that goes to the environment URL. I want to be sure the URL doesn't already end with a forward slash, so a quick check under Deployments > Environments: editing staging shows no trailing slash, and checking production confirms it's the same. So in the configuration I'll append /version.html and grep for the application version — and since the application version is a plain number, we don't even need quotes: `grep $APP_VERSION` will do. Already in the merge request, as soon as the build completes, I'll open the "build website" job and look at the artifacts to see whether they contain version.html. There it is, with a non-zero size — a good sign. We can download all the artifacts or view a single file; clicking on it opens the page, and this one shows build number 35. It's pretty small on screen, but you can see it — that's the content of the file, so it's already looking good.
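Pulled together, the relevant pieces of the solution look roughly like this — a sketch, assuming a yarn-based build job like the one used earlier in the course:

```yaml
variables:
  APP_VERSION: $CI_PIPELINE_IID      # project-level pipeline counter: 1, 2, 3, ...

build website:
  stage: build
  script:
    - yarn install
    - yarn build
    - echo $APP_VERSION > build/version.html   # bake the build number into the artifact
  artifacts:
    paths:
      - build

# added to the script of the .deploy template:
#   - curl $CI_ENVIRONMENT_URL/version.html | grep $APP_VERSION
```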
Once this merge request is completed, I'll also check the main branch — and the main pipeline works too, which makes me very happy, because I now have much more confidence in this pipeline: if I make a change, that change actually gets deployed to staging and production, and we have an additional check in place to ensure it. So let's recap a bit. The first part was CI, continuous integration; the second part is CD, continuous deployment. We now have a more realistic continuous deployment pipeline: we run tests, deploy to staging first, make sure everything works there, and only after that deploy to production. But what is a continuous delivery pipeline? This is exactly what I wanted to show you. A continuous delivery pipeline is simply a pipeline where we don't automatically deploy to production: we add a button, and only when we're sure we really want that change in production do we click it. Let me show you how — I assure you it's super easy. In the pipeline configuration, the affected job is "deploy to production", and we add one condition: `when: manual` (it's "manual", not "manually"). This tells GitLab we only want to run this job manually — it requires manual intervention. I'll commit the changes and look at the final pipeline. Notice that "deploy to production" now looks a bit different from "deploy to staging": clicking on it shows that it's a manual action, with an additional button. If we go to Deployments > Environments, we can see what's on each environment. Opening the staging environment and requesting version.html shows version 40. The production environment looks exactly the same, but its version.html shows a different, older version — staging and production now hold different versions, because the pipeline hasn't deployed to production yet. To deploy to production, we have to click the button, and only then does the "deploy to production" job start. This is essentially the difference between continuous deployment and continuous delivery: what we have now is a continuous delivery pipeline. We continuously build software packages and deploy them to staging, but we never deploy to production without that manual intervention. For some organizations this is mandatory — which is why I'm showing it — and with some legacy systems you simply cannot deploy everything without checking a few things first. Now that "deploy to production" has completed, refreshing the production website shows version 40, and staging shows version 40 as well.
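For reference, the one-keyword change that turned this into a continuous delivery pipeline:

```yaml
deploy to production:
  stage: deploy production
  environment: production
  extends: .deploy
  when: manual        # the pipeline pauses here until someone presses the play button
```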
So now both staging and production have the same version, and I hope the difference between continuous delivery — what we just did — and continuous deployment — where every commit that lands on the main branch gets deployed to production — makes more sense now. Hey, how are things going? Are you liking the course so far? Are you following along? Let me know by leaving a comment below, or send me a message on Twitter, LinkedIn, or any social platform where you can find me — I'd love to hear from you and know how you're using this course. We're at the end of this unit, so I'm going to grab a coffee, and I'll see you in a bit. So far we've worked with a static website and deployed it to AWS — probably one of the easiest scenarios involving GitLab CI and AWS. However, modern applications tend to be more complex, and most of them use Docker nowadays. So in this section we'll dockerize our website, and instead of copying files to AWS, we'll deploy an application that runs in a Docker container. To do that, we'll build a Docker image as part of the build process, store it in the GitLab Container Registry, and deploy it to an AWS service called Elastic Beanstalk. If you're eager to learn more, let's jump right into it. When we use a cloud provider like AWS, we can rent virtual machines with dedicated CPU, memory, and disk storage, running any operating system we desire — but that also means we're in charge of managing the machine: keeping it secure, keeping all its software updated. That's often too much overhead, especially for some types of applications. There is a way to take away this complexity and focus only on the application we want to deploy: AWS Elastic Beanstalk is a service that lets us deploy an application in the AWS cloud without having to worry about the virtual server that runs it. It's a great way to reduce complexity, and probably one of the easiest ways to deploy an application on AWS. By default, Elastic Beanstalk can run Python, Java, Node.js, PHP, and many other types of applications, but it can also run Docker containers, which gives us a lot of flexibility. We'll be running a web server application that serves our simple website's files, so this time, instead of just uploading files to AWS, we're providing the entire application, self-contained in a Docker container. I'm sure you'll be amazed by how easy it is. In the background there's still a virtual machine, but we don't need to worry about managing it. At this point, though, I need to warn you about potential costs: running these servers for a few hours or days will most likely be free or cost only cents, but if you let the service run for a month, you may get unexpected charges on your card. Even if you're not actively using a service, once created it consumes cloud resources — so stop any services you're no longer using, and find a way to set a reminder so you don't forget about them. With that said, let's create an Elastic Beanstalk application. In the AWS console, the first step is to search for Elastic Beanstalk — typing "eb" brings it up in the results. Since I have no applications yet, I get the getting-started screen, so I'll click "Create application" and call the application "my website". You don't need to
include any application tags. Under Platform there are several supported platforms, but the one we're interested in is Docker, so that we can deploy essentially anything we want. I'll select Docker, leave the defaults as they are, and start with a sample application: we won't upload any code yet; we'll let Elastic Beanstalk create the instance and the application, and afterwards we'll add our own application on top of that. I'll click "Create application". It typically takes a few minutes, and in the end you should see something like this. So what happened? We created an application, and Elastic Beanstalk initialized the sample application it provides. To actually run that application, the setup wizard also created an environment — applications are like an umbrella for environments. Looking at Environments, there's one named something like "Mywebsite-env" (the exact name is generated from the application name); it belongs to the "my website" application, and you can see when it was created, which URL it's available under, which platform it uses, and so on. Clicking into the environment, there's a link — and opening it shows the sample application that was deployed. It's just there to tell you that everything is working properly and the entire setup completed without issues. Now, what actually happened in the background? Go to Services and open EC2 — Elastic Compute Cloud, the service for virtual servers. We never created a server ourselves, but under Instances there's one running instance, named after the website environment: this is the virtual server that's actually running our application. Additionally, under S3 we now have an extra bucket: besides the buckets we used for hosting the static website, Elastic Beanstalk has created one of its own. So what Elastic Beanstalk really did was create the infrastructure required to run the application, without us having to worry about it — which is also why the whole thing took a few minutes. Now let's understand how we can deploy something of our own. Because we're using Docker, we need to provide a manifest file: a file that describes the application we're trying to deploy. Inside the project, under templates, there's a file called Dockerrun.aws.public.json — that's the manifest I'm talking about. It tells AWS which container we want to run (having selected the Docker platform, we can only run Docker containers there), and in this case it's a public image named nginx, a web server. What we want to try is to use this configuration — this file — to deploy an application to AWS and make sure the deployment process works. This is something we'll do manually at this point, just to confirm everything works properly.
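For orientation, a single-container Dockerrun file for a public image looks roughly like this. This is a sketch of the version-1 format; the exact file shipped in the course repository may differ in details:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "nginx",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 80
    }
  ]
}
```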
So go ahead and download this file, then go back to AWS and open Elastic Beanstalk. Inside the environment we can upload a new version of the application: right in the middle you'll see "Running version" — currently the sample application — and next to it the "Upload and deploy" option. I'll select the JSON file and add a version label, say "sample application version 1" — that's totally fine — and click Deploy. Elastic Beanstalk takes the file and starts a deployment, updating the environment. It takes a few minutes, but in the end we want the health status to be OK and the new application to be deployed and available at the environment's address. And now the status is OK — health OK, everything working. Refreshing the page shows the running version is "sample application version 1", exactly what we deployed, and opening the address shows "Welcome to nginx!" — the welcome page of the nginx server, which we deployed simply by giving Elastic Beanstalk a JSON file describing which container to use. So to deploy our actual website, we need to create a Docker image, provide this JSON file, and of course automate everything — which is what we'll do in the upcoming lectures. How do we create a Docker image with our website? A Docker image is created by following a set of instructions, something like a recipe, stored in a file called a Dockerfile. Let's create one: from the Web IDE I'll create a new file, and you'll see "Dockerfile" among the name suggestions. So which instructions go inside? First of all, we start with a base image — an image we add something on top of. In our case, since we want a web server serving our files, we'll start with the nginx image — exactly the same image we used to test our deployment to Elastic Beanstalk — so I'll write `FROM nginx`. Additionally, I highly recommend pinning a version with a specific tag. To know which tag to choose, I'll go to Docker Hub, search for nginx, open the verified content — the official nginx image — look at the tags, and search for "alpine", because Alpine generally gives us a very small Docker image. I'll copy a specific tag and append it in the editor after a colon; be careful to write it exactly as `nginx:<tag>`. Starting from this base image, we already have the application, essentially: the web server. The next step is to add our files, which live in the build folder, onto this web server, and we do that with the COPY instruction. What are we copying? The build folder. Where to? By default, nginx serves files from the path /usr/share/nginx/html, so we copy the contents of the build folder into that html folder.
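The resulting Dockerfile is just these two instructions. The tag below is a placeholder — pin whichever alpine tag you copied from Docker Hub:

```dockerfile
# base image: the official nginx web server, pinned to a specific alpine tag
FROM nginx:1.21.3-alpine

# put the built website where nginx serves files from by default
COPY build /usr/share/nginx/html
```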
That's everything we need in order to build our Docker image — only these instructions: take the nginx base image at a specific version, because we're pinning a tag, then copy everything in the build folder into the html folder. Of course, just creating the Dockerfile doesn't make anything happen automatically; we still need to change our pipeline. And since we're making changes anyway and we want to deploy to Elastic Beanstalk, we no longer need the S3 deployment, so I'll remove all the S3-related jobs: "deploy to production", "deploy to staging", the .deploy template, and also "test website" (we'll introduce a better way of testing). What remains is "build website", and the now-unused stages can be removed too. Next, we introduce a new stage — let's call it "package" — and a new job associated with it, "build docker image", with the stage set to "package". So how do we build the Docker image? It's relatively simple: the command is `docker build .` — the dot refers to the current folder, which contains the Dockerfile we just created. To run a docker command, we also have to define the image the job uses, and that will be the docker image. But using the docker image alone won't work — we'd get an error — and the reason is that Docker's architecture is composed of a client and a server. The `docker` command here is the client; the client sends instructions to a server, the Docker daemon, which is the one that actually performs the build. To get access to a daemon inside GitLab CI, we use the concept of services: we define a `services` tag containing a list of services to start, and what we start here is a service called Docker-in-Docker — you'll see me using the `dind` tag. This service is another Docker image, one that can itself run Docker; it becomes accessible to us over a network, and the docker client in our job can talk to the daemon running inside the service. I know this may all seem a bit confusing at first, but this is the minimum we need to build Docker images from GitLab. What I also like to do is set fixed tags for both docker and docker-in-docker. On Docker Hub, this is the docker image — you'll find the link in the course notes, because it's not so easy to find; not many people search Docker Hub for "docker" itself. I'll copy a specific version and add it to the job's `image`, and there's a corresponding tag ending in `-dind`, which I'll use for the daemon service; note that both use the same version. Additionally, when building images, we like to tag them — that helps us identify the images we create, since there can be many. To tag an image, we first have to specify an image name and a tag.
We do that with the `-t` flag — and keep the dot at the end of the command; that's very important, don't forget it. For the name, we'll use one of the predefined environment variables, $CI_REGISTRY_IMAGE — don't forget the dollar sign in front. Used on its own, this builds the `latest` tag, a tag that always points to the most recent image we've created. Additionally, I'll create a second tag that still uses $CI_REGISTRY_IMAGE, followed by a colon and $APP_VERSION, which we still have in our pipeline. So we're creating two tags — again, don't forget the dot at the end — and these variables ensure our Docker image gets the right name. To verify that we've indeed built the image, we'll also run `docker image ls`: we're actually building only one image, but tagging it with two tags, `latest` and the app version, and this command shows all images with all their tags on this instance. I won't restrict this job to the main branch — I'm still in a branch, playing around — so let's just run it there and see how it looks. Once the execution is done, let's jump into "build docker image" and try to understand what happened. There are a bunch of logs, but most importantly we can see which Docker image the job uses (docker), and — this is new — the service being started with docker-in-docker, which becomes available over the network and gives our docker client a daemon to talk to. Many of the logs aren't so relevant; the interesting part is where the image is actually built. You can see our command and the steps of the build. Docker works with the concept of layers — we won't go deep into that, but essentially every instruction we execute creates an additional layer in the image. The image was successfully built — that's what you want to see — and then tagged with the two tags we created: you'll see the `latest` tag and the version tag. They both point to the same image, but they're different tags, and you can see that best in the output of the listing command: the name of the image, the tags, and, internally, the same image ID. So we have the same image with two different tags — we've successfully built and tagged our image.
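Putting the packaging job together so far — a sketch, with placeholder version tags (pin whatever matching docker/dind versions you picked on Docker Hub):

```yaml
build docker image:
  stage: package
  image: docker:20.10.12             # the docker client
  services:
    - docker:20.10.12-dind           # the docker daemon the client talks to
  script:
    - docker build -t $CI_REGISTRY_IMAGE -t $CI_REGISTRY_IMAGE:$APP_VERSION .
    - docker image ls                # sanity check: one image ID, two tags
```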
As you might have guessed, though, the Docker image we just built was lost as soon as the job finished. A Docker image is not a regular file or archive that we can simply declare as an artifact and upload somewhere; when we want to preserve a Docker image, we need to save it in a registry. Docker Hub, for example — which we've been using to look up tags and images — is a public registry of Docker images. However, our projects are typically not public, so Docker Hub doesn't really make sense; we need a private registry. Both AWS and GitLab offer private Docker registries, and for this example we'll use the one offered by GitLab. On the left-hand side, under Packages & Registries, you'll find the Container Registry — Docker is all about working with containers. At this point there are no container images stored for this project: simply building a Docker image does not automatically add it to the registry. Saving an image to a registry is called pushing, so let's go back and make some changes to our pipeline. The command is `docker push`, and we want to push all tags, so we use the `--all-tags` parameter followed by $CI_REGISTRY_IMAGE. That covers pushing — but where are we pushing, and since it's a private registry, don't we need to log in somehow? Yes, that's correct: pushing alone won't work; we need to log in first, and we'll do it right at the beginning of the script, because if there are login issues, we want to know as soon as possible, not only at the end. The command is `docker login` — relatively easy — and for this service we don't use the username and password of our GitLab account; GitLab provides variables that give us temporary credentials, so we don't have to care much about them. Looking at the available predefined variables, there are $CI_REGISTRY, $CI_REGISTRY_USER, and $CI_REGISTRY_PASSWORD. First, where do we log in? That's $CI_REGISTRY. Then the credentials: typically `-u` followed by $CI_REGISTRY_USER (dollar sign in front — very important), and the password via `-p` with $CI_REGISTRY_PASSWORD. Those are all the credentials we need. However, in more recent Docker versions, passing the password like this isn't really the best way to log in: you'll get a warning, and it's possible that in future releases the `-p` argument won't be available anymore. So here's what I like to do instead: remove `-p` and tell docker login to get the password from standard input with `--password-stdin`. Standard input is essentially something we pipe in from another command — in our case, `echo`: we echo $CI_REGISTRY_PASSWORD, without displaying it in the logs, and pipe it into docker login, which then knows the password is coming from standard input.
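With login and push added, the job's script becomes roughly:

```yaml
  script:
    # log in first, so credential problems surface immediately
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker build -t $CI_REGISTRY_IMAGE -t $CI_REGISTRY_IMAGE:$APP_VERSION .
    - docker image ls
    - docker push --all-tags $CI_REGISTRY_IMAGE
```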
Docker login will look at the standard input and grab the password from there. Using it this way ensures the password doesn't get exposed — in our CI case it wouldn't be exposed anyway, but it's good to know, and that's why we have this construct. Once again: we echo the password and pipe it to docker login, which reads it from standard input; apart from that, we pass the user with `-u`, and we specify the registry we're logging in to. Make sure every variable you use here has a dollar sign in front of it, otherwise it won't be resolved as a variable. Let's give this a try and see if our image lands in the registry. The "build docker image" job succeeds, and the logs show exactly which tags were pushed: tag 46 and latest, both pushed to the registry with no errors. Next we can go to the Container Registry, where we'll see the root image with, currently, two tags: 46 and latest. It also shows the size of our image — because we used Alpine as a base, our images are relatively small, which is definitely a good thing. So we've successfully built and stored this container — but how do we know the application inside actually works? Is nginx serving our files? Do we have the right files? Is everything working the way we think it is? Before we move on to the next step, deploying to AWS, it makes sense to do some acceptance testing on the Docker container itself, just to make sure everything is as we expect. With that said, let's add another stage — I'll call it "test" — and another job, "test docker image", assigned to that stage. How do we test this? Not much differently from how we've tested deployments and other things in the past: we can still use curl. I'll write a curl command against some address (http://...) — we don't know the host yet — and again use grep to search for the app version (we could search for anything on the page, but let's check the version.html file we created; and mind the dollar signs). So what do we need? We need curl — a simple image that has curl — and for that we can use curlimages/curl. But the real question is: how do we start the container we want to test? This is where what we learned before becomes very useful again: services. Under `services`, this time we define a list entry with `name:` set to the image — $CI_REGISTRY_IMAGE with the $APP_VERSION tag, not `latest`, because we want to test exactly the current tag. Additionally, we can specify an alias, which gives the service a friendly name so we know where it's available on the network. I'll use the alias "website", so the address in the curl command becomes simply http://website. GitLab takes care of starting the Docker container and registering it on the network as "website", and our curl script can simply call http://website/version.html and grep for the app version.
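A sketch of the acceptance-test job:

```yaml
test docker image:
  stage: test
  image: curlimages/curl
  services:
    - name: $CI_REGISTRY_IMAGE:$APP_VERSION   # start the image we just pushed
      alias: website                          # reachable as http://website inside the job
  script:
    - curl http://website/version.html | grep $APP_VERSION
```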
Let's give it a try and see how it works. The confirmation that our Docker image really works comes from the "test docker image" job: we've verified that the image starts an HTTP server, that the files we added — in this case version.html — are available, and that the file contains the application version we expected. So now we know we have a working Docker image, and we can deploy it to AWS. If you remember, the first time we deployed an application to Elastic Beanstalk we used a JSON file describing which Docker image to deploy; to automate the process, we have to do essentially the same thing — in other words, give Elastic Beanstalk that file. Whenever we interact with AWS services and need to provide files, we usually have to do it through S3: we upload the files to S3, and other services read them from there. So to automate this, we need to generate the JSON file, upload it to S3, and from there tell Elastic Beanstalk to start a new deployment. Going back to the pipeline, we need to re-enable the deploy stage and the "deploy to production" job, this time focused on deploying to Elastic Beanstalk. I'll add a new stage, "deploy", and a new job, "deploy to production". Since we'll be using the AWS CLI, I'll add the basic structure this job needs: the AWS CLI Docker image with the entrypoint overridden, the "deploy" stage, the environment set to production, and the start of the script. Looking at the project files, under templates there's a Dockerrun.aws.json file. The name of the file isn't what matters; the format is, because this file tells AWS Elastic Beanstalk what we're trying to deploy: it says, here is our image and here is our tag, written as variables. Additionally, because this is a private registry, we also need to provide authentication information, and that lives in another file — an auth file — in which we'll need a token that grants access to the registry. So what we need to upload to S3 is the Dockerrun file and this auth file, which is referenced — mentioned — from inside the Dockerrun JSON. Let's go to the pipeline and quickly add these files; I'll simply paste in the required copy configuration. It still uses `aws s3 cp`, pretty similar to how we copied files to S3 before, so I'll skip over that part.
Now, as you may have noticed, these files contain variables, and those variables won't be replaced automatically — if we upload the files as they are, nothing gets substituted. Second of all, the paths aren't even right: the files live in templates/, not in the current folder where the script runs. But there's something very important I wanted to show you, and that is how to do environment variable substitution in files. For that there's a very handy command: `envsubst`, short for "environment substitution" — so it's relatively easy to tell what it does: it replaces environment variables. We specify the input file with the smaller-than sign (`<`), pointing at the file under templates/, and then the output file with `>`: the file name stays the same, but the location changes to the current folder, which is perfect for the next command we'll run. We do this substitution for the Dockerrun file, and for the auth file just as well, so any environment variables in these files get replaced. To use this command, we need to install a utility called gettext — it doesn't ship with the Amazon image, so it's something we install with the package manager, adding the flag that answers "yes" to any questions the installer might ask. Since we can't interact with an installer from a pipeline, this ensures the installation runs through without issues. Now that we have envsubst, it's also nice to actually look at the resulting files, so we'll `cat` them to make sure the variables were substituted as we expect. As a final check, let's make sure all the variables these files reference actually exist. $CI_REGISTRY_IMAGE and $APP_VERSION we've already defined, so that's covered. $AWS_S3_BUCKET is where we upload these files — and since Elastic Beanstalk has already created an S3 bucket, maybe we should use that one. I'll switch to the S3 service and copy its name; I could have used the other bucket we created earlier, but I don't want to put any credentials in a public bucket, so I prefer this one, which is not public. In the CI/CD variables we still have AWS_S3_BUCKET scoped for production, so I'll click edit, change the value, leave the variable unprotected (I'm still on a branch), and update it. So at this point we have everything we need for the substitution step.
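A sketch of the substitution-and-upload part of the job — the template file names (templates/Dockerrun.aws.json, templates/auth.json) are assumptions based on the narration:

```yaml
deploy to production:
  stage: deploy
  environment: production
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    - yum install -y gettext                                        # provides envsubst; -y auto-answers prompts
    - envsubst < templates/Dockerrun.aws.json > Dockerrun.aws.json  # replaces $CI_REGISTRY_IMAGE, $APP_VERSION, ...
    - envsubst < templates/auth.json > auth.json
    - cat Dockerrun.aws.json                                        # verify the substitution worked
    - cat auth.json
    - aws s3 cp Dockerrun.aws.json s3://$AWS_S3_BUCKET/Dockerrun.aws.json
    - aws s3 cp auth.json s3://$AWS_S3_BUCKET/auth.json
```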
Now for the other file, the auth file. Inside it there's a reference to a deploy token. What is that? We've pushed our Docker image to the GitLab registry, but AWS has no credentials to connect to it — and again, we're not handing over our account's username and password. Instead, GitLab lets us generate a deploy token that AWS can use to log in to our private container registry and pull the image. From Settings, go to Repository, where you'll find Deploy tokens as one of the options, and there we can create one. I'll name it "aws", so we know why it was created, give it the username "aws", and grant the permissions we need: read_repository and read_registry. I'll create the deploy token — it's essentially a password, and it will only be displayed once — so I'll copy it immediately and store it in a new CI/CD variable called GITLAB_DEPLOY_TOKEN. The format I'll use for the value is the username, then a colon, then the token I just copied: aws:<token>. I won't protect this variable, but I might as well mask it, just to be safe. Inside the pipeline, we still need one small change to the script: we need to convert this username-and-token pair to Base64, because that's what AWS expects inside the auth JSON. Converting something to Base64 is relatively easy: take a string, say "hello", and pipe it into the `base64` command, which outputs the string encoded as Base64. Of course we don't want anything hard-coded here — that's why we defined the variable — so we'll echo $GITLAB_DEPLOY_TOKEN instead. Additionally, to make sure no newlines sneak into the token, we pipe the result through another command that deletes newline characters — that's the `\n` with the backslash. Finally, we need to capture all of this in a new variable that envsubst can substitute: I'll wrap the whole expression in `$( )` — so it gets evaluated, and whatever output it produces is stored — and `export` the result, which is how you create an environment variable from a script, under the name DEPLOY_TOKEN. That's exactly the variable referenced in the auth file. Before committing these changes, I'm only interested in whether this one job works, so I'll disable all the previous jobs by putting a dot in front of their names, commit the pipeline, and watch it in action. And I'm in luck: no errors while running the job. But looking at the logs to see if everything is there — some information is missing. The image is replaced, the version is there, the bucket is there, and in the auth file the registry is correct, but the auth information itself is missing. Let's debug and understand why. Double-checking the pipeline: GITLAB_DEPLOY_TOKEN should be the name of the variable we already defined in GitLab, and it is, so that seems fine.
And DEPLOY_TOKEN is exactly what's used in the file — double-checking the file confirms it's spelled exactly the same — so that seems fine too. So what's going on? Let's look again at the job output for clues. We don't see the problem immediately, so let's walk through the commands we executed. The script starts, the AWS CLI is there, gettext installs fine — no errors up to this point. Then we export the variable by running that command expression — and at line 77 of the log, there it is: "td: command not found". Apparently the expression we placed in `$( )` gets evaluated, but when it fails, it doesn't fail the job. Opening the pipeline and looking at "deploy to production": that `td` should actually be `tr`, which stands for translate. It was just a silly typo, but it goes to show how important it is to actually read the logs and understand what's going on. With that fixed, both files have all their variables replaced, this time including the auth file, and we can jump into S3 and look at the bucket: the Dockerrun file and the auth file are both there. The Dockerrun file essentially contains the information about what we're trying to deploy and links to — mentions — the auth file, which contains the authentication information for connecting to our private registry.
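The corrected line, for reference:

```yaml
  script:
    # base64-encode "aws:<token>" and strip newlines; envsubst injects $DEPLOY_TOKEN into auth.json
    - export DEPLOY_TOKEN=$(echo $GITLAB_DEPLOY_TOKEN | base64 | tr -d "\n")
```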
Now let's continue with the actual deployment. We've copied Dockerrun.aws.json and auth.json into the S3 bucket, so we can initialize the deployment — again with the AWS CLI, but this time against a different service: `aws elasticbeanstalk`. This is done in two steps. First we create an application version, with the `create-application-version` command; then we run `update-environment`. Why are two steps necessary? In the first step we take the Dockerrun file and create a new application version; once that version exists, we tell Elastic Beanstalk which environment to update with it. In Elastic Beanstalk we have the application, but there can be multiple environments — in our case just one — and theoretically you create a version once and take it through different environments. So the two-step flow may look a bit weird at first, but these are the steps required to get this to run. Just naming the two commands isn't sufficient, though; we need to specify some additional parameters. The first is identifying the application, which is done with `--application-name`. Looking in Elastic Beanstalk, I'll copy the application name exactly as it appears there. I could paste it directly after the parameter, but I'd rather take advantage of variables. There are multiple places to store it, but I'll just put it in the job itself, in a `variables:` block: `APP_NAME: my website`. Whenever I need it, I reference $APP_NAME. There's something to pay attention to here, though: "my website" is composed of two words with a space in between. If the variable is substituted as-is, the command will think the application name is "my" and treat "website" as some other parameter or argument, which won't be recognized. To get around this, we put the value between quotes — "$APP_NAME" — so the command knows the name includes the space and the word after it. The application name is also needed for `update-environment`, so it goes there as well. Next, when creating the application version we have to specify a label for the version we're deploying, with `--version-label` (don't worry if it wraps onto a new line in the editor — the command is still a single line). We already have an application version, $APP_VERSION, so it makes sense to use it — and yes, it also makes sense to pass the same `--version-label` to `update-environment`. We also need to tell Elastic Beanstalk where the Dockerrun file lives — otherwise it has no idea which bucket to read from — and for that we use the `--source-bundle` parameter, which takes an S3 bucket and the name of the object we're referencing, written as `S3Bucket=...,S3Key=...`. You need to pay attention to how you write this: it has to be exactly in that form. And one thing people often trip over in the beginning: these CLI parameters don't use equals signs — it's not `--application-name=...`, just a space and then the value. `--source-bundle` is different only because its entire value (`S3Bucket=...,S3Key=...`) is one string that gets parsed later; that's why the equals signs are fine inside it, but otherwise there are no equals signs in these commands. So, to create the application version we've specified the application name, the version label, and the source bundle — essentially, where the file is that says what we should deploy. This is almost the same as when we uploaded the file manually, except back then the upload also triggered the deployment in one step; here we do it in two separate steps. The second step, updating the environment, takes the application name and version as well — for which application are we deploying, and which version — plus which environment to update: `--environment-name`. For that I'll add another variable, APP_ENV_NAME (you're free to name these however you wish), and again I'll copy the environment name from Elastic Beanstalk exactly as it appears there — if I write anything else, it's not the same. As you can see, this value contains no spaces, so it doesn't need quotes; the same goes for the other values we've used here, but if any parameter value contained a space, it would need to go between quotes.
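The two Elastic Beanstalk commands end up looking roughly like this — APP_ENV_NAME below is a placeholder; copy the exact environment name from the Elastic Beanstalk console:

```yaml
  variables:
    APP_NAME: my website
    APP_ENV_NAME: Mywebsite-env      # hypothetical; must match the console exactly
  script:
    - aws elasticbeanstalk create-application-version --application-name "$APP_NAME" --version-label $APP_VERSION --source-bundle S3Bucket=$AWS_S3_BUCKET,S3Key=Dockerrun.aws.json
    - aws elasticbeanstalk update-environment --application-name "$APP_NAME" --version-label $APP_VERSION --environment-name $APP_ENV_NAME
```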
For update-environment we now have the application name and the version label, so we know which application and which version we are deploying. The next step is to also specify which environment we are updating, so I’m going to write --environment-name here, and we need another variable for the environment; I’m going to call it APP_ENV_NAME, but you’re free to name these as you wish. Again, I’m going to go to Elastic Beanstalk and copy the environment name from there; I just want to make sure I have it exactly as it is in Elastic Beanstalk, because if I specify something else, it’s not the same environment. As you can notice, this value doesn’t contain any spaces, so we don’t need to put it between quotes, and the same goes for all the other values we have here. But if any of these parameters had a space in its value, that value would need to be quoted.

Finally, let’s re-enable the jobs we disabled, because we’ll still need a Docker image, and run the entire pipeline to check whether our deployment is working properly. However, when we take a look at the pipeline, we’ll see that the deploy production job has failed, so let’s jump into the logs and try to understand what happened and why. You can see the commands that were executed: copying the files to S3 still works, so no problems there. The last command that ran is aws elasticbeanstalk create-application-version, so there’s a good chance that this command is responsible for whatever error we’re getting. Looking further into the logs, we find an error that says access denied. Essentially, it is telling us that our user, the gitlab user we created earlier with programmatic access, is not authorized to perform this action, namely creating an application version. If you remember, we only allowed this user to work with S3 resources, so uploading files, deleting files, and so on with S3 works, but we haven’t authorized the user to work with Elastic Beanstalk, and we need to change that.

From the AWS console, let’s open the IAM service (Identity and Access Management) and open our user; you can see the username here, gitlab. What we can do is attach additional policies. We already have AmazonS3FullAccess, which is why S3 works, but now we want to add another policy. We’ll click Add permissions, choose to attach existing policies, search for Elastic Beanstalk, and grant administrator access for AWS Elastic Beanstalk. I’m not going to go into these policies in much detail, since we’re just interested in getting this to run, but of course, if you were to use something like this in production, you would need to be very careful about what each user is allowed to do; such generous policies are typically not a good idea in production, so keep that in mind. I’m going to add these permissions, and we’ll see that our user now has this additional permission and can work with Elastic Beanstalk.

Going back to the logs: instead of re-running the entire pipeline, we can just go to this job and hit the retry button, which retries the same job. We haven’t changed any files, and the new policy is applied to the user dynamically, so hopefully the job will now work.
Let’s take a look at what the job is doing, and this time it looks much better: we have no errors. Let’s look at the first command, create-application-version. We executed it and got back a response telling us the application name, which version we have, which bucket we’re using, the name of the file, and so on; Elastic Beanstalk is also starting to process it. The next thing we get is the update-environment call, which also worked without any issues; it tells us the environment name, the application name and version, and some technical details. We can go back to the AWS console, open the Elastic Beanstalk service to see what’s going on, and we’ll see that the running version is 60. I can click on the environment just to make sure that everything is indeed in order: the health is OK, and if we click on the environment URL it opens our website. So it does seem to work very well.

Now we have manually tested that the deployment was successful, but how about checking in our pipeline that the deployment succeeded? We can use the exact same approach we have used so far. I’m going to jump back into the pipeline, and inside the deploy production job we essentially want to use a curl command, pretty similar to the one we have for testing the Docker image, so I’m going to add it on a new line. Of course, we also need the address of the environment itself, and if you remember, we already have something for that; we just need to update the URL of the environment. From GitLab I’m going to go to Deployments, then Environments, and for the production environment I’m going to change the settings: we’re no longer using the S3 URL, we’re using the Elastic Beanstalk one, with the trailing slash removed. This will be the external URL for this environment. In the pipeline, what we want to use is of course the variable that gives us this URL. From the predefined variables (you probably remember we have used these in the past: CI_ENVIRONMENT_NAME, CI_ENVIRONMENT_SLUG, CI_ENVIRONMENT_URL) we actually want the URL, so I’m going to copy CI_ENVIRONMENT_URL into the pipeline configuration, replace the hard-coded address (I can also remove the http part, since I don’t need it anymore), and use $CI_ENVIRONMENT_URL/version; we’ll read the deployed version from there.

Now, I’ll tell you right away: if we leave this command as it is, it will not work. The reason is that when we update the environment, AWS needs a few seconds to actually perform the deployment and put the new version on the environment; it is not instant. The curl command, however, would run immediately after we have triggered the update, when the environment may not be ready yet or may still be serving the older version. The curl check would therefore fail, and at that point we could not verify whether the deployment was successful. So we need to wait a bit. We could of course use something like sleep and wait for ten seconds or so, but with the AWS CLI there is a better way: a command called wait, which is available for various AWS services, including Elastic Beanstalk. So we can write aws elasticbeanstalk wait, and what we are waiting for is the environment to be updated: environment-updated. Then we need to specify the application name, the environment name, and the version label, essentially everything we had on the previous command, which I’ll simply copy onto the new line. This command does the following: in the background it keeps asking AWS whether it is done updating that environment; as long as the answer is “not yet”, it waits a bit and asks again, and once the answer is “done”, it stops waiting. Then the next command, the curl, can run, and we can check whether the correct version has indeed been deployed and whether our environment is working properly.
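A minimal sketch of these verification steps, reusing the variable names from the earlier snippet and assuming the application serves its version label at /version. Note that the wait family of commands documents the plural --environment-names flag, so that form is used here:

```yaml
deploy production:
  stage: deploy
  environment: production       # supplies $CI_ENVIRONMENT_URL from the environment settings
  script:
    # ...create-application-version and update-environment as before...
    # block until Elastic Beanstalk reports the environment update as finished
    - aws elasticbeanstalk wait environment-updated
        --application-name "$APP_NAME"
        --environment-names $APP_ENV_NAME
        --version-label $APP_VERSION
    # smoke test: the deployed site should now report the new version label
    - curl $CI_ENVIRONMENT_URL/version | grep $APP_VERSION
```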
So I’m going to commit these changes, let the entire pipeline run, and see how it goes. The pipeline is still successful, so we can take a look into the deploy to production job and see exactly what happened. You’ll see that the wait command was executed, and after it the curl command also ran and passed without any issues. You will see that version 61 has been deployed, which is exactly the version label we have defined here, so this again confirms that the version we wanted to deploy has actually landed on the environment where we wanted to have it.

That’s about it in terms of deploying to Elastic Beanstalk. Just to recap, we have gone through all the stages: we started by building our code, compiling it and running some tests, and publishing the results as artifacts; then we created a Docker image, so essentially we created an application, and we tested that application. Of course, this is a really simple pipeline; we didn’t go through any other stages and deployed directly to production. But the principles I have shown you here can be used for similar projects, and based on all the other information from this course you can build pipelines that are more complex than this one. The most important thing is to understand how you can build such pipelines, how you can deploy with them, how you can end up with something that works, and, generally, how to make this process as enjoyable as possible.

Since we are now coming toward the end of the course, I thought it would be a good idea to have a final assignment. What I have here is a project that will document who has completed this course. Check the course notes: there you will find the link to this repository. Go ahead and click on Request Access, and in a few hours you will probably receive access to this repository and be able to make changes to it. You can then open the Web IDE and open a merge request, and, if you wish, add your name to a list of people who have completed this course. Let me show you how to do this. Once you have access, you will no longer see the Request Access part, and when you open the Web IDE you will not be asked to fork the project; that is the main difference. Then you can go ahead and change the files. The files we want to change are located in the source folder, and the file that contains the code is app.js. In it there is a table, and I invite you to add your name and other information to it.
For example, I have added my name, my GitLab username, my country of origin, and a message to the entire world, and I am going to submit this as a merge request; I invite you to do the same. So I’m going to commit this, and because I cannot commit directly into the main branch, I have to go down the path of creating a merge request: I create a feature branch with my name, give the change a meaningful title, and then create the merge request. Once you have access to this project, you will be able to see everyone else’s merge requests as well, and I invite you to take a look at them to see what other people have changed and to make sure that everything is working properly. If someone breaks the pipeline in their own branch, maybe you can give them some tips about what they did wrong and what didn’t work so well. Essentially, be part of this review process and try to understand how to collaborate on this project. Once your merge request gets reviewed, it will be merged into the main branch, and you will be able to see your name appearing on a web page. I think that is a nice, interactive way of (almost) concluding this course, and I hope you will follow along.

In terms of editing, let me give you one piece of advice: once a few people have been added to this list, do not just add your name at the end, because that gives you the highest chance of running into merge conflicts. When you are making your changes, try to put your name somewhere in the middle, between others, and keep the indentation and everything intact so that it still looks nice. So I will see you even after this course; let’s collaborate more inside the merge requests, play some more with GitLab and with pipelines, and see how it all works. I am looking forward to your contributions.

All right, you did it! This is the end of the course, but don’t go away yet; I still have some valuable tips for you. First of all, I want to give you a final reminder to terminate any AWS services you have created, so that you don’t encounter any unexpected costs. We have accomplished so many things in a very short amount of time. I know this was a lot to take in, but I hope it was useful and that it has whetted your appetite for learning more about DevOps, GitLab, and AWS. If you enjoyed this content, you can support me in creating more courses like this one by going to my YouTube channel and subscribing; you will find the link in the video description. Thank you very much. There is also so much more to learn: if you found it hard to work with CLI commands, I recommend learning about Unix utility commands and Bash; there are other GitLab features worth exploring as well; and if you liked working with and deploying Docker containers, you may also want to learn about Kubernetes. For all the topics mentioned above, and anything else I forgot to mention, you will find links in the course notes. If you enjoy my teaching style and want to take a more advanced GitLab course, go to vdespa.com and check out the courses I am offering. If you are unsure which course is right for you, just send me a message on social media; I am more than happy to help. I hope you enjoyed spending time with me, and I will see you next time.

    By Amjad Izhar
    Contact: amjad.izhar@gmail.com
    https://amjadizhar.blog

  • GitLab Project Management, Workflow, and CI/CD Features

    GitLab Project Management, Workflow, and CI/CD Features

    This tutorial series introduces the core features of GitLab, beginning with understanding GitLab and Git basics and navigating the GitLab interface. It then progresses to GitLab Flow, demonstrating its application through practical exercises like modifying a project readme and managing merge requests. The series further explores GitLab CI/CD, detailing pipeline creation, job configuration, artifact management, and caching. Finally, it covers migrating from Jenkins to GitLab CI/CD and utilizing GitLab for packaging and releasing software, including interaction with the package, container, and infrastructure registries.

    GitLab Core Features Study Guide

    Quiz

    1. What is GitLab, and what is its primary function? GitLab is an open-source software development platform that, at its core, is a source code management system. However, it extends beyond this by offering additional functionalities like CI/CD and project collaboration tools. GitLab describes itself as a DevOps platform.
    2. Explain the difference between Git and GitLab in 2-3 sentences. Git is a distributed version control system used to track changes to source code files within a codebase. GitLab, on the other hand, is a source code management system that provides a platform to host Git repositories, enabling collaboration among software development teams. Think of Git as the engine for version control, and GitLab as the online service to manage and share Git-controlled projects.
    3. Describe the basic GitLab workflow, also known as the GitLab flow, in brief. The GitLab flow generally involves creating feature branches off the main branch for specific development tasks. Once development is complete, a merge request is created to propose merging the feature branch back into the main branch after review and testing. Depending on the GitLab flow variation, changes may then be merged into environment branches (like staging or production) or managed through release branches.
    4. What are merge requests in GitLab, and what is their purpose? A merge request in GitLab is a request to merge changes from one branch into another, typically from a feature branch into the main branch. It serves as a central place for team members to discuss, review, and verify the changes on a branch before they are integrated into the main codebase, often triggering automated testing.
    5. What are GitLab issues used for, and how should a software development workflow ideally begin with them? GitLab issues are used to track work related to a GitLab project, such as reporting bugs, tracking tasks, and requesting new features. Ideally, a software development workflow should begin with the creation of an issue to clearly define the scope and objectives of the work that needs to be done before any code changes are made.
    6. Explain the concept of a branching strategy in the context of Git. A branching strategy is a defined workflow that a development team follows when using Git (or another version control system) to manage concurrent development. It outlines how branches are created, how collaboration occurs on these branches, and how changes are eventually merged back into the main codebase, aiming to maintain code stability and facilitate feature development.
    7. Describe the key difference between the GitHub flow and the Git flow branching strategies. The GitHub flow is a simpler strategy primarily using feature branches off the main branch, which are merged back in after review and testing. The Git flow is more complex, utilizing long-lived develop and main branches, as well as supporting branches for features, releases, and hotfixes, making it suitable for more structured release cycles.
    8. What are GitLab pipelines, and what are their basic components? GitLab pipelines are a top-level component used to define the CI/CD (Continuous Integration/Continuous Delivery or Deployment) process for a GitLab project. Their basic components include stages, which define the chronological order of jobs, and jobs, which are associated with stages and define the specific steps (scripts) to be executed by GitLab runners.
    9. What is the purpose of GitLab runners in the CI/CD process? GitLab runners are open-source applications that execute the instructions defined within the jobs of a GitLab pipeline. They pick up and run the scripts specified in the .gitlab-ci.yml file, performing tasks such as compiling code, running tests, and deploying applications, either on shared GitLab infrastructure or on self-hosted machines.
    10. What is the GitLab Package Registry, and what types of packages does it support? The GitLab Package Registry allows users to use GitLab as a private or public repository for various software packages. It supports a number of package managers and formats, including Maven packages, container images (via the Container Registry), and Terraform modules (via the Infrastructure Registry), enabling teams to manage their dependencies and releases directly within GitLab.

    Essay Format Questions

    1. Compare and contrast the three main Git branching strategies discussed (GitHub flow, Git flow, and GitLab flow). Discuss the advantages and disadvantages of each and in what scenarios each strategy might be most appropriate.
    2. Explain in detail the environment branches variation of the GitLab flow. Describe the typical branch structure, the process of developing and deploying features, and how hotfixes are managed within this workflow.
    3. Discuss the role and importance of merge requests in a collaborative software development environment using GitLab. Explain the key features of a merge request and how they facilitate code review, discussion, and integration.
    4. Describe the fundamental concepts and benefits of Continuous Integration and Continuous Delivery/Deployment (CI/CD) as implemented in GitLab pipelines. Explain the relationship between pipelines, stages, jobs, and GitLab runners in automating the software development lifecycle.
    5. Discuss the various registries offered by GitLab (Package Registry, Container Registry, and Infrastructure Registry). Explain the purpose of each registry and how they contribute to the overall DevOps lifecycle within the GitLab platform.

    Glossary of Key Terms

    • Branch: An independent line of development within a Git repository. Branches allow for isolated work on features or bug fixes without affecting the main codebase.
    • Commit: A snapshot of the changes made to files in a Git repository at a specific point in time, along with metadata like author and commit message.
    • CI/CD (Continuous Integration/Continuous Delivery or Deployment): A set of practices that automate the building, testing, and deployment of software, enabling faster and more frequent releases.
    • Git: A distributed version control system used for tracking changes in source code during software development.
    • GitLab: A web-based DevOps platform that provides source code management (Git repositories), CI/CD pipelines, issue tracking, and other collaborative features.
    • GitLab Flow: A streamlined branching strategy that offers a balance between simplicity and structure, with variations for environment and release branches.
    • GitLab Group: A way to organize multiple projects and users, allowing for centralized management of settings and permissions.
    • GitLab Issue: A tool within GitLab used to track tasks, bugs, feature requests, and other work items related to a project.
    • GitLab Pipeline: A configurable automated process defined in a .gitlab-ci.yml file that describes the steps for building, testing, and deploying code.
    • GitLab Project: A container in GitLab for a single codebase (Git repository) along with its associated issues, merge requests, CI/CD configuration, and other features.
    • GitLab Runner: An agent that executes the jobs defined in a GitLab pipeline. Runners can be shared, group-specific, or project-specific.
    • Merge Request: A request to merge changes from one branch into another in GitLab, facilitating code review and discussion.
    • Package Registry: A feature in GitLab that allows you to store and manage software packages (e.g., Maven, npm, NuGet) within your projects or groups.
    • Release: A specific version of a software project that is made available to users, often marked with a Git tag and potentially including release notes and assets.
    • Repository (Repo): A directory where your project’s files and their history are stored, managed by a version control system like Git.
    • Stage (in CI/CD): A phase in a GitLab pipeline that contains one or more jobs. Stages are executed in a defined order.
    • Tag (Git Tag): A static marker in a Git repository that typically points to a specific commit, often used to denote releases.

    GitLab Tutorial Series Briefing Document

Date: October 26, 2023
Prepared For: Review of GitLab Tutorial Series Sources
Prepared By: Gemini AI

    This document provides a detailed review of the main themes, important ideas, and facts presented in the provided excerpts from the GitLab tutorial series. Quotes from the original sources are included where appropriate to illustrate key points.

    Main Themes

    1. Introduction to GitLab: The series begins by defining GitLab as an open-source software development platform, emphasizing its core as a source code management (SCM) system with added DevOps functionalities like CI/CD. It differentiates GitLab from Git, highlighting GitLab as a platform for hosting and collaborating on Git repositories.
    2. Core GitLab Features: The tutorials cover several core features of GitLab, including the user interface, Git integration, the GitLab Flow, CI/CD pipelines, package and release management, and integration with external testing platforms like LambdaTest.
    3. Fundamentals of Git: A significant portion of the early tutorials focuses on introducing and explaining fundamental Git commands and concepts, such as git init, git status, git add, git commit, branching (git branch, git checkout), merging (git merge), and stashing (git stash). Best practices for using Git, like developing on feature branches and not committing directly to the main branch, are also emphasized.
    4. GitLab Flow as a Development Workflow: The series introduces the GitLab Flow as GitLab’s primary branching strategy, contrasting it with other common Git workflows like GitHub Flow and Git Flow. It details the two variations of GitLab Flow: one using environment branches and the other using release branches. The tutorials then demonstrate the environment branches variation in practice.
    5. Continuous Integration and Continuous Delivery/Deployment (CI/CD) in GitLab: A key focus of the later tutorials is GitLab’s CI/CD capabilities. The series explains the core components of GitLab CI/CD, including pipelines, jobs, stages, and runners. It then guides the user through creating and implementing GitLab CI/CD pipelines, including defining jobs, stages, specifying Docker images, using variables, caching dependencies, and generating artifacts.
    6. GitLab Package and Release Management: The series introduces GitLab’s features for managing software packages and releases. It explains the GitLab Package Registry, Container Registry, and Infrastructure Registry, detailing how to publish and consume packages and container images from within GitLab pipelines. The concept of GitLab Releases, including associated release notes and evidence, is also introduced.
    7. Integration with LambdaTest: The series explicitly covers the integration of the LambdaTest platform with GitLab CI for performing cross-browser testing, indicating GitLab’s interoperability with other developer tools.

    Most Important Ideas and Facts

    Tutorial Series Overview:

    • The tutorial series aims to teach users how to utilize the core features of GitLab.
    • Topics to be covered include: What is GitLab, basics of Git, GitLab interface, GitLab Flow, hands-on activities using GitLab Flow, CI/CD in GitLab, migrating Jenkins pipelines to GitLab CI, GitLab’s packaging and releasing features, and integrating LambdaTest with GitLab CI.
    • Learning objectives include understanding GitLab CI, fundamental Git commands, working with GitLab Flow, performing CI/CD in GitLab, migrating from Jenkins, and deploying software using GitLab’s packaging and releasing features.
    • The course is intended for DevOps engineers, software teams migrating from Jenkins to GitLab, and developers whose team uses GitLab.
    • Prerequisites include access to a GitLab instance and a recent version of Git installed.

    What is GitLab and Basics of Git:

    • GitLab is an “open source software development platform” and a “DevOps platform.”
    • At its core, GitLab is a “source code management system” built on top of Git.
    • Git is a “version control system” used to track changes to source code files.
    • GitLab is used to “host Git repositories so that they can be shared with other people on your team,” similar to file sharing platforms but specifically for source code.
    • Benefits of using GitLab include enabling collaboration, built-in CI/CD functionality, and high interoperability with other tools.
    • Basic Git commands covered include:
• git --version: To verify Git installation.
    • git init <repository_name>: To initialize a new Git repository in a specified directory.
• git config --global init.defaultBranch main: To set the default branch name to main.
    • cd <repository_name>: To navigate into the newly created repository.
    • git branch -m main: To rename the current branch to main.
    • .git directory: A hidden folder that makes a directory a Git repository.
    • git status: To check the current state of the Git repository, including untracked files and staged changes.
    • Main branch (or pristine/stable branch): Supposed to contain bug-free, deployable code. Developers should avoid committing directly to this branch for feature development.
    • Feature branch: An isolated copy of the codebase for developing new features without impacting the main branch.
    • git add <file_name>: To move changes from the working directory to the staging area. Git employs a “two-stage commit” process.
• git commit -m "<commit_message>": To create a commit object, recording changes from the staging area to the Git history. Commit messages should be concise descriptions of the changes.
    • git log: To view the history of commits in the repository, including commit hash, author information, date, and commit message.
    • git branch <branch_name>: To create a new branch.
    • git checkout <branch_name>: To switch to an existing branch. git checkout -b <new_branch_name> creates and switches to a new branch.
    • git merge <source_branch>: To merge changes from the specified source branch into the currently checked-out branch.
    • git branch -d <branch_name>: To delete a branch.
    • git stash or git stash push: To save uncommitted changes temporarily without creating a commit.
    • git stash list: To view a list of stashed changes.
    • git stash apply: To reapply stashed changes to the working directory, keeping the changes in the stash.
    • git stash pop: To reapply stashed changes and remove them from the stash.
    • git stash clear: To remove all entries from the stash.
    • git clone <repository_url>: To download a Git repository from a remote source to the local machine.

    GitLab Interface:

    • Key GitLab terminology includes:
    • Group: Manages settings across multiple projects, enables logical categorization of users and projects, and provides cross-project views of issues and merge requests.
    • Project: A container for a Git repository with built-in CI/CD functionality, issue tracking, and collaboration tools like merge requests. There is a one-to-one mapping between a GitLab project and a Git repository.
    • Members: GitLab users or groups with access to a project, assigned roles with specific permissions.
    • Merge Request: A request to merge one branch into another, providing a space for discussion, review, and verification of changes.
    • Issue: A way to track work related to a project, used for bug reports, tasks, feature requests, and more. Software development workflow should ideally begin with the creation of an issue.
    • The GitLab interface includes a projects dashboard, navigation bar with profile and help menus, to-do list, merge request and issues pages, a top-level search, and a “+” menu for creating new items.
    • Account settings allow users to manage profile information, access tokens, notifications, and more.
    • Project creation allows starting from a blank project, a template, or importing from other systems.
    • Project home page displays commits, branches, tags, file structure, and the rendered README. It also provides options for creating new files, uploading, creating directories, branching, tagging, using a web IDE, downloading the source code, and cloning the repository.
    • Project features include issue tracking with agile boards, merge request management, CI/CD configuration, package and container registries, infrastructure registry for Terraform modules, project wiki for documentation, and code snippets.
    • Project settings allow configuration of general information, merge request behavior (merge methods, squash options, merge checks), repository settings (default branch, protected branches), and monitoring settings (GitLab Pages).

    GitLab Flow:

    • A branching strategy is a software development workflow within Git that describes how teams create, collaborate on, and merge branches.
    • Choosing a branching strategy depends on team requirements, SCM system, deployment environments, deployment management, and code base structure.
    • Common Git branching strategies include GitHub Flow, Git Flow, and GitLab Flow.
    • GitHub Flow: Simple workflow with feature branches created off main, changes pushed to GitHub, pull requests opened, automated testing, review and verification, and merging into main.
    • Git Flow: More complex, uses a long-lived develop branch, feature branches off develop, release branches off develop for testing and bug fixes before merging into main (tagged with release version), and hotfix branches off main for production issues, merged back into both main and develop.
    • GitLab Flow: Simpler than Git Flow, more structured than GitHub Flow, with two variations:
    • Environment Branches: Long-lived production branch. Feature branches off main, merged into environment branches (e.g., staging) and then into production. Upstream first policy for hotfixes (created off main, merged back into main and pre-production branches before production).
    • Release Branches: Used for releasing software to the outside world (e.g., open source). Similar to environment branches but uses release branches (created as late as possible, only major bug fixes, upstream first policy for bug fixes). Release branches are long-lived until a release is no longer supported.
    • GitLab Flow with environment branches uses main as an integration branch and promotes changes through pre-production environments to production.

    Practicing GitLab Flow:

    • The GitLab Flow practice demonstrates the environment branches variation.
    • The process begins with an issue defining the work scope. Issues can contain subtasks using Markdown checkboxes.
    • A production branch is created from main to represent the production environment.
    • Protecting the production branch in project settings restricts who can merge into and push to it (e.g., only Maintainers). Force pushing should generally be disabled on protected branches.
    • git clone is used to download the remote GitLab project (Git repository) to the local machine.
    • A feature branch (readme-introduction) is created off main using git checkout -b.
    • Local changes are made to the README.md file, staged with git add, and committed with git commit -m.
    • git push -u origin <feature_branch_name> is used to push the local branch and its commits to the remote GitLab repository, setting up tracking.
    • A Merge Request is created from the feature branch to the main branch. The title is often automatically populated from the latest commit message.
    • Merge Requests provide a platform for discussion, assigning reviewers, and automated testing (though not explicitly set up in this initial demonstration).
    • Reviewers can add comments to specific lines of code in the “Changes” tab (diff view).
    • Requested changes are made locally, committed, and pushed. These new commits automatically update the existing Merge Request.
    • Threads on code comments can be resolved once the requested changes are made.
    • Merge Requests are approved by reviewers and then merged into the target branch (main). The source branch can be automatically deleted upon merge.
    • Since the environment branches variation is used, a second Merge Request is created to merge the main branch into the production branch.
    • A Git tag (e.g., v1.0) is created on the production branch to mark a release. Tags can have messages and release notes.
    • Creating a tag can also generate a release in the “Deployments” section of the GitLab project.
    • git pull is used to sync the local repository with the remote repository after merges and tag creation.
    • git branch -d <local_branch_name> deletes a local branch that has been merged remotely.
• git branch --all shows both local and remote branches.
• git pull --prune removes remote branches from the local repository’s tracking information if they have been deleted on the remote.
    • The original issue related to the changes should be closed and marked as done after the changes are merged and the tag is created.

    CI/CD in GitLab:

    • A GitLab pipeline is defined in a YAML file named .gitlab-ci.yml at the root of the project.
    • The pipeline editor in GitLab provides syntax validation and visualization of the pipeline.
    • A pipeline consists of:
    • Pipelines: Top-level component defining the CI/CD process.
    • Jobs: Associated with stages, define the actual steps (shell scripts) to be executed. script is the only required property of a job.
    • Stages: Define the chronological order of jobs. Multiple jobs within a stage can run in parallel by default.
    • GitLab Runners: Open-source application that executes the instructions defined in jobs. Can be local, cloud, or on-prem. GitLab hosts shared runners.
    • The image keyword at the top of .gitlab-ci.yml specifies the Docker image to be used for the pipeline’s jobs, providing necessary dependencies.
• variables keyword allows defining pipeline-level variables that can be referenced in job scripts. GitLab also has predefined environment variables (e.g., CI_PROJECT_DIR).
    • cache keyword is used to cache directories (specified by paths) between pipeline runs to speed up execution (e.g., caching Maven dependencies in .m2/repository).
    • stages keyword defines the names and order of pipeline stages (e.g., build, test, deploy).
    • Each job is associated with a stage using the stage keyword.
    • artifacts keyword in a job definition specifies files or directories to be persisted after the job completes.
    • when: always specifies that artifacts should always be generated.
    • reports: junit: <path> specifies the path to JUnit test report XML files, which GitLab can then render in the UI.
    • environment keyword in a job definition associates the job with a specific environment (e.g., staging), creating deployments visible in the Environments page.
    • workflow: rules: can control when a pipeline runs based on conditions (e.g., only run when a Git tag is created using if: $CI_COMMIT_TAG).
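Taken together, these keywords might combine into a pipeline like the following minimal sketch. The image tag, job names, Maven commands, and JUnit report path are illustrative assumptions rather than values taken from the source:

```yaml
image: maven:3.8-openjdk-11          # assumed image providing Maven and Java

variables:
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

cache:
  paths:
    - .m2/repository                 # reuse downloaded dependencies between runs

stages:
  - build
  - test
  - deploy

build_app:
  stage: build
  script:
    - mvn compile

run_tests:
  stage: test
  script:
    - mvn verify
  artifacts:
    when: always                     # keep reports even when tests fail
    reports:
      junit: target/surefire-reports/TEST-*.xml

deploy_staging:
  stage: deploy
  script:
    - echo "deployment step goes here"   # placeholder
  environment: staging               # shows up on the Environments page
```

A workflow: rules: block, shown with the packaging pipeline later in this document, could additionally restrict when the whole pipeline runs (for example, only for Git tags).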

    Migrating Jenkins Pipelines to GitLab CI:

    • The example uses a Maven project with TestNG for testing and aims to migrate a Jenkins pipeline to GitLab CI.
    • The Jenkins pipeline has a single “test” stage, defines environment variables, uses the Maven Integration Plugin, authenticates with LambdaTest using Jenkins credentials, and runs Maven tests.
    • The GitLab CI pipeline for the migration would need to:
    • Define stages (potentially build, test).
    • Specify a Docker image with necessary dependencies (e.g., Maven, Java).
    • Set environment variables, potentially including LambdaTest credentials.
    • Define jobs to execute Maven commands for building and testing (mvn clean install).
    • Integrate with LambdaTest by setting up the desired capabilities and running tests against the LambdaTest hub URL, likely using environment variables for LambdaTest username and access key.
    • Configure JUnit or TestNG report generation as artifacts for visualization in GitLab.
    • The GitLab CI example provided focuses on deploying a Maven package, but the principles for running tests against LambdaTest would involve similar steps within a test job.

    GitLab Packaging and Releasing Features:

    • A software release in GitLab can include:
    • Generic software packages (artifacts).
    • Release notes.
    • Release evidence (issues, milestones, test reports).
    • A snapshot of the project’s source code.
    • GitLab Package Registry: Allows using GitLab as a public or private software package registry, supporting various package managers (e.g., Maven). Packages are associated with projects and groups. Can be used from within CI/CD pipelines.
    • GitLab Container Registry: Private container registry for Docker images, associated with projects and groups. Container images can be used in and published from GitLab pipelines.
    • GitLab Infrastructure Registry: Supports publishing and sharing Terraform modules, with a registry per project. Terraform modules can be built and published from pipelines.
    • To deploy a Maven package to the GitLab Package Registry:
    • Configure the settings.xml (e.g., ci_settings.xml) with server credentials using the predefined CI_JOB_TOKEN for authentication within a GitLab pipeline. For external authentication, a personal access token or deploy token would be needed.
    • Modify the pom.xml to:
    • Reference a GitLab environment variable (CI_COMMIT_TAG) for the <version> tag to version snapshots based on Git tags.
    • Add a <repositories> tag specifying the GitLab Maven Package Registry URL, constructed using predefined environment variables (CI_API_V4_URL, CI_PROJECT_ID).
    • Add a <distributionManagement> section to define the deployment repository as the GitLab Maven Package Registry.
    • Create a GitLab CI pipeline (.gitlab-ci.yml) that:
    • Uses a Maven Docker image.
    • Defines a maven_options variable for the local Maven repository.
    • Uses workflow: rules: to trigger the pipeline only on Git tag creation (if: $CI_COMMIT_TAG).
    • Caches the Maven repository.
    • Has a deploy job that runs mvn deploy -s ci_settings.xml to publish the package.
    • Successfully deploying a Maven package to the GitLab Package Registry makes it available for download and use as a dependency in other Maven projects.
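Sketched as a .gitlab-ci.yml, the tag-triggered publishing pipeline described above might look as follows; the image tag is an assumption, and the maven_options variable mentioned in the source is rendered here under Maven’s conventional MAVEN_OPTS name:

```yaml
image: maven:latest                  # assumed Maven image

variables:
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

workflow:
  rules:
    - if: $CI_COMMIT_TAG             # run the pipeline only when a Git tag is created

cache:
  paths:
    - .m2/repository

deploy:
  stage: deploy
  script:
    - mvn deploy -s ci_settings.xml  # authenticates via CI_JOB_TOKEN from ci_settings.xml
```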

    This detailed briefing document summarizes the key aspects of the GitLab tutorial series excerpts, providing a comprehensive overview of the topics covered, important concepts, and practical applications of GitLab’s features.

    Understanding GitLab: Core Concepts and Workflow

    What is GitLab and how does it differ from Git?

    GitLab is an open-source software development platform that, at its core, is a source code management system. However, it offers additional functionality such as CI/CD (Continuous Integration and Continuous Delivery/Deployment) capabilities. GitLab describes itself as a DevOps platform, providing tools for the entire software development lifecycle.

    Git, on the other hand, is a version control system used to track changes made to source code files within a codebase. GitLab is a source code management system that you would use to host Git repositories so that they can be shared and collaborated on with a team, similar to how you might share files using services like Dropbox or Google Drive. GitLab enables collaboration, has built-in CI/CD, and is highly interoperable with other tools.

    What is the basic GitLab workflow, also known as the GitLab Flow?

    The GitLab Flow is a branching strategy and software development workflow within the context of Git and GitLab. It describes how a development team will create, collaborate on, and merge branches of source code.

    There are two main variations of the GitLab Flow:

1. Environment Branches: This workflow uses long-lived environment branches such as production. Feature branches are created off of the main branch and are then merged into environment branches in a specific order (e.g., main → staging → production). Hotfixes are created off main and merged back into main and pre-production branches before production.
    2. Release Branches: This variation is used when releasing software to the outside world. Similar to the first, feature branches are created off main and merged back. Release branches are created as late as possible, and only major bug fixes are applied to them. Bug fixes follow an upstream-first policy.

    The GitLab Flow aims to be simpler than Gitflow but more structured than GitHub Flow.

    What are some key components of the GitLab interface that a user should be familiar with?

    Key components of the GitLab interface include:

    • Dashboard: Provides an overview of projects, starred projects, and the ability to create new projects.
    • Navigation Bar: Located at the top, it includes access to profile settings, support/help, to-do lists, merge requests, issues, and a top-level search function. It also has a “+” icon to create new projects, groups, or snippets.
    • Left-hand Side Menu: Offers access to various dashboards such as Projects, Groups, Security, and Environments.
    • Project Home Page: Displays the code repository, commits, branches, tags, and provides options to create/upload files, create branches/tags, use the Web IDE, download the repository, and clone it.
    • Issues: A system for tracking work, reporting bugs, requesting features, and managing tasks related to a project.
    • Merge Requests: A place to propose changes, have discussions about branch changes, perform code reviews, and merge branches.
    • CI/CD: Section for configuring and managing Continuous Integration and Continuous Delivery/Deployment pipelines, including pipelines, jobs, and schedules.
    • Packages & Registries: Where software packages, container images (Docker), and infrastructure modules (Terraform) can be published and managed.
    • Settings: Project and group settings allow configuration of various aspects like visibility, permissions, merge request behavior, and protected branches.

    How can you initiate a new project and manage access for team members in GitLab?

    To initiate a new project in GitLab, you can click the “New project” button on your dashboard or by using the “+” icon in the navigation bar. You can choose to create a blank project, create from a template, or import a project from another source. When creating a blank project, you’ll need to provide a project name (or slug), choose a visibility level (private, internal, or public), and can initialize it with a README.

    To manage access for team members, you can add members to a GitLab project or a group. Navigate to the project’s or group’s “Members” section in the left-hand menu under “Manage”. From there, you can invite users by their email or GitLab username and assign them a role (e.g., Guest, Reporter, Developer, Maintainer, Owner) which determines their permissions within the project or group.

    What are merge requests in GitLab and why are they important for collaboration?

    A merge request in GitLab is a request to merge one branch into another. It serves as a central place for team members to discuss, review, and verify the changes proposed on a feature branch before they are integrated into the main codebase.

    Merge requests are crucial for collaboration because they:

    • Provide a dedicated space for code review and feedback through inline comments on the diff.
    • Allow for automated testing to be triggered to ensure the changes do not introduce regressions.
    • Keep a record of the discussions and decisions made regarding the proposed changes.
    • Enable the use of approvals to enforce that changes are reviewed by authorized team members before merging.
    • Facilitate continuous integration practices by ensuring that changes are reviewed and tested frequently.

    How can you create and utilize GitLab CI/CD pipelines for your projects?

    To create a GitLab CI/CD pipeline, you need to define a YAML file named .gitlab-ci.yml at the root of your project’s repository. This file outlines the pipeline’s structure, including stages (e.g., build, test, deploy) and jobs within those stages.

You can use the Pipeline Editor in GitLab (under CI/CD → Editor) to create or modify this file. The editor provides syntax validation and visualization of the pipeline.

    In the .gitlab-ci.yml file, you can specify:

    • image: The Docker image to be used for the pipeline’s jobs, providing the necessary environment and dependencies.
    • stages: An array defining the different stages of your pipeline, which will be executed in order.
    • Jobs: Under each stage, you define jobs with scripts to be executed by GitLab Runners. Jobs can include commands to compile code, run tests, build artifacts, and deploy applications.
    • variables: Custom environment variables that can be used throughout the pipeline.
    • cache: Defines directories to be cached between pipeline runs to speed up execution (e.g., dependencies).
    • artifacts: Specifies files or directories produced by a job that should be stored and can be downloaded or used by subsequent jobs.
    • reports: Configures specific reports, such as JUnit test reports, to be collected and displayed in the GitLab UI.
    • environment: Associates a deploy job with a specific environment (e.g., staging, production), which is tracked by GitLab.
    • workflow: Controls when a pipeline should run based on rules (e.g., only on tag creation).

Once the .gitlab-ci.yml file is committed to your repository, GitLab will automatically trigger pipelines based on the defined rules (e.g., on every push or merge request). You can monitor the status of your pipelines under the CI/CD → Pipelines section of your project.

    What is the GitLab Package Registry and how can you use it to manage software packages?

    The GitLab Package Registry is a feature that allows you to use GitLab as a private or public registry for various software package formats (e.g., Maven, npm, PyPI, NuGet, Conan, Go modules). It enables you to publish, share, and consume packages directly within your GitLab projects and groups.

    To use the Package Registry:

    1. Configure your project: Ensure your project’s build configuration (e.g., pom.xml for Maven, package.json for npm) is set up to interact with the GitLab Package Registry. This typically involves specifying the registry URL as a repository and configuring authentication.
    2. Authenticate: You’ll need to authenticate to publish and consume packages. This can be done using:
    • A GitLab personal access token with the api scope.
    • A deploy token created within the project.
    • The CI_JOB_TOKEN within a GitLab CI/CD pipeline for automated publishing.
3. Publish packages: You can publish packages from your local machine using the respective package manager’s commands (e.g., mvn deploy for Maven, npm publish for npm) or as part of a GitLab CI/CD pipeline.
4. Consume packages: To use packages from the registry in another project, you need to configure that project’s package manager to point to the GitLab Package Registry and provide the necessary authentication.

    Packages in the registry are associated with a specific GitLab project and can be private or public depending on the project’s visibility. The Package Registry provides a centralized place to manage and depend on your project’s or organization’s software packages.

    What are GitLab Releases and how do they relate to tags and other project components?

    GitLab Releases provide a way to formalize and track specific versions of your software. A release in GitLab can include:

    • Software packages published to the Package Registry.
    • Release notes describing the changes in the release.
    • Release evidence, which can include links to associated issues, merge requests, milestones, and test reports.
    • A snapshot of the project’s source code at the time of the release.

    GitLab Releases are closely related to Git tags. Typically, you create a Git tag to mark a specific point in your repository’s history that corresponds to a release. When you create a release in GitLab (either manually or through a CI/CD pipeline), it is associated with an existing Git tag.

Releases can be created from the Repository → Tags page by creating a new tag and optionally adding release notes. They can also be automated as part of your CI/CD pipeline when a specific tag is created (e.g., a version tag). The release page (Deployments → Releases) provides an overview of all releases for a project, allowing users to download assets, view release notes, and track the history of your software.
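As a hedged sketch of what that automation can look like (the release keyword and the release-cli image are standard GitLab CI features, but this exact job is illustrative and not taken from the tutorial):

```yaml
release_job:
  image: registry.gitlab.com/gitlab-org/release-cli:latest   # tool GitLab uses to create releases
  rules:
    - if: $CI_COMMIT_TAG                                     # run only when a tag is pushed
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: "Release $CI_COMMIT_TAG"                    # illustrative release notes
```

A job like this makes the release appear on the Deployments → Releases page alongside manually created ones.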

    GitLab Tutorial Series Overview

    The GitLab Tutorial Series, hosted by Moss, aims to teach users how to utilize the core features of GitLab. The series covers a range of topics, starting with the fundamentals and progressing to more advanced functionalities.

    Here’s a breakdown of the key aspects of the tutorial series as discussed in the sources:

    Topics Covered:

    • Introduction to GitLab: Defining what GitLab is as an open-source DevOps platform and a source code management system with built-in CI/CD. The series differentiates between Git (version control) and GitLab (hosting Git repositories).
• Basics of Git: Covering fundamental Git commands, including verifying installation (git --version), initializing a repository (git init), configuring the default branch name (git config --global init.defaultBranch main), changing directory (cd), and renaming a branch (git branch -m).
    • GitLab Interface: Familiarizing users with the major components of the GitLab interface, including the login page, projects dashboard, navigation bar (profile, support, to-do list, merge requests, issues), top-level search, and left-hand side menu (dashboards for projects, groups, security, environments).
    • GitLab Terminology: Introducing important GitLab terms such as Group, Project, Members, Merge Request (the equivalent of a pull request in GitHub), and Issue.
    • GitLab Flow: Presenting GitLab’s primary branching strategy, contrasting it with GitHub Flow (simpler) and Git Flow (more complex). The series discusses two variations of GitLab Flow: one using environment branches (production, pre-production) and the other using release branches. The concept of an “upstream first policy” for hotfixes and bug fixes is also explained.
    • Applying GitLab Flow: Demonstrating the environment branches variation of GitLab Flow through a practical exercise involving creating a feature branch, modifying a file, creating merge requests (merging into main then production), protecting branches, tagging releases, and syncing local and remote repositories.
    • CI/CD in GitLab: Showing how to implement Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipelines in GitLab. This includes defining pipelines using .gitlab-ci.yml files, understanding stages and jobs, and utilizing GitLab Runners. The series covers writing pipelines that produce artifacts, cache dependencies, and use variables. The Pipeline Editor with its validation and visualization features is also introduced.
    • Migrating Jenkins Pipelines to GitLab CI/CD: Explaining the key differences between Jenkins Pipelines and GitLab CI/CD and guiding users through the migration process. This involves mapping Jenkins terminology (agent, stages, steps, environment, tools) to GitLab equivalents (runner, stages, script, variables, Docker images). The series also demonstrates using GitLab pipelines to run tests on the LambdaTest Selenium automation grid.
    • GitLab Packaging and Releasing Features: Introducing GitLab’s package registry (for various software packages like Maven), container registry (for Docker images), and infrastructure registry (for Terraform modules). The series demonstrates deploying artifacts to the GitLab package registry from a CI/CD pipeline and describes GitLab releases.
    • Integrating LambdaTest Platform with GitLab CI: While listed in the roadmap, the practical steps for this integration are shown within the “Migrating Jenkins Pipelines to GitLab CI/CD” module, where LambdaTest is used as the test execution platform.

    Learning Objectives:

    Upon completing the tutorial series, learners should be able to:

    • Understand and implement GitLab CI.
    • Know the fundamental commands of Git.
    • Work in GitLab using the GitLab flow.
    • Understand and perform CI/CD in GitLab.
    • Migrate Jenkins pipelines to GitLab.
    • Deploy software using GitLab’s packaging and releasing features.
    • Sync changes between local and remote Git repositories.
    • Create merge requests and understand their components.
    • Implement GitLab pipelines in their own projects.
    • Write GitLab pipelines that produce artifacts, cache dependencies, and use variables.
    • Describe the anatomy of a GitLab pipeline.
    • Explain the differences between Jenkins Pipelines and GitLab CI/CD.
    • Use GitLab pipelines to run tests on the LambdaTest Selenium automation grid.
    • Deploy artifacts from a GitLab pipeline to the GitLab package registry.
    • Describe GitLab releases and its registries (package, container, infrastructure).

    Target Audience:

    The course is designed for:

    • DevOps engineers.
    • Software teams looking to migrate from Jenkins to GitLab.
    • Developers whose teams use GitLab.

    Prerequisites:

    To follow along with the tutorials, users need:

    • Access to a GitLab instance (either gitlab.com or a private instance) with a user account.
    • A recent version of Git installed on their machine.

    Key Concepts and Features Highlighted:

    • Open-Source DevOps Platform: GitLab is presented as a comprehensive platform beyond just source code management.
    • Collaboration: GitLab facilitates collaboration among software teams.
    • Built-in CI/CD: GitLab offers integrated continuous integration and continuous delivery/deployment capabilities.
    • Interoperability: GitLab can integrate with other tools.
    • GitLab Flow: A structured yet simpler branching strategy compared to Git Flow.
    • Merge Requests: Central to the GitLab workflow for code review and discussion.
    • GitLab Issues: Used for tracking work, reporting bugs, and requesting features.
    • GitLab Pipelines: Defined in YAML (.gitlab-ci.yml), they automate the software development lifecycle.
    • GitLab Runners: Execute the instructions defined in pipeline jobs.
    • Pipeline Editor: Provides a user-friendly interface for creating and validating GitLab CI/CD pipelines.
    • Artifacts and Caching: Mechanisms for managing build outputs and improving pipeline performance.
    • Pipeline Variables: For configuring pipeline behavior and storing sensitive information securely (protected and masked variables).
    • GitLab Registries: For managing software packages, container images, and infrastructure modules.
    • Releases: GitLab provides features for creating and managing software releases.
    • SSH Key Authentication: A secure method for authenticating with GitLab, especially for command-line interactions.

    The GitLab Tutorial Series appears to be a comprehensive guide for individuals and teams looking to leverage GitLab’s core functionalities, from basic version control with Git to advanced CI/CD and release management practices.

    GitLab Core Features: Development and DevOps Platform

    Drawing on the GitLab Tutorial Series and our previous discussion, the core features of GitLab revolve around its identity as an open-source software development platform and a DevOps platform. At its core, GitLab is a source code management system that hosts Git repositories, enabling collaboration among software teams. However, its functionality extends significantly beyond source code hosting.

    Here’s a breakdown of the core GitLab features discussed in the sources:

    • Source Code Management with Git: GitLab allows users to host Git repositories, similar to how files are shared on platforms like Dropbox or Google Drive, but specifically for source code. It supports fundamental Git commands such as initializing repositories (git init), managing branches (git branch, git checkout), adding files to the staging area (git add), committing changes (git commit), and syncing local and remote repositories (git push, git pull).
    • Branching Strategies and GitLab Flow: GitLab emphasizes the use of branching for feature development, advocating against committing directly to the main branch. The tutorial series introduces the GitLab Flow, a structured branching strategy that is simpler than Git Flow but more organized than GitHub Flow. It includes variations using environment branches (like production) and release branches. The core principle involves creating feature branches off the main branch, merging into pre-production branches, and finally into production (see the command sketch after this list).
    • Merge Requests: Merge requests are a central collaboration feature in GitLab, serving as requests to merge one branch into another. They provide a space for discussion, code review, and verification of changes. Opening a merge request can trigger automated testing.
    • Issue Tracking: GitLab Issues are used to track work related to a project, including bug reports, tasks, and feature requests. The tutorial suggests beginning the software development workflow with the creation of an issue. Issues can have subtasks defined using markdown checkboxes.
    • Continuous Integration and Continuous Delivery/Deployment (CI/CD): GitLab has built-in CI/CD functionality. This core feature allows users to automate the building, testing, and deployment of their software through GitLab Pipelines defined in .gitlab-ci.yml files. Pipelines consist of stages (defining the order of execution) and jobs (defining the actual tasks) that are executed by GitLab Runners. Pipelines can produce artifacts and utilize caching to optimize performance. Pipeline variables can be defined for configuration.
    • GitLab Interface: The GitLab interface provides dashboards for projects and groups, navigation for key features like merge requests and issues, and tools for creating new projects, groups, and snippets. The Pipeline Editor offers a user-friendly way to create and validate CI/CD configurations.
    • Groups and Projects: GitLab uses groups to manage settings across multiple projects and categorize users, while projects serve as containers for Git repositories and also include built-in CI/CD and issue tracking.
    • Members and Roles: Access to GitLab projects and groups is managed through members who are assigned specific roles with defined permissions.
    • Registries: GitLab provides several registries as core features:
    • Package Registry: For publishing and sharing software packages (e.g., Maven packages).
    • Container Registry: A private registry for Docker images.
    • Infrastructure Registry: For publishing and sharing Terraform modules.
    • Releases: GitLab offers features for managing software releases, including associating tags, release notes, and assets with specific versions of the code.
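
    As a rough command-line sketch of the environment-branches variation of GitLab Flow (branch names follow the tutorial; the file name is hypothetical, and the merge requests themselves are opened in the GitLab UI):

    ```sh
    # Start a feature from an up-to-date main branch
    git checkout main
    git pull origin main
    git checkout -b my-feature

    # Commit the change and push the branch to GitLab
    git add changed-file.txt          # hypothetical file name
    git commit -m "Implement feature"
    git push -u origin my-feature

    # In GitLab: open a merge request my-feature -> main; once it merges,
    # open a second merge request main -> production to promote the change.
    ```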

    These features collectively make GitLab a comprehensive platform supporting the entire software development lifecycle, from planning and coding to building, testing, and deploying applications.

    GitLab CI/CD: Core Concepts and Pipeline Automation

    Drawing on the GitLab Tutorial Series and our previous discussion, GitLab CI/CD is a core feature of the GitLab platform that provides built-in Continuous Integration and Continuous Delivery/Deployment capabilities. It allows users to automate the software development lifecycle, from building and testing to deploying applications.

    Here’s a detailed discussion of GitLab CI/CD based on the sources:

    • GitLab Pipelines: At the heart of GitLab CI/CD are pipelines, which are top-level components used to define the entire CI/CD process for a GitLab project. Pipelines are configured using a YAML file named .gitlab-ci.yml located at the root directory of the project. This file defines the stages and jobs that will be executed. Pipelines are triggered by various events, such as code commits and pushes, merge requests, and the creation of Git tags.
    • Stages: Within a pipeline, stages define the chronological order in which jobs are executed. You can define multiple stages (e.g., build, test, deploy) to structure your CI/CD process. Jobs within the same stage are executed in parallel by default.
    • Jobs: Jobs are associated with specific stages in a pipeline and define the actual steps to be executed. These steps typically involve running shell scripts to compile code, execute tests, and deploy applications. The script keyword is the only required property of a job and contains the shell commands.
    • GitLab Runners: GitLab Runners are open-source applications that execute the instructions defined within the jobs in a pipeline. Runners can be installed on various types of infrastructure, including local machines, cloud servers, or on-premises environments. GitLab also offers shared runners that it hosts, and you can register your own runners if you prefer to manage the execution environment.
    • .gitlab-ci.yml Configuration: The .gitlab-ci.yml file is where you define your pipeline’s structure and behavior (a minimal example pipeline follows this list). Key keywords used in this file include:
    • stages: To define the different stages in the pipeline.
    • image: To specify a Docker image that the GitLab Runner should use to execute the job, providing a consistent and reproducible environment with necessary dependencies. You can define a default image for the entire pipeline or specific images for individual jobs. For example, the tutorial uses the maven image for a Maven project.
    • variables: To define environment variables that can be used at pipeline runtime. Variables can be defined at the pipeline level or job level. GitLab also provides predefined environment variables like CI_PROJECT_DIR, CI_JOB_TOKEN, and CI_COMMIT_TAG. Variables can be marked as protected (available only to protected branches or tags) and masked (hidden in job logs) for sensitive information like passwords.
    • cache: To specify directories that should be cached between pipeline runs to improve performance by avoiding the need to re-download dependencies (e.g., Maven dependencies in .m2/repository).
    • artifacts: To define files or directories that should be persisted after a job completes and can be downloaded or used by subsequent jobs in the pipeline. You can specify when artifacts should be generated using the when condition (e.g., always). For test reports, you can use the reports keyword (e.g., junit) to have GitLab render them in the UI.
    • workflow: To control when a pipeline will run using rules. For example, you can configure a pipeline to run only when a Git tag is created.
    • environment: To associate a deploy job with a specific environment (e.g., staging), which can be tracked in GitLab’s Environments dashboard.
    • Pipeline Editor: GitLab provides a Pipeline Editor within the GitLab interface, which helps users create and validate their .gitlab-ci.yml files. The editor offers syntax highlighting, validation to ensure the configuration is valid, and visualization of the pipeline structure. It also provides linting information and the ability to view the merged YAML configuration. GitLab also offers a library of CI/CD templates for various technologies and frameworks.
    • Testing and Reporting: GitLab CI/CD integrates with testing frameworks. By specifying JUnit test reports as artifacts, GitLab can parse these reports and display a summary of the test results within the pipeline view, including the number of tests passed, failed, and the duration.
    • Deployment: GitLab CI/CD can be used to automate the deployment of applications to various environments. The environment keyword allows you to track deployments in GitLab. While the tutorial demonstrates a basic deployment step using Maven, in real-world scenarios, this could involve deploying to cloud platforms, servers, or container orchestration systems.
    • Comparison with Jenkins: As discussed previously and highlighted in the tutorial, there are key differences between GitLab CI/CD and Jenkins. GitLab CI/CD configuration is done through a single YAML file in the repository, while Jenkins pipelines can be defined in Groovy. GitLab recommends using Docker images for environment consistency, whereas Jenkins relies on agents with pre-installed tools or the tools directive (which has no direct GitLab equivalent). GitLab emphasizes a fresh environment for each job (requiring explicit artifact sharing), while Jenkins uses a shared workspace by default.
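
    To tie these keywords together, here is a minimal sketch of a .gitlab-ci.yml along the lines the tutorial describes for a Maven project. The job names, image tag, report path, and the ci_settings.xml file are illustrative assumptions rather than the tutorial's exact file:

    ```yaml
    # Illustrative pipeline for a Maven project; names and paths are assumptions.
    image: maven:3.8-openjdk-11          # default Docker image for every job

    workflow:
      rules:
        - if: $CI_COMMIT_TAG             # run this pipeline only when a Git tag is created

    variables:
      MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

    cache:
      paths:
        - .m2/repository                 # keep dependencies between pipeline runs

    stages:                              # chronological order of execution
      - build
      - test
      - deploy

    build-job:
      stage: build
      script:                            # the only required job keyword
        - mvn compile

    test-job:
      stage: test
      script:
        - mvn test
      artifacts:
        when: always                     # keep reports even if tests fail
        reports:
          junit: target/surefire-reports/TEST-*.xml   # rendered as a test summary in the UI

    deploy-job:
      stage: deploy
      script:
        - mvn deploy -s ci_settings.xml  # hypothetical settings file for registry credentials
      environment: staging               # tracked on the Environments dashboard
    ```

    Pasting a file like this into the Pipeline Editor will validate the syntax and visualize the three stages.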

    In summary, GitLab CI/CD is a powerful and integrated toolset within GitLab that enables teams to automate their software development processes, ensuring faster feedback loops, consistent builds and tests, and streamlined deployments. The declarative YAML configuration, the flexibility of using Docker images, and the integrated UI for pipeline management and reporting make it a central component of the GitLab DevOps platform.

    Git Basics: Core Concepts and Commands

    Drawing on the GitLab Tutorial Series and our conversation history, here’s a discussion of Git basics:

    At its core, Git is a distributed version control system. You use Git to track changes made to source code files within a codebase.

    Here are some fundamental Git concepts and commands covered in the tutorial; a consolidated shell walkthrough follows the list:

    • Verifying Git Installation: You can check if Git is installed on your machine and see its version by opening your terminal and running the command git --version.
    • Initializing a Git Repository (git init): The git init command is used to initialize a new Git repository.
    • If you run git init without any parameters, it will initialize the current directory as a Git repository. This is useful for adding version control to an existing codebase.
    • To create a new Git repository in a subdirectory, you can pass the desired name of the repository as an argument to git init, for example, git init test-project. This creates a subdirectory with the specified name and initializes a Git repository inside it.
    • Default Branch Name: When initializing a new Git repository, Git might create a default branch with a name other than main. The tutorial shows how to rename the default branch to main using the following commands:
    • git config --global init.defaultBranch main (to set the default name for all new repositories)
    • cd test-project (to navigate into the newly created repository)
    • git branch -m main (to rename the current branch to main)
    • The .git Directory: After initializing a Git repository, a hidden folder called .git is created within the repository’s directory. This .git directory is what makes the folder a Git repository; without it, it’s just a regular folder. The .git directory tracks all the changes and the history of the repository. You can verify if a directory is a Git repository by checking for the existence of the .git folder or by running git status inside the directory.
    • Checking Repository Status (git status): The git status command tells you the current state of your Git repository. It shows information about the current branch, whether there are any uncommitted changes, files in the staging area, or untracked files. You’ll find yourself running git status very frequently when working with Git.
    • Branching: Branching is one of Git’s most powerful features. The main branch (sometimes called the pristine or stable branch) is intended to contain code that is bug-free and deployable.
    • Developers should never commit changes directly to the main branch when working on new features. Doing so risks breaking the codebase for everyone.
    • Instead, you should create a dedicated branch for each new feature. A branch in Git can be thought of as an entirely separate copy of the codebase where you can work and experiment without affecting the main branch.
    • Creating a Branch (git branch <branch_name>): You can create a new branch using the git branch command followed by the desired name of the branch. For example, git branch my-feature creates a branch named my-feature.
    • Switching Between Branches (git checkout <branch_name>): To work on a specific branch, you need to check out to that branch using the git checkout command followed by the branch name. For example, git checkout my-feature switches your working directory to the my-feature branch.
    • The Two-Stage Commit Process: Git uses a two-stage commit process.
    • Working Directory: This is where you make changes to your files.
    • Staging Area: Before committing your changes to the repository’s history, you need to add those changes to the staging area. The staging area allows you to group logically related changes together before creating a commit. Changes in the working directory appear in red in the git status output.
    • Commit History: A commit object tracks a specific set of changes to the repository.
    • Adding Files to the Staging Area (git add <file_name>): To move changes from the working directory to the staging area, you use the git add command followed by the name of the file or directory you want to add. For example, git add hello.txt stages the hello.txt file. Changes in the staging area appear in green in the git status output.
    • Committing Changes (git commit -m "commit message"): To create a commit object and save the staged changes to the repository’s history, you use the git commit command.
    • You should provide a concise commit message that describes the changes you’re making. This metadata helps in reviewing the history later.
    • You can use git commit by itself, which will open a text editor to write your commit message.
    • Alternatively, you can use the -m option followed by your commit message in quotes, for example, git commit -m "Adding initial hello.txt file".
    • The first time you use git commit on a machine, Git might prompt you to configure your username and email address using git config --global user.name "Your Name" and git config --global user.email "your.email@example.com". This information is associated with your commits.
    • Viewing Commit History (git log): The git log command displays a list of all the commits that have been made in the repository. It shows the commit hash (a unique identifier), the author, the date, and the commit message. The HEAD pointer indicates the current commit you’re checked out to. git log --all --oneline provides a concise one-line summary of the commit history for all branches.
    • Modifying Files: After creating a branch and making changes to files, git status will show the modified files in the working directory. You need to use git add to stage these changes and then git commit to save them to the current branch.
    • Merging Branches (git merge <branch_name>): Once you have completed a feature on a dedicated branch and are confident in your changes, you can merge that branch into another branch (e.g., main).
    • To merge a branch, you first need to check out to the target branch (the branch you want to merge into).
    • Then, you use the git merge command followed by the name of the source branch (the branch containing the changes you want to merge). For example, if you’re on the main branch and want to merge the my-feature branch, you would run git merge my-feature.
    • The tutorial mentions a fast-forward merge, which occurs when the target branch has not diverged from the source branch since the creation of the source branch. In this case, Git simply moves the target branch pointer to the latest commit of the source branch.
    • Deleting a Branch (git branch -d <branch_name>): After a feature branch has been successfully merged into the main branch, it is often no longer needed and can be deleted using the git branch -d command followed by the branch name. For example, git branch -d my-feature would delete the my-feature branch.
    • Stashing Changes (git stash): The git stash command allows you to temporarily save changes you’ve made in your working directory without committing them. This is useful when you need to switch to another branch quickly or want to experiment without affecting your current working state.
    • git stash or git stash push will stash your uncommitted changes.
    • git stash list shows a list of your stashed changes.
    • git stash apply reapplies the stashed changes to your working directory but keeps the stash entry.
    • git stash pop reapplies the stashed changes and removes the stash entry from the list.
    • git stash clear removes all entries in the stash.
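
    As a consolidated sketch, here is the whole session from the commands above strung together end to end (file names, branch names, and commit messages follow the tutorial's examples; the user name and email are placeholders):

    ```sh
    # One-time setup (name and email are placeholders)
    git config --global init.defaultBranch main
    git config --global user.name "Your Name"
    git config --global user.email "your.email@example.com"

    # Create a repository and make the first commit on main
    git init test-project
    cd test-project
    git branch -m main
    echo "hello" > hello.txt
    git add hello.txt
    git commit -m "Adding initial hello.txt file"

    # Develop on a dedicated feature branch
    git branch my-feature
    git checkout my-feature
    echo "hello world" > hello.txt
    git add hello.txt
    git commit -m "Modifying hello.txt"

    # Merge the feature back into main and clean up
    git checkout main
    git merge my-feature              # fast-forward merge in this case
    git branch -d my-feature
    git log --all --oneline

    # Stash an uncommitted experiment instead of committing it
    echo "testing" >> hello.txt
    git stash push
    git stash list
    git stash pop
    ```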

    These basic Git commands and concepts are fundamental for using GitLab as a source code management system and for participating in collaborative software development workflows. The tutorial series builds upon these basics to introduce more advanced features within the GitLab platform.

    GitLab Interface: Components and Navigation

    Based on the GitLab Tutorial Series provided in the sources, here’s a discussion of the GitLab interface:

    The GitLab interface offers a wide array of features for software development, making it a comprehensive DevOps platform. Navigating this interface is a key skill for utilizing GitLab effectively.

    Key Components and Navigation:

    • Top Navigation Bar: After logging into your GitLab account, you’ll typically land on a projects dashboard. The top navigation bar provides access to several important areas:
    • Profile Menu (Far Right): Allows you to access your GitLab profile, set a status, upgrade your subscription, edit your profile, view account preferences (including access tokens and SSH keys), and sign out.
    • Support/Help Menu: Provides access to GitLab documentation and support resources.
    • To-Do List: Shows a list of items requiring your attention, such as assigned issues or merge requests.
    • Merge Requests Dropdown: Allows you to access merge requests assigned to you or where you are a reviewer. A merge request is a request to merge one branch into another and is a central place for code review and discussion.
    • Issues Page: Enables you to query issues across projects based on various parameters, such as issues assigned to you or where you were mentioned. A GitLab issue is a way to track work related to a GitLab project, including bug reports, tasks, and feature requests. Your software development workflow should often begin with the creation of an issue.
    • New Item Creation: A “+” icon allows you to create new projects, repositories, groups, and snippets. A snippet is a small piece of code that can be shared.
    • Top-Level Search Bar: Enables searching across GitLab.
    • Left-Hand Side Menu: This menu provides access to various dashboards and project/group-specific sections:
    • Projects Dashboard: Lists all your GitLab projects, starred projects, and allows you to explore public projects. A GitLab project is essentially a container for a Git repository with built-in CI/CD functionality and issue tracking. It has a one-to-one mapping to a Git repository. As we discussed previously, Git is a version control system, while GitLab is a source code management system that hosts Git repositories.
    • Groups Dashboard: Shows the GitLab groups you are a member of and allows you to create new groups. A group allows you to manage settings across multiple projects and provides logical categorization of users or projects.
    • Security Dashboard: Provides security-related information.
    • Environments Dashboard: Shows information about project environments.
    • Within a project, the left-hand menu provides access to sections like:
    • Overview: Displays commits, branches, tags, and the project’s code. You can switch between branches (separate lines of development) using the branch specifier. The main branch is often the default branch.
    • Issues: For managing and creating issues related to the project. Projects also have agile boards for issue tracking.
    • Merge Requests: For managing and viewing merge requests related to the project.
    • CI/CD: For configuring continuous integration and continuous delivery/deployment pipelines. A GitLab pipeline is a top-level component defining the CI/CD process for a project.
    • Packages & Registries: For publishing and managing software packages, container images (Docker), and Terraform modules. The GitLab Package Registry allows GitLab to act as a public or private software package registry. The GitLab Container Registry is a private registry for Docker images. The Infrastructure Registry supports Terraform modules.
    • Deployments: For viewing releases and environments. A release in GitLab can include packages, release notes, evidence, and a snapshot of the source code.
    • Analytics: Provides project analytics.
    • Wiki: For publishing project documentation.
    • Snippets: For creating and managing code snippets within the project.
    • Settings: Allows you to configure various project settings, similar to group settings but at the project level. This includes settings for merge requests (e.g., merge methods, merge checks), repository (e.g., default branch, protected branches), and CI/CD. Protected branches, like main and often production, restrict who can push changes to them.
    • Within a group, the left-hand menu provides access to:
    • Group Information: Includes activity, labels, and members. Members are GitLab users or groups with access to the group or its projects. Members are assigned roles with specific permissions.
    • Issues: Shows all issues associated with projects in the group.
    • Boards: Provides a lightweight agile board for group-level issue management.
    • Merge Requests: Shows merge requests across all projects within the group.
    • Settings: Allows you to manage general settings (name, ID, visibility), integrations with external tools, group-level CI/CD, and runners (processes that execute CI/CD jobs).

    Key Terminology:

    • Group: Manages settings across multiple projects, enables logical categorization of users or projects.
    • Project: A container for a Git repository with built-in CI/CD and issue tracking.
    • Member: A GitLab user or group with access to a project or group, assigned a role with permissions.
    • Merge Request: A request to merge one branch into another, facilitating code review and discussion.
    • Issue: A way to track work related to a GitLab project (bugs, tasks, features).
    • Snippet: A small, shareable piece of code.

    Account Settings:

    You can access your account settings from the profile dropdown menu by selecting “Edit profile”. This area allows you to manage:

    • General Information: Your name, email, etc.
    • Access Tokens: For authenticating with GitLab via the command line or APIs; tokens can be used instead of passwords for command-line interactions.
    • SSH Keys: Another method for command-line authentication. You generate an SSH key pair and add the public key to your GitLab profile. This allows your Git client to communicate securely with your GitLab account. The tutorial demonstrates how to generate an SSH key pair using ssh-keygen and add the public key to GitLab.
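
    As a brief sketch of that flow (the Ed25519 key type and the comment string are illustrative choices):

    ```sh
    # Generate a key pair; accept the default path and choose a passphrase if desired
    ssh-keygen -t ed25519 -C "your.email@example.com"

    # Print the PUBLIC key, then paste it into GitLab under
    # Profile menu -> Edit profile -> SSH Keys
    cat ~/.ssh/id_ed25519.pub

    # Verify the connection (host shown is for gitlab.com; use your instance's host otherwise)
    ssh -T git@gitlab.com
    ```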

    Project Homepage:

    The project homepage provides a quick overview of the project’s activity and code. Key elements include:

    • Codebase View: Displays the files and directories in the repository, with the README file often rendered at the bottom.
    • Branch Specifier: Shows the currently checked-out branch and allows you to switch between branches or tags.
    • New File/Upload: Options to create new files or upload existing ones directly through the web interface.
    • Web IDE: A browser-based integrated development environment for modifying the codebase.
    • Clone Repository: Provides the SSH or HTTPS URL to clone the repository to your local machine using the git clone command. Cloning downloads the project as a Git repository to your local machine, allowing you to make commits and push them back to GitLab (see the sketch after this list).
    • Quick access to create new branches or tags.
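
    A brief sketch of the clone-and-push cycle, assuming a hypothetical group/project path:

    ```sh
    # Clone over SSH (the URL comes from the project's Clone button; the path is hypothetical)
    git clone git@gitlab.com:my-group/test-project.git
    cd test-project

    # Work on a dedicated branch, then push it back to GitLab
    git checkout -b my-feature
    echo "hello world" > hello.txt    # edit a file in the working copy
    git add hello.txt
    git commit -m "Update hello.txt"
    git push -u origin my-feature
    ```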

    Understanding these components and how to navigate them is crucial for effectively using GitLab for source code management, collaboration, and CI/CD workflows.

    Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners

    The Original Text

    Hey what’s up everybody my name is Moss and  Welcome to this tutorial series on GitLab in   this tutorial series I’m going to teach you how  to utilize some of the core features of GitLab   in this video I’m going to walk you through the  road map of topics that we’re going to cover   in the tutorial series as well as the learning  objectives and who this course is made for   I’ll also cover the first and  second topics of this course   in this video as well so without  further delay let’s dive right in   so for the first topic of this video we are going  to be answering the question what is GitLab and   then I’ll walk you through the basics of git but  before i do that let me introduce myself my name   is moss and I’m an experienced devOps engineer  with over six years of experience in industry   now let’s take a look at the roadmap of topics  that we’ll be covering in this tutorial series   as I already mentioned our first topic will  be answering the question What is GitLab then   I’m going to walk you through the basics of Git  and after that I’ll introduce you to the major   components of the GitLab interface once you’re  comfortable with the GitLab interface I’m going   to introduce you to the basic workflow in GitLab  called the git lab flow and once I’ve given a high   level overview of the GitLab flow we’ll perform  hands-on activities that utilize the GitLab flow   we’ll then dive into more advanced  topics starting with how to do CI/CD   in GitLab I’ll then show you how to  migrate your Jenkins pipelines to GitLab CI after that we’ll explore get lab’s packaging  and releasing features and finally I’ll show   you how to integrate the lambda test platform  with GitLab ci to perform cross browser testing   now let’s quickly go over our learning  objectives the first learning objective   is to give you an introduction to GitLab CI you  will also learn the fundamental commands of Git   you will know how to work in GitLab using the  GitLab flow you will also understand and be   able to perform CI/CD in GitLab you will know  how to migrate your Jenkins pipelines to GitLab   and you’ll learn how to deploy software using  GitLab’s packaging and releasing features   so who is this course for this  course is made for devOps engineers   software teams who want to migrate from Jenkins  to GitLab and developers whose team uses GitLab the prerequisite to this course  is that you have access to GitLab   or a private instance of GitLab so you  should have a user account on whatever GitLab   instance that you’re using you will also need a  recent version of git installed on your machine now let’s get into the first topic what is git lab   get lab is an open source software development  platform and at its core it is a source code   management system but it also offers additional  functionality like CI/CD on top of being a source   code management system and as you can see on the  right GitLab describes itself as a DevOps platform   so what’s the difference between Git and  GitLab Git is a version control system   you use Git to keep track of changes  made to source code files in a codebase in contrast GitLab is a source code management  system so you would use GitLab to host Git   repositories so that they can be shared with  other people on your team similar to how you might   upload and share files to dropbox or google  drive you’re doing the same with GitLab   but for source code so why use GitLab GitLab  enables collaboration among software teams with   the GitLab flow git lab also has 
built-in CI CD  functionality GitLab is highly interoperable and   it can integrate with other tools these are just  some of the features that gitlab offers but there   are plethora of other features now that we have  a high level understanding of What GitLab is ?  let’s dive into our next topic okay let’s get  ourselves up to speed with some basic git commands   this might be a refresher if you’re already  familiar with git but if you’re not let’s go   ahead and verify that you have git installed on  your machine i currently have my terminal open   and to verify my installation of git i’m going to  invoke git and i’m going to pass in the dash dash   version option and this will return the version  of git that is installed on my machine if it’s   installed and as you can see i have get version  2.33.0 installed so the first git command that   we should cover is the git init command and if you  don’t pass any parameters to the get init command   it will initialize the current directory as a git  repository so if i were to execute the get init   command as it is right now it would initialize  the lambda test folder as a new git repository   and it makes sense to use git init like  that if you have a pre-existing code base   that you want to check into version control and  initialize as a new git repository however if   you didn’t have a pre-existing code base that  you wanted to initialize as a git repository   then what you would do is you would pass  the desired name of the git repository   that you want to initialize as an argument to the  get init command and that’s the option that we’re   going to use and let’s call our git repository  test hyphen project okay and what this is   going to do is it will create a subdirectory  under the lambda test directory and uh that   directory will be called test hyphen project and  that will be our git repository so i’m going to go   ahead and execute the get init command and you can  see at the bottom here it says initialize empty   git repository in and then it specifies the path  of the git repository and then directly above that   it gives us an informational message about the  name of the default branch in our git repositories   and it gives us a couple of commands to rename  the default branch in our repositories so let’s   go ahead and do that let’s execute the first  command here which is git config dash dash global init dot default branch and then let’s set the  the default name to the default branch of our   Git repositories as main and then to execute the  second command we need to change directories into   the Git repository that we just created so I’ll  go ahead and cd into the test project directory   and then to rename the branch  of our current git repository   to main we will execute git branch dash  m and then the target name which is main   so if we take a look at the test project directory  it should be empty let me clear the terminal real   quick and if I do an ls there’s nothing in the  test project directory if I do an ls-al to see   the hidden folders in the directory you’ll see  a hidden folder called dot git and the dot get   directory is what makes the test project directory  a Git repository without the dot git directory the   test project folder is just a folder it is not  a Git repository it’s not tracking changes to   any of the files within the test project folder  so if i wanted to verify that the test project   directory is a git repository i can look for  the dot get folder and i can also do a git   status inside of 
the directory and this will also  tell me whether or not i’m in a git repository   and as you can see from the output of the git  status command we are currently on branch main   which is the default branch that was created  when we use the get init command and then   it says that there are no commits yet so  we haven’t generated any new git commits   uh in this repository so this repository  is essentially a blank slate it has zero   git history uh yet and then in the last line of  the output it says that there’s nothing to commit   so the git status command essentially  uh tells us what the current state   of our git repository is so if we’ve added any  new files to the test project directory or changed   existing files within the test project directory  when we perform a git status git status will show   all of those changes in the output git status is  a very useful command and you’ll find yourself   running git status very frequently when you’re  using git and working inside of a Git repository   now this git repository isn’t very useful to us  if there aren’t any files in the project that   we want to version control so what we can do  is add new files to the test project directory   and commit those new files on the main branch but  if we do that then we’re not taking advantage of   of git’s one of git’s most powerful features  which is branching so what’s important to know   is that the main branch in a git repository and  it’s not always called the main branch sometimes   it’s called the pristine branch or the stable  branch is a branch that is supposed to be code   that has no bugs in it it’s supposed to be  deployable code that can go to the customer and   because of that if you’re a developer who wants to  work on a new feature working on the main branch   is dangerous because while you’re working on that  feature you might commit some changes that cause   other parts of the code base to break and if the  main branch breaks that breaks the code base for   everyone not just for you so it’s really important  to understand as a developer you should never be   committing changes directly to the main branch  of your git repository so if you’re developing a   new feature in a git repository you always want to  make sure that you’re not developing that feature   on the main branch you should be developing it  on its own dedicated branch and you can think of   a branch in git as an entirely separate copy of  the code base it’s a copy of the code base that   you can work on and experiment in and develop your  feature in without having any impact on the main   branch of the code base so you would develop your  feature in this branch or this copy of the code   base and once you’ve completed your feature and  you’re confident you’ve ran tests on the feature   and you’re confident it can be merged into the  primary version of the code base you can actually   use git to merge your branch and your feature  into the main version of the code base   so we are going to follow the best practice  of creating a branch in order to make changes   in our repository but before we can actually  create a branch we do have to generate some   history so right now in our git repository  we have no commits made in the repository   so we have to make at least one commit in the  repository so that we can create a new branch   and use that branch to develop a feature so what  I’m going to do is create a new file in this git   repository I’m going to create it using vim so  I’m going to invoke vim and feel free 
to use a   different uh editor if you prefer a different  editor but I’ll call the file hello dot text okay and inside of this file  we’ll simply say hello okay so if I do an ls you can see that we now have  the hello text file in our Git repository   and how does adding the hello.txt file change  the state of our git repository well we can   check the state of our repository using the get  status command so I’m going to do git status   and you can see now we have a little bit  different output in the git status command   it says that we’re on branch main there have been  no commits made to this repository and then there   is one untracked file and that is the hello.txt  file and git is really useful in that it suggests   commands to use to progress through the the  workflow and it says use Git add and then the   file name to include in what will be committed  so what is the get ad command and what does it   mean to add a file to a git repository and to  answer that question I’m going to pull up uh the   get documentation in my browser and I’m already  at the target page and just for reference I’m at   get hyphen sem.com so in git there is a concept  of a two-stage commit and what that means is that   in order to commit changes to git repositories  history which means make a commit object a commit   object is what tracks a particular set of changes  to a git repository we have to use this two-stage   commit and it this two-stage commit process and  it begins with adding changes from what’s called   the working directory into the staging area and if  you’re wondering well how do i know whether or not   my changes are in the working directory or if  they’re in the staging area it’s pretty easy to   check where your change is at using the get status  command if I take a look back at my terminal   from the get status output we can tell whether  or not a change is in the working directory or if   it’s in the staging area if it’s in the working  directory those changes will show up in red as   they do here so this change is currently in the  working directory and we want to move it to the   staging area and what the staging area allows you  to do is commit changes that are logically related   to each other so in the working directory you can  make changes to whatever files and you can make   unrelated changes within files but you don’t have  to commit all of those changes at once what you   can do is you can stage those changes the changes  that are logically related to each other before   actually creating a commit object and creating  well-formed commits is actually very important   to do so in order to move changes from the working  directory to the staging area we have to use the   get add command followed by uh the the argument of  the get add command are the name of the files or   the directories that we want to add to the staging  area and then from the staging area in order to   to create a commit object we would use the git  commit command so let’s go back to our terminal   and add our hello.txt file to the staging area so  to do that i’m going to say git add hello dot text   okay and it doesn’t provide any  output from that command but it did   execute successfully and we can confirm  that by running the git status command again   so now our hello.txt file is in the staging area  and how do we know it’s in the staging area well   any changes that are currently staged will show up  in green and you can see here that git recognizes   that it’s a new file that has just been added to  the 
repository and in order to commit this file   to git’s history we would use the git commit  command and we can also unstage this file as   well and move it back to the working directory  and git tells us how to do that we can use Git   rm dash dash cache and then the file name  to unstage this file but we won’t unstage   this file we are going to follow through with  the two stage commit process and to do so we will   invoke git commit okay and if we invoke git commit  by itself what it will do is it will bring up the   default uh git editor and prompt us for a commit  message and a git commit message is essentially   a description a very concise description of  the change that we’re making so it’s metadata   on uh the changes that we’re making to the git  repository so that later on if someone wants to   review the changes made to the repository they  have concise summaries of all the changes that   have been that have been made within a single  commit so we can use git commit without any   arguments or options and if so it’ll invoke your  default editor and in my case it should pull up   vim and it does and it prompts me to enter a  commit message but the other option without   using without pulling up the default  editor so if I exit vim and do git commit   I can use the dash m option and this way I can  actually pass my commit message in inline with the   invocation of the git commit command so if I wrap  it in quotes I can then pass in my commit message   directly at the command line and this is the  option that I’m going to use and in my commit   message I’m simply going to say adding hello dot  text okay and end quotes and then I hit enter   and in my case it does give me a confirmation  message that a commit object was created   but if this is the first time that you’re  using this git installation on your machine   then when you invoke the git commit  command it will probably ask you to   enter a username and email address for your  git configuration so that when it creates the   commit object the metadata related to who  committed uh those changes and who was the   author of those changes will show up with your  information in my case I already have those   configurations set but if you want to set those  configurations now you would simply say Git config   and pass pass in the dash dash global option and  then we’ll say user.name and then you can pass in   your name so in my case I would just say moss and  then we can also say Git config dash dash global   user dot email and then you would  pass in your email same as the   username I won’t reset my email since I have it  uh already configured so I’m gonna do a control c   and now that we have a commit generated  in the repository we might want to review   the history of the repository at a later  date so how would we do that how would we   take a look at the commit that we just created  well to do that we can use the get log command   so if I invoke git log just like this it will show  a list of all of the commits that have been made   in the git repository so I’m going to go ahead  go ahead and hit enter and in the output of the   git log command we can see all of the metadata  related to a commit so in our case we only have   a single commit and on the first line here you can  see the hash of the commit object and the hash is   the unique identifier of that commit object  and directly next to the hash of the commit   is this head and then the arrow pointing to  main what this is saying is that the main branch   is um pointing 
to this commit so essentially the  main branch is up to date with the latest version   of the repository and then head and the arrow  here pointing to main this simply means that   we’re currently checked out to the main branch if  we made changes to the repository if we made new   commits on the repository it would be associated  with the main branch and then below that line we   have information on the author and this is  information that would have been configured   using these commands so I have the author name and  the author email address and then below that is   the uh creation date of the commit okay and then  directly below that is the actual commit message   that uh we provided in line when invoking the git  commit command okay so now that we’ve created our   first commit in the Git repository when I perform  a Git status we shouldn’t see the no commits   message in the output of the Git status command  and as you can see we don’t see it all we see   is that we’re currently on branch main and that  there is nothing to commit and the working tree   is clean meaning there’s no changes in the working  directory that could be added to the staging area   now if we want to make any additional changes  to the Git repository what we should do first   is create a branch so that we can work on those  changes on a separate branch that isn’t the main   branch so let’s say I want to modify the hello.txt  file and I also want to add a new file to the Git   repository so the first thing that I want to  do is create a branch and to create a branch   I can use the get branch command and then as an  argument to the get branch command I can pass   in the name of the the desired name of the branch  that I want to create so let’s call this branch   my hyphen feature and then I’ll hit enter so  that just created the branch and I can confirm   that it created the branch uh by just invoking  git branch without any arguments and you can   see that two branches are listed in the output the  main branch and then the my hyphen feature branch   but notice something is that the main branch is  highlighted green with this asterisk and what   uh that means is that we’re still working on the  main branch if we were to make any changes right   now add any new files those changes would be added  to the main branch if we were to commit them so   what we need to do is we need to check out to the  my feature branch so in git we call it checking   out to a branch and before we check out the my  feature branch let’s run uh git log one time   and you can see in the git log output  now uh not only is the main branch listed   in the output but also the my feature branch and  both of the branches are pointing to the same   commit right now but also notice that the head  pointer is currently pointing to main which means   that we’re uh that we’re working on the main  branch and any changes that we make would be   committed to the main branch and they would not  be applied to the my feature branch so let’s go   ahead and check out to the my feature branch and  to do that we can use git checkout and then the   name of the branch that we want to check out to so  I’m going to say git check out my hyphen feature   and it says in the output that we switch to branch  whoops that we switch to branch my hyphen feature   and if I run a git status you can see that  we’re currently on branch my hyphen feature   and that there is no changes to commit  and also if we run git log again   not much has changed but notice uh that  the head 
pointer is now pointing to the   my feature branch and it’s no longer pointing  to the main branch which further confirms that   we’re checked out to the my feature branch  and any changes that we make at this point   and commits that we create will be associated  with the my feature branch and they will not be   applied to the main branch so let’s add  some new changes to our git repository   and practice the two-stage commit process so the  first change that I’m going to make is to the   hello.txt file so I’m going to open it in vim but  feel free to use whatever editor editor you prefer and I will simply add world so be hello world   and we’ll save it and I’ll do a git status  and you can see in the working directory Git   prompts me it says changes not staged for  commit and then it shows here instead of new   file it shows that the hello.txt file was  modified and we can either use git add to   add those changes uh to the staging area or we  can use git restore and then the name of the file   to discard the changes in this case it would  remove the word world from the hello.txt file that   we just added if I were to run git restore and  then the file name git restore and then hello.txt   and then for the next change I’m going to create  a new file and I’m going to call it tess.txt and I’m going to say unrelated change and I’ll save it and then I’ll do git status again so the test.txt file is a new file which means  it’s going to be listed under untracked files   Git currently is not tracking the test.txt file in  order to track this file we have to add it to this   Git repository’s history and to add it to the the  history we have to make a new commit now remember   that I mentioned that we can stage logically  related changes together so that we can create   well-formed commits and it might be the case that  the change that I made to the hello.txt file is   unrelated to the addition of the test.txt file so  I might want to commit these changes separately so   let’s exercise the staging area and commit just  the changes made to the hello.txt file and then   we’ll create a separate commit for the addition of  the tests.txt file so to add the changes made to   the hello.txt file to the staging area I would say  git add and then hello dot text and then if I do   a git status we can see that the changes made to  the hello.txt file are in are now in the staging   area and then when we invoke git commit it will  only commit the changes that have been staged   to get’s history so if I say git commit dash  m and then we’ll say modifying hello.txt   I’ll hit enter so it created a new  commit and then if I do git status   the change where we added the test.txt file to  the repository is still in the working directory   and if I perform a Git log we should see  two commits in this git repository’s history   and we do and notice now that the branches are  not pointing to the same commit anymore the   main branch is pointing to our first commit in  the repository and the my feature branch is now   pointing to the latest commit uh made in  the repository where we’re modifying the   hello.txt file so now let’s add and commit the  test.txt file so I will say Git add test.text   and then git commit dash m and  we’ll say adding the test.txt file okay so that created a second commit so now  we have two commits on the my feature branch so if   we list the files in the test project directory we  can see the hello.txt file and the test.txt file   and if I were to cat the hello.txt file we can see  the 
contents of the file and it says hello world   so this is the latest version of the hello.txt  file on the my hyphen feature branch if we were to   check out back to the main branch you’ll notice  that the directory structure gets updated   as well as the hello.txt file so let’s  go ahead and check out to the main branch   so I’m going to use Git checkout and then main   and it says that we’ve switched to the the main  branch and I’ll confirm with Git status as well   and it says that we’re on the main  branch here as well so if I do an ls   you can see that the test.txt file is no longer  listed in the directory because on the main   branch the test.txt file hasn’t been created  yet and similarly if we can’t the hello.txt file   you can see that it’s the older version of  the hello hello.txt file that doesn’t say   hello world so while we made changes on  the my hyphen feature branch the history of   the main branch has remained intact nothing  has changed on the main branch we’ve only made   changes on a separate branch which remember we can  kind of consider almost like an entirely separate   copy of the code base it’s an isolated uh space  where we can make changes and experiment without   impacting um the main version of the of the git  repository but if we’re satisfied with the changes   that we’ve made on that branch then what we can do  is we can merge those uh changes and those commits   into the main branch and to merge a branch in  git we want to be checked out to the target   branch and then we’ll specify the source branch  that contains changes that we want to merge   so currently we’re checked out to the main  branch if I do a git status we can see that   we’re checked out to the main branch and the main  branch is the branch that we want to merge changes   into so now that we’re checked out to the main  branch we can use the git merge command to merge   the changes from the my hyphen feature branch  into the main branch and the only argument   that we need to pass to the git merge command is  the name of the source branch which in our case   is the my hyphen feature branch and real  quickly before we execute the command I want to   do a git log and I’m actually going to pass in  the dash dash all option in the dash dash one line   and dash dash all will show us the history  of all branches in the git repository and   dash dash one line will give us a nice uh one line  summary for each uh each commit in the repository   okay so there’s three commits in total  and you can see that the my feature branch   is pointing to this commit and then the main  branch is uh kind of behind it’s pointing to   the first commit that was made in the repository  so when we perform a get log after executing   the merge we should expect to see the  main branch pointing to this commit so   let’s go ahead and execute the merge  I’m going to say git merge my feature   and it gives us kind of a confirmation message  saying that it performed a fast forward merge   and we have a new file test.text and a couple  of lines were changed in the hello.txt file   and we can also confirm this by performing an ls  and we can see that the test.txt file now exists   in the test project directory and if i cat  hello.txt we can see it’s the latest version   of hello.txt which includes hello world and  finally let’s check the history using git log   actually i meant to use git log with the  one line option in dash dash all okay   so now we can see in the output here that the  main branch is pointing to the same 
commit   that the my feature branch is pointing  to so it includes the changes contained   in both of these commits and since the changes  that were made on the my hyphen feature branch   are now merged into the main branch there’s really  no need for the my hyphen feature branch so i’m   going to delete that branch and i can do so using  git branch dash d and then my hyphen feature now   there is one more very useful git command that  i’d like to show you so i’m going to go ahead and   clear the terminal and the command is git stash  the git stash command allows you to experiment   with various changes and save those changes  without actually committing them to the get   the git repositories history as a  commit so to show you what I mean   I’m going to make a modification uh to the  hello.txt file so I’m going to open it up   I’m going to add a word here I’m just going to  say hello world testing and I will save the file   so I’m going to run git status and we can  see in the output that the hello.txt file   has been modified and we can stage that change  to be committed or we can actually stash   that change where we added the word testing  to the hello.txt file when I use the git   stash command I’m taking all the changes that  are in my working directory and I’m stashing   stashing them away in a reserved location that’s  outside of git’s history and to stash our changes   to the hello.txt file i can either say git  stash or git stash push and that will stash   the changes made to the hello.txt file away so  I hit enter and it says saved working directory   and index state work in progress on the main  branch and if I perform a git status you’ll notice   that I have a clean working  directory that change no longer   no longer exists in the working  directory and if I cat the hello.txt file   the testing word is not included in the hello.txt  file that change that I made still exists but it’s   been stashed away and to see that change I can use  Git stash list and it will list all of the changes   that have been stashed by Git so i can see my  change in the stash at position zero and if i   want to reapply that change to the working  directory i can either use get stash apply   or i can use git stash pop and there is a  difference between these two commands when i   use git stash apply it will reapply the changes  to the working directory and you can see here that   now hello.txt is modified if I cat hello.txt i can  see the word testing in the file now but it will   not remove this change from the stash so if I do  get stash list again I can still see that change   listed in position 0. 
The only difference between `git stash apply` and `git stash pop` is that git stash pop will actually pop the change off of the stash, so that it is no longer listed there. If I perform a `git stash clear`, it will clear all of the entries in the stash; I still have my change applied in the working directory. I'll stash this change again, but this time I'm going to use pop instead of apply: so `git stash push`, then `git stash list`, and then I'll do `git stash pop`. Okay, it shows that the hello.txt file has been modified, and if I do `git stash list`, you can see that the stash entry has been removed. In later videos we'll explore a few more Git commands, but at this point you're ready to start working with GitLab, and in the next video we'll introduce you to some of the major components of the GitLab interface.

Hey, what's up everybody, my name is Moss, and welcome back to this tutorial series on GitLab. In this video we are going to focus on getting you familiar with the GitLab interface. With the overwhelming number of features that GitLab offers, it's easy to get lost in the interface, so we're going to complete some basic tasks so that you feel more comfortable working in GitLab and are prepared for the activities in upcoming videos. Let's take a closer look at what we're going to cover: our first objective is to provide an introduction to GitLab terminology, and our next objective is to become familiar with the GitLab interface.

First, let's explain some important GitLab terms, starting with "group". A group allows you to manage settings across multiple projects at the same time; it enables logical categorization of multiple users or GitLab projects. A GitLab group can also provide a cross-project view of things like issues and merge requests.

The next important term is "project". A GitLab project is essentially a container for a Git repository; similar to GitHub repositories, there is a one-to-one mapping from a GitLab project to a Git repository. A GitLab project also has built-in CI/CD functionality, you can perform issue tracking inside of a GitLab project, and in addition, GitLab projects provide collaboration tools like merge requests. These are just some of the features that a GitLab project offers, but they are the features that we will primarily focus on in this series.

Another important term to be familiar with is "members". Members are GitLab users or groups that have access to a GitLab project. Members are assigned roles, and these roles include permissions to perform actions on GitLab projects or groups.

One of the most important concepts to be familiar with in GitLab is the merge request. A merge request is a request to merge one branch into another. Merge requests provide a space to have a conversation with the team about the changes on a branch, and they are the central place through which changes are reviewed and verified.

And finally, we have GitLab issues. A GitLab issue is a way to track work related to a GitLab project, and we can use GitLab issues to report bugs, track tasks, request new features, and much more. Your software development workflow should begin with the creation of an issue.

A quick note: as I said in the previous video, GitLab has a plethora of features, so for the sake of time I will only be covering the most relevant features in the GitLab interface for this series. If I skip over a particular feature in the interface, it doesn't necessarily mean that the feature isn't useful or important;
it is only to keep our conversation focused on the topics that will most likely be covered in this series. Now that we're familiar with some of the key terms and concepts in GitLab, we can begin our activities in the GitLab interface.

Okay, so the first thing that we want to do is navigate to gitlab.com, and from here we'll navigate to the login page. Go ahead and log in to your GitLab account; if you don't have a GitLab account already, you can create one by coming down to the "Register now" link and following the steps on that page. If that is the case, just pause the video, create an account, and then we will log into our GitLab account after it has been created. So I'm going to sign in.

The home page, once we sign in, is like a projects dashboard: it lists all of our GitLab projects, and we can also see starred projects, which are like favorited projects, and we can explore other projects on gitlab.com. From this page we can also create a new project, which we will do shortly. If we take a look at the navigation bar, on the far right we have our profile, and we can access our GitLab profile from this drop-down menu: we can set a status associated with our profile, upgrade to a higher-tier GitLab subscription, edit the profile, view our account preferences, and sign out from here. Next to our profile menu we have a support/help menu, and next to the help menu we have the to-do list page, which shows us a list of items in GitLab that we need to be aware of; for example, if there are issues assigned to us or merge requests assigned to our account, those will show up on this page.

Next to the to-do list page we have a merge request drop-down menu, where we can access merge requests that have been assigned to our account or where we have been listed as a reviewer. Then we have an issues page, where we can query issues across projects based on query parameters; the default query is issues where my account is the assignee, but we could use any other query parameter, like issues where my account was @-mentioned in a comment.

Next to the top-level search we can create new items in GitLab: we can create a new project/repository, we can create a new group, and we can create a new snippet, which is just a small snippet of code rather than a full code base. In the left-hand side menu we can access various dashboards; for instance, we're in the projects dashboard right now, but we could also access the groups dashboard, the security dashboard, or the environments dashboard, and each shows information related to those features.

Now, before we create a new project, I want to review the account settings of a GitLab account so you can get familiar with them, and we will update some information inside the account settings. From the GitLab profile drop-down menu we will select "Edit profile", and that takes us to the top level of the user settings. Below Chat in the sidebar we have Access Tokens. There are a few ways that you can authenticate with your GitLab account.
The first way, and the most obvious, is going to the sign-in page and entering your username and password. But if you're authenticating via the command line, you'll likely use one of two methods: you'll either authenticate using a personal access token, or you'll authenticate using an SSH key pair. This page allows you to create new personal access tokens or delete existing ones, and when you create a personal access token — similar to how the applications page was set up — you can select scopes for that token, which define the level of authorization the token has to your account: for example, write permission to repositories, read permission, or the ability to interact with the GitLab API. Below Notifications is where we can manage SSH keys for our account, and this is the method that we're going to use to authenticate. We won't be using personal access tokens; we will create an SSH key pair and then add the public key of that key pair on this page, so that we can authenticate and our Git client can communicate with our GitLab account.

So now that we've gone over the user settings, I'd like to prepare our Git client to authenticate with our GitLab account. Remember, I said we were going to use SSH keys, so what we're going to do now is create an SSH key pair and then add the public key to our GitLab profile. I'm going to navigate to my terminal and open up a new terminal session, and in my terminal I'm going to invoke the ssh-keygen command: that's `ssh-keygen`, then the `-t` option to specify the type of key that we want to generate. We're not going to use RSA; GitLab advises using the ed25519 key type and suggests that it's more secure than RSA. Then we will add a comment to this key with the `-C` option, and we'll say that it's our GitLab key pair.

After you've typed that in, go ahead and hit enter. It prompts us to enter a file name in which to save the key; in this case we're going to accept the default file name, and notice where this key is being saved: in the hidden .ssh folder. That's going to be the name of our key pair — the private key and the public key share the same name, and only the file extensions differ between them. So I'm going to hit enter, and then it prompts us to enter a passphrase for the key pair; I'm going to leave it empty for no passphrase, and I'll hit enter again. That generated the key pair for us in the .ssh directory.

Now what we're going to do is add the public key of the key pair we just generated to our GitLab profile, and to do that we'll first cat the public key: `cat ~/.ssh/id_ed25519.pub`. I'm going to copy the output to my clipboard, navigate back to the GitLab interface, go to SSH Keys, and paste the value of the public key into the Key field. The Title is the comment that we added when we generated the key, so "gitlab key pair", and we can set an expiration date if we want to; I'm going to leave that blank so the key doesn't have an expiration date.
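Here are those key-generation steps as one sketch; the comment string is just a label, and you accept the default file path when prompted:

```sh
ssh-keygen -t ed25519 -C "gitlab key pair"   # generate an Ed25519 key pair
cat ~/.ssh/id_ed25519.pub                    # print the public key to paste into GitLab
ssh -T git@gitlab.com                        # verify authentication (covered next)
```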
Then I'm going to select "Add key". Now that we've added that public SSH key to our account, we should be able to authenticate with our GitLab account from the command line. To verify that, I'll open up the terminal and invoke my SSH client with the `-T` option: `ssh -T git@gitlab.com`. I'm using gitlab.com because my account is on gitlab.com; if you're using a private instance of GitLab, you would specify that private instance's URL instead. If I hit enter, it says "Welcome to GitLab" and then specifies my username, which means that it successfully authenticated with my account.

Okay, so now that we know our SSH key pair is working, let's navigate back to GitLab; I want to take a look at the groups dashboard. From the left-hand side menu I'm going to select Groups, and let's navigate to the "Your groups" page. On the groups dashboard you can see that I've already created a group called "tech with moss group", but let's go ahead and create a new group: up in the top right I'm going to select "New group", and then "Create group". I can give the group a name — feel free to enter whatever name you prefer here; I'm going to say "moss test group" — and then I can specify a unique group URL. Under the group URL we can specify the visibility level of the group, whether it's private or public; if it's public, the group can be viewed without any authentication. Then there are some personalization options for this group: I can specify what my role is, and I'm going to leave it as software developer, but you can see that there are other roles you can select. It also asks who will be using this group, and I'm going to specify "just me" instead of "my company or team" — although if I do select "my company or team", I get a drop-down where I can specify what the group will be used for. I can also invite members to the group using their email address. So I'm going to select "just me" and then select "Create group".

On the group homepage I can see subgroups and projects that are associated with this group — a group can have subgroups — and from this page I can also create new subgroups or new projects associated with this moss test group. In the left-hand menu of the group I can access the group information, which includes the group activity, labels, and members. From the members page we can control the membership that people have to this group: you can see there's one member, my account, and my role is owner for this moss test group, but I can add additional members using their email address and specify the role each member will have when they're added to the group. Below group information I can select Issues, and it will show me all issues that are associated with this group; issues might be created in multiple projects, so this view inside the group will actually show issues across projects. In the Board section I can also utilize a lightweight agile board: issues with a status of open show up in one column, issues with a status of closed show in another, and you can also add additional columns to match your desired issue workflow.
Below Issues we can also view merge requests. If I select Merge requests, we get a view of merge requests that, again, span projects — all the projects that this group is a member of. You can add groups as members to GitLab projects, so any merge requests that show up here would be across all the projects that this GitLab group is a member of. And then finally we have the group settings. Under the group settings we have general settings, where we can control things like the group name, the group ID, and the description of the group; we can upload a group avatar, similar to how we can upload an avatar for our profile; and we can also control the visibility level, control permissions, and enable or disable large file storage — that's Git LFS. We can enable default branch protection, which allows us to protect certain branches so that developers cannot push new commits to specific branches. Under the general settings we have integrations, and this is where we can set up new integrations with external tools, like Atlassian Bamboo for CI/CD; we could integrate this GitLab group with a Confluence workspace, and also with Jira. We can also specify group-level CI/CD settings, where we can define group-level variables that can be referenced from within a GitLab CI/CD pipeline, and we can configure runners, which I won't get into in too much detail right now — I'll leave that for a later video — but essentially, if you're familiar with Jenkins agents, runners are a similar concept: runners are processes that pick up and execute CI/CD jobs for GitLab.

So now that we've explored the interface for groups, let's navigate back to the home page, and from the home page let's go ahead and create a new project. I'm going to navigate up to the top right and select "New project". On this page we can create a blank project, meaning we start a project from scratch; we can create a project from a project template, if we've created one; and we can also import a project — for migrating our source code from another source code management system like GitHub, we can use this import feature. The last option is for when we want to use GitLab purely as if we were using Jenkins, where it's just performing CI/CD for us: we can still use an external source code management system like GitHub and just use the CI/CD feature that GitLab offers to facilitate CI/CD for our project. But in our case we are going to create a blank project, so I'm going to select "Create blank project", and it takes us to a page where we can fill out some details about our project.

You can use the project name field to provide a name for your project, but if you leave this field blank and provide a project slug — which is essentially the URL, or part of the URL, of the project — then it will automatically fill in the project name field for you. I'm going to use that option and call the project moss-test-project; notice that the project name field automatically gets filled in as I enter the project slug. I can also specify where this project should live by using this drop-down: I can specify a group or a specific user. Right now it's under my account, tech with moss, so I'm going to leave it under tech with moss.
I can also provide a project description, if that's applicable, and under the description I can specify a visibility level — whether the project is going to be private or public. If it's a public project, it can be accessed without authenticating to GitLab; if the visibility is set to private, I have to explicitly grant access to each user or group by adding them as members to the project. The last setting allows us to initialize the repository with a README file, which we would do if we didn't have an existing code base locally that we wanted to push up to this GitLab project — and we don't, in our case — so I'm going to leave this setting checked and then select "Create project".

So here we have the home page of the project. We can access the commits, the branches, and the Git tags that have been created, and it tells us a little bit of information about the storage that this project is utilizing. If I close these messages, we can see the actual code base and all of the files in it — right now there's just the README file — but you'll notice that below the directory listing, the README file is automatically rendered at the bottom of the page, and it was automatically filled out with some resources to help us get started with our project. Above the root directory we can also see a couple of other things. The first is the branch specifier: right now we're checked out to the main branch, so the version of the code base that we're viewing here is the version on the main branch. If we wanted to switch branches and view a different branch, we could just select the drop-down, search for a branch or a Git tag, and then check out to it; in our case we don't have any other branches in this repository, so main is the only option. Next to the branch specifier we have the option to create new files, upload files, and create a new directory — all within the current directory, and we're in the top-level directory right now — and we can also create a new branch or a new Git tag. Another really cool feature is the Web IDE, which we can use to modify the code base in our browser without having to download it locally and modify it in our local IDE. Next to the Web IDE we have the option to download the source code as a compressed file, and finally we can clone the repository using either SSH or HTTPS. We didn't cover the git clone command in the last video, but we will cover it in an upcoming video; essentially, cloning means that we are downloading this project as a Git repository to our local machine, so that we can make commits to the repository and then push those commits back up to GitLab. The key difference between cloning and downloading a compressed file is that the compressed file is not technically a Git repository, so we wouldn't be able to make commits inside of it after we've extracted the project.

In the Issues section we can manage and create issues related to this project, and we also have agile boards, similar to what we saw in the group settings pages. Below the Issues section we have Merge requests, and this is where we can manage and view all of the merge requests related to this project. Right now there are none, but we can create a new merge request from this page — and I think we can also create a merge request from the home page of the project as well.
Under Merge requests we can configure CI/CD for the project, so we can create new pipelines — we will explore this feature in later videos, but we can create new GitLab pipelines and pipeline jobs, and we can set schedules for those pipelines. In the Packages and Registries section we can publish software packages to the package registry, and we can also utilize the container registry to publish Docker images; in the infrastructure registry we can publish Terraform modules, if we have any. Under Analytics is the project wiki, and the wiki is where you could publish documentation related to the project; for instance, if you had architectural documentation describing the architecture of your application, you could publish it to the project wiki. A GitLab project can also have snippets of code: you can create a snippet, which is essentially a small piece of code — it might not be an entire code base or even an entire file, it could be just a single function that might be useful to others — and you can use snippets to share that code with other users. A snippet can be shared, version controlled, and downloaded.

Now, for the project settings I won't go into too much detail, because you'll notice that a lot of the settings are similar to the group settings, just at the project level. If I go into Settings and then General, you'll see very similar fields to the group settings fields, with the exception of merge requests — and this is a big one. You can configure various settings related to merge requests in your GitLab project; for instance, you can specify what kind of merge method you would like merge requests to use. Down here, under the merge method, we can specify whether we want a merge commit, a merge commit with a semi-linear history, or a fast-forward merge. You can also specify the default behavior for squashing commits when merging a merge request, and you can enable merge checks as well — for instance, we can enforce that all pipelines must succeed before a merge request can be merged. In the Repository section we can configure various settings related to branches in the GitLab project: for instance, we can set the default branch of the project, which means that any time we open a new merge request, the target branch will automatically be set to the default branch. Another important setting is protected branches. It's typically a best practice to protect the main branch — or your stable branch, or your production branch — since protecting a branch means that only certain people can actually push commits to it, and you can also disable force pushing to that branch, so that nobody can overwrite the history of a protected branch. Below the Monitoring settings we can also control settings related to GitLab Pages, which is a pretty cool feature: Pages essentially allows you to host a static website off of this GitLab project.

So now that we're familiar with the project's features and settings, I'd like us to go ahead and create a new issue in this project. I'm going to select the Issues section, scroll down, and select "New issue". For the title of the issue I'm going to say "Modify the project readme".
For the issue type, I'm going to leave it as Issue, but if we select this drop-down, we can also choose a different issue type; in this case we have the Incident issue type as well. In the description of the issue I'm going to say "modify the project readme file to practice the GitLab flow", and what's cool about the description field is that we can utilize markdown syntax in it and then preview that syntax in the Preview tab. Right now we're not using any markdown, but I can modify this and add a hashtag — a single hashtag will give me a header — so if I preview that, I now have a markdown header. I'm going to change this hashtag to a bullet, and then I'll scroll down and leave the remaining fields empty. We could assign the issue to a specific user in the project — right now I'm the only user in the project — we could add a due date for the issue, we could associate it with a milestone in the project, and we could add labels to the issue as well. Now I'll select "Create issue".

So now that we have the issue created: you may remember, at the beginning of the video when I was discussing the definition of issues, I mentioned that your software development workflow should always begin with the creation of an issue. In the upcoming videos I'm going to introduce you to the GitLab flow, which is the primary workflow that you would utilize in GitLab, and we're going to practice the GitLab flow by modifying the readme file. So thanks for watching, and I will see you in the next video.

Hey, what's up everybody, my name is Moss, and welcome back to this tutorial series on GitLab. After watching the previous video you should be comfortable navigating the GitLab interface, and this means we can start practicing our development workflow inside of GitLab. To do this, I'm going to introduce you to GitLab's primary branching strategy, so if you haven't already, go ahead and grab a coffee and let's get started. In this video I'm going to introduce you to the GitLab flow. By the end of this video you should be able to do the following: describe the concept of a branching strategy, describe the GitLab flow, and explain the differences between the GitLab flow and other branching strategies.

What is a branching strategy? A branching strategy is a software development workflow within the context of Git or another version control system, and it describes how a development team will create, collaborate on, and merge branches of source code in a code base. We practiced a very basic branching strategy in the first video using a feature branch. A branching strategy takes advantage of the branching system in a version control system to enable concurrent development in the code base. But how do you choose a branching strategy? Unfortunately, this is not a straightforward question, as it depends on multiple factors. These factors include, but are not limited to: team requirements, the source code management system that you're using, the environments to which code is deployed and how you want to manage deployment to those environments, and the structure of your code base.

Some of the most common branching strategies in Git include the following: the GitHub flow, the Git flow, and the GitLab flow. Let's take a look at each of these branching strategies. We'll start with the GitHub flow, which is the simplest workflow of the three. Each of the white circles in the graph represents a Git commit. The GitHub flow begins by creating a feature branch off of the main branch.
While checked out to the feature branch, we would make some number of changes until we feel like our feature is ready to be reviewed by others and undergo automated testing. We would then upload our changes to GitHub by using the `git push` command — this action is commonly known as pushing our commits. After pushing our changes, we would open what's known as a pull request; a pull request is the equivalent of a merge request in GitLab. The opening of the pull request should trigger automated testing of the changes located on the feature branch, and once the changes have been reviewed and verified, the feature branch is merged into the main branch.

Now let's take a look at the Git flow, which is the most complex of the three workflows. The Git flow utilizes a long-lived develop branch as the default branch from which developers create feature branches. The Git flow also utilizes release branches, which are created off of the develop branch; if a bug is found during testing on the release branch, a bug fix can be applied to the release branch and then merged back into the develop branch. Release branches are then merged into the main branch, and main is tagged with the release version using a Git tag. In a perfect world, the changes merged from the release branch into the main branch are bug free; since it is unrealistic to expect that to be the case every time a release branch is merged, the Git flow includes hotfix branches. A hotfix branch is created directly off of the main branch to quickly address issues introduced by the changes that are now in production. In addition to merging the hotfix branch back into the main branch, it is also very important to merge the hotfix branch into the develop branch, to ensure that any new development work incorporates the hotfix.

Now let's explore the GitLab flow and its variations. The GitLab flow is simpler than the Git flow but more structured than the GitHub flow, and there are two variations of it. The first, as seen here, utilizes environment branches. In this workflow we have a long-lived production branch, which represents the production environment in which the software application is deployed; the code on the production branch should always be deployable. Like the GitHub flow, this workflow utilizes feature branches off of the main branch, but main is merged into some number of environment branches and then into the production branch. You can think of merging a feature branch into the main branch as deploying your changes to a staging environment; once changes have been verified in the staging environment, they can be promoted to the production branch. You can have multiple pre-production environment branches, representing the various environments in which the changes must be tested before merging those changes into the production branch. Unlike the Git flow, the GitLab flow has an upstream-first policy when it comes to hotfixes: this means that if issues are found in the production environment after a change has been deployed, the hotfix branch must be created off of the main branch and merged back into main — and any other pre-production branches — before being merged into the production branch. The feature branching workflow is similar to the GitHub flow, but instead of creating a pull request, we create a merge request; like the GitHub flow, the opening of a merge request should trigger automated testing.
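As a rough sketch of the upstream-first hotfix policy just described — in a real GitLab project these merges would go through merge requests rather than local merges, and the branch name hotfix-login is only a hypothetical example:

```sh
git checkout main
git checkout -b hotfix-login   # hotfix branches are created off of main
# ...commit the fix on hotfix-login...
git checkout main
git merge hotfix-login         # merge upstream (into main) first
git checkout production
git merge main                 # only then promote the fix to production
```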
The second variation of the GitLab flow utilizes release branches instead of environment branches. You should use this variation of the workflow only if you're releasing software to the outside world — for instance, if you are developing an open source project. This variation is similar to the first in that you branch off of the main branch into feature branches and then merge back into main. In this workflow you will want to wait as long as possible to create a release branch; after creating the release branch, apply only major bug fixes to it, incrementing the version as needed. Bug fixes are merged with the upstream-first policy. Release branches are long-lived, until a specific release of the software is no longer supported or maintained by the development team.

Let's quickly recap the GitLab flow. There are two variations of the GitLab flow. The first variation utilizes long-lived environment branches; in this workflow, changes are promoted through one or more pre-production branches before being merged into the production branch. The second variation utilizes release branches, and it should only be used when releasing software to the outside world, such as an open source project. The GitLab flow is simpler than the Git flow; however, it provides more structure than the GitHub flow.

Hey, what's up everybody, my name is Moss, and welcome back to this tutorial series on GitLab. In the last video I introduced you to the concept of the GitLab flow, and in this video we are going to apply the GitLab flow to the project that we previously created. I talked about two variations of the GitLab flow in the last video, and in this demonstration we're going to be utilizing the environment branches variation. Before we get started, let's quickly review the learning objectives for this module on practicing the GitLab flow. After completing this video you should be able to do the following: sync changes between local and remote Git repositories, create merge requests, demonstrate familiarity with the components of a merge request, and apply the GitLab flow in your own GitLab projects. Now that we've covered the learning objectives, let's get started.

Okay, so I'm on the homepage of my GitLab account. If you haven't already, go ahead and sign into your GitLab account, and once you've done that, we're going to navigate to the GitLab project that we created in a prior video — I called mine moss-test-project — and I'm going to open that project up. You might remember from the previous video that after creating this project we also created an issue within it, so let's navigate back to that issue: it's under the Issues section, then List, and we only have this single issue in the project, called "Modify the project readme", so let's open it up.

I mentioned earlier that the GitLab flow should begin with the creation of an issue. An issue helps define the scope of work that needs to be completed, and the scope of this issue is not well defined right now: it just says "modify the project readme" without any additional information specifying what should be modified in the readme or how it should be modified. So let's edit this issue and update it to be more specific about what updates should be made to the project readme file. To edit, I'll select the pencil, and I'm going to use some markdown syntax that we haven't seen yet: if you want to create a checkbox that you can check with markdown, you can type `- [ ]` (a hyphen, then an open bracket, a space, and a close bracket) followed by text. Temporarily I'll just put "hi", and if I preview, I can see that a checkbox is rendered next to "hi" that I can select. Now, I can't select it right now, because I haven't saved the changes, but you can use this in issues to create tasks that you can check off within the issue.
For this task, I'm going to say that we should introduce ourselves in the project readme, and I'll select "Save changes". GitLab recognizes these checkboxes as subtasks within the issue, and to confirm that, you can see up here it says "0 of 1 task completed"; the single subtask that we have in this issue is just to introduce ourselves in the project readme. So GitLab will actually track how many subtasks have been completed within an issue using these markdown checkboxes. Now, since I'll be working on this issue, I'm going to assign it to myself: I can just go up here, select edit, and then search for a user, but I'm the only user in this project, so I'm going to select myself. Now that we've further defined the issue, let's navigate back to the project homepage.

From the home page we're going to create a new branch. Remember, at the beginning of the video I said that we're going to be using the environment branches variation of the GitLab flow, so to create a new branch I'm going to come down to this drop-down and select "New branch". This branch is being created from the main branch, and we will call it production. Pushing changes to this branch will represent pushing changes to the production environment for our code base — even though this isn't really a code base right now, it's just the readme file. So I'll select "Create branch", and you'll notice that after creating the branch, we're automatically checked out to the production branch — the branch that we just created — so I'm going to switch back to the main branch.

Let's take a look at the branches that we have in the repository now. I'm going to navigate to the Branches subsection, and we have two active branches: the main branch, which is listed as the default branch, and the production branch. In addition to being the default branch, the main branch is also protected, which means that only people in the project with certain permissions can push changes to that branch. Since we're utilizing the environment branches variation of the GitLab flow, I think it's a good idea to also protect the production branch, since it represents our production environment, and I don't think people should be able to push changes to the production branch unless they have a specific role in the project. As it says up here, we can control protected branches in the project settings, so I'm going to click on that, navigate to the project settings, scroll down to the protected branches section, and expand it.

It says that, by default, protected branches restrict who can modify the branch. Under the branch selection I'm going to select the production branch, and then we'll select who is allowed to merge into the production branch — we'll say only people with the Maintainer role can merge into it — and we'll set the same for people who are allowed to push to the production branch: only Maintainers can do so. The last setting lets us allow or disable force pushing to this branch, and it is a best practice to not enable force pushing on shared branches; the only time it might be okay to force push is if you're on an isolated branch and you know that you're the only person working on it. So now let's click "Protect", and that branch should be listed as a protected branch — yep, we can see it listed along with the main branch.
So let's navigate back to the project homepage. Although we could modify the readme file in GitLab, what we're going to do is modify it locally on our machine, so we have to get the project onto our machine, and to do that we have to clone the project. Even though it's called a project, behind the scenes it is a Git repository, and when you want to begin working on a Git repository locally on your machine, you have to clone the repository, which will essentially download the code base to your local machine so you can work on it. We can clone a repository by copying the clone command specified in this drop-down, and we have two options: we can clone with SSH or we can clone with HTTPS. If you remember from an earlier video, we set up SSH keys for our GitLab account, so we won't be using the clone-with-HTTPS method; we'll be using the clone-with-SSH method instead. So I'm going to select this copy URL button, and now that it's copied to my clipboard, I'm going to open up my terminal.

In my terminal I'm going to paste the SSH URL of the repository — of the GitLab project — and preface it with the command `git clone`. The git clone command is what we use to copy a Git repository from the remote source down to our local machine; after `git clone` we just specify the remote repository, so I'm going to hit enter. It looks like it successfully downloaded the project, and it created a directory called moss-test-project: if I do an `ls`, I have moss-test-project, and if I cd into it and list all the files, I have the .git folder — that tells us that we are inside of a Git repository — and we have the single readme file that's in the repository. If I do a `git status`, I'm checked out to the main branch. The reason I'm checked out to the main branch after cloning is because it's the default branch; whatever the default branch is when you clone a Git repository, you will be checked out to that branch.

To begin the GitLab flow, I have to branch off of the main branch into a feature branch, so that's what I'm going to do now. I can use `git branch` to create the branch, or I can use `git checkout` with the `-b` option, which will create the branch and check me out to it in one command, and we'll call the branch readme-introduction. You can see in the output it says we switched to a new branch, readme-introduction, and if I do a `git status`, I can see that I'm on the readme-introduction branch. Now that we're on a feature branch, we can begin making modifications to the code base — in this case we're not modifying code, we're just modifying the readme file. To modify the readme file, I'm going to open it up in vim, but feel free to use whatever editor you prefer. So I'll say `vim` and then the readme file name, and under the first header I'm going to put a subheader — a markdown subheader using two hashtags — and I'm going to call it Introduction. Under the Introduction section, feel free to put whatever you like; for me, I'm going to put my name — so "name: moss" — and then activities I like to do: I like to mountain bike and play tennis. I'm going to put a new line between the subsection and the bulleted list, and now let's save and quit the file.
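Gathered together, the clone-and-branch steps so far look like this; the SSH URL shown is a hypothetical placeholder for whatever your project's actual clone URL is:

```sh
git clone git@gitlab.com:your-namespace/moss-test-project.git  # hypothetical URL
cd moss-test-project
git status                           # cloning checks out the default branch (main)
git checkout -b readme-introduction  # create the feature branch and switch to it
```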
Okay, I'm going to do a `git status`, and we can see our modifications to the readme file are currently in the working directory. I'm going to add and commit the changes to the readme file, so I'll say `git add` followed by the readme file name. Now the changes are staged, so let's commit them: I'm going to say `git commit -m`, so that I can pass the commit message directly at the command line, and the message will be "updating project readme with introduction".

So now, in our local copy of the repository, we have created a new feature branch, and we made a change and committed that change on the feature branch. But neither the branch nor the change we made on it is currently represented in the remote repository: if we were to navigate to GitLab and refresh the page, we would not see the branch that we just created or the changes that we just made. That's because Git is a distributed version control system, and we have to sync changes from our local repository to the remote repository. If we're syncing changes from our local repository to the remote repository, we have to push our changes, and we use the `git push` command to do so; if we're syncing changes in the opposite direction, from the remote repository to our local repository, then we pull changes, using the `git pull` command. Right now there are no changes in the remote repository that aren't in our local repository, so we don't have to do a git pull, but there are changes locally that we should push up to the remote repository, so we're going to use the git push command. Since the readme-introduction branch doesn't exist yet on the remote repository, we have to pass in the `-u` option and specify the name of the remote, which by default is going to be origin — origin is essentially a variable that resolves to the URL of the GitLab project — and after the name of the remote, we specify the branch name. So we're going to say `git push -u origin readme-introduction`: we're pushing the commits that we made from our local branch up to GitLab, creating the readme-introduction branch on the remote, and the `-u` option tells Git to track that new remote branch as the upstream for our local branch. So I'm going to hit enter.
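In other words, the commit-and-publish sequence is — assuming the file is README.md, as GitLab names it when it initializes the repository:

```sh
git add README.md
git commit -m "updating project readme with introduction"
git push -u origin readme-introduction  # create the remote branch and set it as upstream
```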
In the return message it confirms that the changes were pushed up, and it also tells me that I can create a merge request from the branch that was just created in GitLab; you can see down here as well that it specifies there is a new branch. So let's navigate back to GitLab and confirm that. I'll refresh the page, and you can see the top message says that you pushed the readme-introduction branch; in the drop-down list we can now see that branch as well. If I switch to the readme-introduction branch, we should be able to see the changes in the readme file — and you can see, under the project name, we have the Introduction section and the bulleted list with my name and the activities that I like to do. So the changes were pushed up successfully, and from here we can actually create a merge request — GitLab suggests creating one here as well — so I'm going to go ahead and select "Create merge request".

On the merge request creation page, you can see that the title is automatically generated from the latest commit message on the readme-introduction branch; if I open the terminal, you can see where I entered git commit — that was my commit message. I'm going to leave the title as is. The other thing to notice is the branch that we're merging into: it says from readme-introduction into main. So why is the main branch chosen as the branch we're merging into? It's the default branch. Not only does that mean that when we clone the repository we'll automatically be checked out to the main branch, but also that when we open new merge requests, we'll automatically be merging our feature branch into the default branch, which in this case is main. This also follows the guidelines of the GitLab flow: feature branches are supposed to be created off of the main branch, merged back into main, and then promoted through some number of pre-production branches before being merged into the production branch. In the description of the merge request I'll be a little more descriptive about the change: I'm going to say that I updated the readme to include my name and the activities that I like to do.

Now, if we take a look at the fields below the description field, we have an assignee field, and I'm going to assign this merge request to myself. Then we have a reviewer field; in this case I can only assign myself as a reviewer, but ideally you would always want a second pair of eyes to review the code changes that you're submitting in a merge request. In this case, though, I am going to select myself as the reviewer as well. Then we can specify the milestone, if there is one associated with this merge request, and we can also add one or more project labels to the merge request. Labels are used as a way of categorizing changes: for instance, if this merge request were changing the UI of our application, we might have a "UI" project label that we could add to it, as a kind of searchable tag for anyone browsing merge requests in this GitLab project. The last two options are merge options. The first is automatically selected: "Delete source branch when merge request is accepted". We are going to leave that checked, because we're merging a feature branch into the main branch, and feature branches are supposed to be short-lived; they shouldn't be long-lived branches like the main branch and the production branch. The second option, "Squash commits when merge request is accepted", we will leave unchecked, but squashing commits can be a useful feature. Basically, let's say you had a hundred commits, and in a large percentage of those commits you were just fixing small things, like adding a semicolon to the end of a statement or adding a comment here and there; it would be useful to reduce the number of commits, and to do that you can squash two or more commits together into a single commit. But I'm going to leave that option unchecked, and I'm going to select "Create merge request".

Okay, so after the merge request gets created, we're brought to this page with three tabs of information: the Overview tab, the Commits tab, and the Changes tab. In the Overview tab we have a summary of all of the activity that is happening in the merge request; the Overview tab is really where the conversation about these changes will happen.
We can see the description of the merge request, and in addition to that we can see other events, like where I requested a review from myself and assigned this merge request to myself. We can add comments from this page as well, and we can also control the workflow of the merge request on this tab: from the Overview tab we can approve the merge request, and we can also proceed with merging the branch into the main branch. The Approve button is available to me because I'm assigned as a reviewer of this merge request, so I can approve it. But you'll notice that next to it, it says approval is optional: on the free tier of GitLab you can't enforce approval before a branch is merged, but on a higher tier of GitLab there's a setting where you can enforce that one or more approvals have occurred on a merge request before it can be merged — the Merge button would be grayed out if that setting were enabled.

If I scroll up and select the Commits tab, I get a list of all the commits that are included in this merge request, and if I select the Changes tab, I can see a diff of the changes included in this merge request as well. In the diff you see here, we're comparing the main branch with the latest version of the readme-introduction branch; if we had made more than one commit on the readme-introduction branch, we could select an older commit and compare it with the current version of the main branch. It also shows me how many files have changed, how many lines have been added, and how many lines have been removed. In the preferences here I can compare changes inline or in a side-by-side view, where it shows the main branch version of the readme file on one side and the readme-introduction version on the other. So we have this side-by-side diff available to us as well; I'm going to select the inline diff.

Now, a very useful feature for reviewers and participants of a merge request is the ability to comment on each line of code in a diff. If I hover over a line, I get this comment button, and I can add a comment to that line, or — as it says — I can drag to select multiple lines, so I could comment on all of these lines from line three to line six: I can add a new comment for that entire block that we added, or I can add a comment for a single line. So now I'm going to act as my own reviewer: I'm going to request some changes to line six, and I'll ask myself to add an additional activity to the list of activities that I like to do. I can either start a review or just add a comment, and since I'm requesting changes, I'm going to select "Start a review". It says "Submit review", and I'm going to do that.
Now that that review has been submitted, I should be able to see it not only in the diff on the Changes tab but also on the Overview tab as part of the activity. If I scroll down, I can see that I started a thread on the diff, and I can see the comment that I added here. Since I requested changes, I — as the creator of the merge request — should make those requested changes, and when I've done so, I would select "Resolve thread" to indicate to the reviewer that I made them. So I'm going to go ahead and make the changes that were requested on the readme file, and I'm going to make the changes locally, not in the web editor. When we make changes locally and push them after a merge request has already been created for a branch, those changes will be automatically associated with the open merge request for that branch. So let's navigate back to the terminal.

I'm going to open the readme file in vim, and on the activities line I'm going to add one more activity: so I like mountain biking, tennis, and going to the beach. All right, I'll go ahead and save and quit, do a `git status`, and add the changes to the staging area. Then I'll do a git commit — well, actually, let me do a `git status` real quick first; okay, the changes are in the staging area — and then I'll do a `git commit -m` and say "adding third activity to project readme". If I do a `git status`, I'm one commit ahead of the remote branch, and it says to use git push to publish my changes. So I'm going to do that: I'm going to say `git push`, without the `-u` option this time, because I've already created the readme-introduction branch in GitLab.

You'll notice in the output of the git push command that we now get a message from the remote, and it says "View merge request" for readme-introduction. It recognizes that we already have a merge request open for this branch, and it specifies which merge request: merge request number one. If I navigate back to the GitLab interface, we can see that the merge request gets automatically updated with the latest commit that was pushed to the readme-introduction branch. And in the review that I started on line six, GitLab recognizes that I modified this line after the review was started, so it adds a kind of reply to that thread saying that moss changed this line in version two of the diff just now. Since I made the requested changes, I'm going to go ahead and resolve the thread, so I'll select "Resolve thread". Now, if I switch roles to the reviewer, I would just double-check that the changes I requested were added: I would select version two of the diff and review line six again to make sure those changes were added — and it looks like they have been — so I would go back to the Overview page, select Approve, and then merge the merge request.

So I'm going to go ahead and select Merge, and "Delete source branch" is checked, so the readme-introduction branch should be deleted after we select Merge. GitLab then gives us a confirmation saying that the changes were merged successfully and the source branch was deleted, but we do still have the option to cherry-pick our change into a new merge request, or — if we found a regression, for instance, after merging — we can revert the change, which basically means undoing the merge into the main branch. So now that we've merged our first merge request, let's navigate back to the home page. We're checked out to the main branch, and if I scroll down to review the readme file, we can see that the changes that were previously on the readme-introduction branch are now included in the main branch's version of the file.
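For reference, this second round trip was the same pattern as before, minus the `-u` flag, since the upstream branch already exists:

```sh
git add README.md
git commit -m "adding third activity to project readme"
git push        # the remote replies with a link to the open merge request
```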
But remember that since we're using the environment-branches variation of the GitLab flow, when we merge into the main branch, the main branch acts as kind of a pre-production branch, or maybe a staging environment. Our final destination, the final target branch, is the production branch, so we need to make a second merge request to merge the main branch into the production branch.

To create that second merge request, I'm going to navigate to the merge requests section and select New merge request. On this page I have to select the source branch and then the target branch. Our target branch is listed as main right now, but we know that's not right: our target branch should be the production branch, and our source branch should be the main branch. So here we're saying we want to merge main into the production branch, and I'm going to select Compare branches and continue. For the title I'll provide something similar to the original commit message; I'll just say that we updated the readme file with an introduction. I'll leave the description blank, assign the merge request to myself, assign myself as a reviewer as well, and then select Create merge request.

For this merge request I'm just going to directly approve and merge it, but it is important to note that every merge request we create between two branches should trigger automated testing when it's opened, and as changes are promoted up through the pre-production branches and get closer to being merged into the production branch, the scope of testing broadens at each level. So when we have a merge request open from the feature branch into the main branch, our scope of testing will likely be smaller than when we're merging the main branch into the production branch. We'll go ahead and assume that automated testing was completed when this merge request was opened, so I'm going to select Approve and then select Merge.
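As a brief aside: once we get to GitLab CI/CD later in this series, this kind of broadening test scope can be expressed in the pipeline definition itself. Here's a purely hypothetical sketch using the rules keyword and GitLab's predefined merge request variables; the job names and scripts are placeholders, not part of this project:

```yaml
# Hypothetical jobs illustrating test scoping by merge request target.
# Neither job nor script exists in this project.
smoke-tests:
  rules:
    # Runs for merge requests targeting the main (staging) branch.
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"'
  script:
    - ./run-smoke-tests.sh

full-regression-tests:
  rules:
    # Runs for merge requests targeting production, where scope is broadest.
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"'
  script:
    - ./run-full-regression-suite.sh
```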
Merging these changes into the production branch essentially means that we are cutting a release, and to formalize that release we can create a git tag. So I'm going to navigate to the repository section and then go to Tags. Currently we don't have any tags in this repository, so I'm going to select New tag to create one. I'm going to use semantic versioning and just use v1.0 for the tag name, and we're going to create the tag not from the main branch but from the production branch, since it represents our production environment. The tag can optionally have a message associated with it; I'm just going to say this is the first release. Under the release notes I'll simply say that we updated the project readme with an introduction section, and then I'll select Create tag.

Now that we've created this tag, it should also have generated a release in the deployment section. If I navigate to Deployments and then Releases, we will see version 1.0, and it gives us the ability to download assets of the source code. So now we've completed the GitLab flow, but there is one last step that we need to do, and that is to sync our local repository with the remote repository: right now the remote repository has commits that the local repository on my machine doesn't have, so we need to do a git pull from the command line.

I'll navigate to the terminal, and in my project I'm going to first do a git status. When I do, you'll notice that I'm currently checked out to the readme-introduction branch, and that branch no longer exists in the remote repository; it was deleted after we merged it into the main branch. So what I want to do here is first do a git checkout to the main branch, and now that I'm checked out to the main branch, I'm going to do a git pull. The git pull worked successfully, and you'll notice that it also recognized that there was a new tag, so it created that tag in my local repository.

But if I execute the git branch command here, you'll see that the main branch exists, and the readme-introduction branch also still exists locally. We need to delete the local version of that branch, and to do so we can say git branch -d readme-introduction; that deletes the local version of the readme branch. Now, that command deleted the local version of the branch, but my local repository still thinks that there is a remote branch called readme-introduction. If I do git branch --all, it shows me not only the local branches but also the remote branches, which are listed in red, and you can see that the readme-introduction branch is still listed there. To update this list of remote branches, we can pass an option to the git pull command called --prune, which prunes stale remote branches from this list. So I'll hit enter here, and in the output it says that it deleted the readme-introduction branch. If I run the git branch command again, we can see it's no longer listed in the local branches or in the remote branches list.

There is one last thing that we should do before wrapping up the GitLab flow, which I forgot about earlier, and that is to close out the original issue that we created in the GitLab project. Now that we've synced the local repository with the remote repository, let's go ahead and close out that issue. I'm going to navigate back to GitLab, then to the issues section, then List, and then I'll select the issue that I created. I'm going to check the box to indicate that we completed this subtask, which updates the count of tasks completed, and once we've done that, we can go ahead and close this issue. So I'm going to select Close issue, and in addition to closing the issue, I'm also going to mark it as done. Now that we've closed the issue that we created for this change to the readme file, we've come full circle with the GitLab flow. Congratulations on completing your first round of the GitLab flow; in the next video we are going to explore the CI/CD features of GitLab.

Hey, what's up, everybody? My name is Moss, and welcome back to this tutorial series on GitLab. We have familiarized ourselves with the GitLab interface and are now comfortable using the GitLab flow, but we're still not using some of the most important features the GitLab platform offers. In today's world, just about every code base is supported by a continuous integration and continuous delivery/deployment pipeline, so let's take a look at the features related to CI/CD that GitLab offers for its projects. In this video, I'm going to show you how to perform CI/CD in GitLab.

Let's review our learning objectives for this module.
After completing this module, you should be able to do the following: demonstrate an understanding of how GitLab pipelines integrate with a GitLab project; implement GitLab pipelines in your own GitLab projects; write a GitLab pipeline that produces artifacts; write a GitLab pipeline that caches dependencies; write a GitLab pipeline that uses variables; and finally, describe the anatomy of a GitLab pipeline.

Let's define some important GitLab CI/CD terminology. The first is a GitLab pipeline: a pipeline is a top-level component used to define a CI/CD process for a GitLab project, and within a pipeline we can define stages and jobs. Next we have jobs: jobs are associated with stages within a pipeline, and they define the actual steps to be executed, such as running commands to compile code. Then we have stages: pipeline stages define the chronological order of jobs. And finally we have GitLab runners: GitLab Runner is an open-source application that executes the instructions defined within jobs. It's a program that can be installed on your local machine, a cloud server, or on-prem; shared runners are hosted by GitLab. Now that we've defined these concepts, let's get started.

OK, so the first thing that I want to do is quickly walk through the code base that we're going to be using. I've created this GitLab project called sample maven project, and it's a very simple Maven code base. It's actually based off of a code base that you can automatically generate from a tutorial by Maven: if you simply search "maven in five minutes" in your browser, the first link, Maven in 5 Minutes, has a tutorial where you can generate the same project. Under the section on creating a project, I generated this code base using this command, which creates a project from the maven-archetype-quickstart archetype. So let's take a look at that code base in GitLab.

If I exit out of the tutorial, then in the root directory of the GitLab project I'll go into source. We have a pom file and then the source directory, which is separated out into the test directory and the main directory. The main directory contains a single class called App, and inside of the App class it simply prints "hello world" to the console. Then, if I go back to the test directory, we have a single test class called AppTest, and if I open that up, we have a JUnit test class with a single test case. This test case asserts true on a condition that is always true, so it will always pass unless we were to change the condition to false. And that's pretty much all there is to this code base; it's very straightforward. So let's navigate back to the project home page.

To define a CI/CD pipeline for a GitLab project, you need to create a YAML file at the root directory of your project, and its name should be .gitlab-ci.yml. There are two ways you can create that file: you can either create it from the dropdown here, where you could just create a new file and call it .gitlab-ci.yml, or you can create it from the CI/CD section, which has a dedicated page for creating and editing GitLab pipelines. I think the best method for creating pipelines is using the pipeline editor, and I'll show you why, so I'm going to select Editor. This will create a pipeline on the main branch, and it says to create a new .gitlab-ci.yml file at the root of the repository to get started.
I can create a new CI/CD pipeline by clicking this button, so I'm going to select it. On the right-hand side we have a help section for getting started with GitLab CI/CD. I'm going to close this out, but before I do, I'm going to open this link for viewing the CI/CD syntax reference document that GitLab provides; I'll open it in a new tab because we may refer to it later on. After I've opened that, I'm going to go ahead and collapse the help section.

When we open the editor for the first time, GitLab automatically fills out this template pipeline definition for a very basic pipeline, and in the pipeline definition there are three stages defined: here's the stages statement, and under that statement we have a build stage, a test stage, and a deploy stage. This isn't the only pipeline template that GitLab offers. You can see up here it says Browse templates, and if I open the pipeline template section in a new tab and scroll down, you can see a list of CI/CD templates based on your technology stack or the framework you're using; there's a template for just about every technology you can think of: Go templates, Flutter templates, Gradle, and there's even a Maven template for Maven projects. But we aren't going to use that template, and we're not going to use the template defined here either. The pipeline definition we'll write is going to be from scratch, although it will closely match this structure, where we have a build, test, and deploy stage.

Under the stages definition, the template defines the jobs, and the first job here is the build job. You don't necessarily have to follow this naming convention (you could just call it build if you wanted to), but adding "-job" to the name does make things clear. To associate a job with a particular stage, we use the stage keyword, whose value is the name of the stage that we want the job associated with. After associating the job with a particular stage, we can specify a shell script that will be executed by the GitLab runner. If I hover over any of these keywords, the editor gives me a definition of the keyword; you can see here for script that these are shell scripts executed by the GitLab runner, and script is the only required property of a job. Under the script statement, the template has two echo statements that pretend to compile a code base. Below the build job, the template defines a unit-test job, which is associated with the test stage and pretends to run unit tests, and then a lint-test job that is also associated with the test stage, so there are two jobs associated with the test stage. The last job is the deploy job, which is associated with the deploy stage.
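For reference, the generated template looks roughly like the following. I'm reproducing it from memory, so treat it as a sketch; the exact echo text and comments in your editor may differ slightly:

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the code..."
    - echo "Compile complete."

unit-test-job:
  stage: test
  script:
    - echo "Running unit tests..."

lint-test-job:
  stage: test
  script:
    - echo "Linting code..."

deploy-job:
  stage: deploy
  script:
    - echo "Deploying application..."
```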
One thing I want you to notice about using the pipeline editor is that at the top it says this GitLab CI configuration is valid. One of the benefits of the editor is that while we're editing the pipeline, GitLab is checking and validating the pipeline syntax and configuration to make sure it is actually going to run. In addition, we also get these tabs here: the Visualize, Lint, and View merged YAML tabs. If we look at the Visualize tab, it gives us a graphical version of our pipeline, so now you can see that the build job is associated with the build stage, the test jobs (the unit-test and lint-test jobs) are associated with the test stage, and the deploy job is associated with the deploy stage. By default, when multiple jobs are associated with a single stage, those jobs will be executed in parallel.

Then, on the Lint tab, we get some information on the parameters and properties of our pipeline definition. Here it also says that the syntax is correct and the configuration is valid, and for each of the parameters in the pipeline it lists some associated properties. You can see that all of the jobs are listed, and under each job's value we have these properties. For this first property we have the only policy, and its value is branches and tags. This policy is referring to the only keyword that you can use in a pipeline definition: if we take a look at the keyword reference, we have the only and except keywords, and you can use only and except with four other keywords: refs (git references), pipeline variables, changes, and also Kubernetes. These keywords control whether a job will run in a pipeline. If I use refs, for instance, I can specify particular git references, meaning: only run this job when the git reference matches, for example only run this job on a particular branch like the main branch or a feature branch. The reference can be a regular expression as well, so it can match, say, any branch that starts with a Jira issue ID.

Going back to the pipeline definition, if we take a look at the value here, branches and tags, it means this job will run when the git reference for the pipeline is a git branch or a git tag. Below the only policy we have the when keyword, and its value is on_success. The when keyword specifies a condition for when to run this job: only on success of previous stages, so this job is only going to run if prior stages have passed successfully; if an earlier stage has failed, this job will not run. This is the default value for this condition, which is why we didn't have to explicitly set the only keyword or the when keyword in the pipeline definition: by default, jobs only run when prior jobs have passed successfully, unless you explicitly change that condition.
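To make that concrete, here's a sketch with those defaults written out explicitly, plus a hypothetical job showing the refs-pattern idea from above; nothing here needs to be added to our file:

```yaml
build-job:
  stage: build
  # Shown only to make the defaults visible; omitting these two
  # keywords gives exactly the same behavior.
  only:
    - branches
    - tags
  when: on_success
  script:
    - echo "Compiling the code..."

# Hypothetical job restricted by git reference: it runs only for
# branches whose names start with a Jira-style issue ID.
feature-branch-job:
  stage: test
  only:
    refs:
      - /^JIRA-\d+/
  script:
    - echo "Runs only on branches matching the pattern"
```

One note: in current GitLab versions, the rules keyword is the recommended successor to only and except, but the defaults shown here are still what you get when you write none of them.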
Then, on the last tab, we can view the merged YAML file; this is just a view-only version of the YAML file without the comments. I'm going to navigate back to the Edit tab.

OK, so let's start editing this template so that it works for our Maven project. I'm going to delete all of these comments. Like I said before, our pipeline definition is going to be similar to this one, so I'm going to keep the stages definition, and I'll keep these jobs as well, but I'm going to clean things up a little bit: I'll delete the extra echo statement, and we'll keep the lint-test job and the unit-test job.

Now we can start adding our own stuff to this pipeline definition, and the first thing that I want to specify is the Docker image that the GitLab runners should use when they execute jobs in this pipeline. If your project has dependencies that need to be installed in order to compile the code base and run tests on it, you can specify a Docker image that includes those dependencies, and the GitLab runner will use that image to execute all of the steps defined within your jobs. Specifying the image is very simple: at the top, I'm going to use the image keyword, and the value I provide is the name of the Docker image that we want the runner to use. In this case we're going to use the Maven image, which includes all the dependencies we're going to need in order to compile, test, and run our Maven project. After specifying the Maven image name, I'm going to specify the version; in this case I'm just going to say latest, so I'm using the latest Docker image for Maven.

Now that we've defined the Docker image that should be used when running these jobs, the next thing I want to define are variables. We can define pipeline variables inside our pipeline definition using the variables keyword. What should our variables be? We're going to have two. The first is the Maven CLI options variable, and the only option we're going to pass to Maven is --batch-mode, which runs Maven in a non-interactive mode so that it won't prompt for user input. Now that I've defined this variable, I can reference it in any of the shell scripts that I define in any job in the pipeline definition. The second variable is the Maven options variable, and we're only going to pass in one option, the location of the local repository: I'm going to say -Dmaven.repo.local= and then reference a predefined environment variable called CI_PROJECT_DIR. We don't have to explicitly define the value of this environment variable because it's predefined by GitLab. Within the project directory we specify the .m2 folder and then repository. The GitLab configuration validator is complaining about a couple of things I've done here: if I hover over the yellow squiggly line, it says "cannot read an implicit mapping pair." The first problem is an extra quotation mark, but I've also used tabs, and I should be using two spaces, so I'll fix the indentation.

Now that we've defined those variables, the next thing I want to do is utilize GitLab's built-in caching feature for its pipelines. We can cache artifacts of a pipeline run to speed up future runs, and to do that we simply need to specify the cache keyword, followed by the paths within the project that we would like to cache. In our case it's really important to cache the Maven dependencies, because otherwise they will be re-downloaded for each job that runs. So below the paths keyword we specify the path that we want to cache, which is going to be .m2/repository.

Now that we're caching the Maven dependencies, we can move on to the build job. Under the build job I'm going to leave this echo statement, but I'm going to add one more shell script: here I invoke Maven with the CLI options, referencing the variable that we defined, and then use the compile command. That's all we need for the build job.
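So far, then, the top of our file looks roughly like this. I'm assuming the spellings MAVEN_CLI_OPTS and MAVEN_OPTS for the two variable names spoken in the video; MAVEN_OPTS is special in that Maven reads it from the environment automatically, which is why we never pass it explicitly on the command line:

```yaml
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "--batch-mode"
  # Maven picks this up from the environment; it points the local
  # repository at a folder inside the project so it can be cached.
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

cache:
  paths:
    - .m2/repository

build-job:
  stage: build
  script:
    - echo "Compiling the code..."
    - mvn $MAVEN_CLI_OPTS compile
```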
Next, I'll scroll down to the unit-test job, and again I'm going to add one more shell statement: this invokes Maven with the Maven CLI options again, followed by the test command. Now, one thing that is really cool is that GitLab can actually render JUnit test reports; all we have to do is generate those test reports as artifacts. So under the script statement, I'm going to specify the artifacts keyword. When we use the artifacts keyword, we can specify paths containing artifacts that we would like to persist beyond the length of the job: after the pipeline has finished, any items on the paths specified as containing artifacts will be persisted, so we can access those artifacts and download them from GitLab. In this case, we're storing JUnit reports. The first thing I want to specify is when these artifacts should be generated, and I'm going to say they should always be generated. Under the when condition I specify the reports keyword, under reports I specify junit, and under the junit keyword we can specify the path to our JUnit reports: that's under the target directory, surefire-reports, and then any file matching TEST-*.xml. I'll also add the equivalent path for the failsafe reports. Now that we're generating those JUnit reports as artifacts, we'll be able to download them after the pipeline finishes, and those reports will also be rendered within the GitLab UI, so you'll see what that looks like once we run this pipeline.

After the unit-test job there is the lint-test job, and it's not doing anything, but I am going to leave it there so that you can see it run in parallel. After the lint-test job we have the deploy job. For the deploy job we won't actually deploy the application to some remote server; instead we'll use Maven package and then run the application locally. So under the echo statement, we invoke Maven with our CLI options and the package command, and finally, after invoking package, we run the program using Maven: we reference the CLI options and then say exec:java with -Dexec.mainClass set to com.mycompany.app.App, the App class. When this command is run, we should see "hello world" printed to the console of the job.

The last thing that I want to add to the deploy job is the environment keyword: we're going to give the deploy job an environment, and it will be the staging environment. So this deploy job is going to, quote unquote, deploy our Maven application to the staging environment, and what that does is create a deployment under the environments page in the deployments section; we'll take a look at that after the pipeline has run. Other than that, I think that's pretty much everything we need for this pipeline.
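Putting the rest of the file together, the test and deploy jobs come out roughly like this sketch (same caveat as before: the names are my rendering of what's said in the video):

```yaml
unit-test-job:
  stage: test
  script:
    - echo "Running unit tests..."
    - mvn $MAVEN_CLI_OPTS test
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
        - target/failsafe-reports/TEST-*.xml

lint-test-job:
  stage: test
  script:
    - echo "Linting code..."

deploy-job:
  stage: deploy
  script:
    - echo "Deploying application..."
    - mvn $MAVEN_CLI_OPTS package
    # exec:java resolves to the exec-maven-plugin; the main class
    # matches the maven-archetype-quickstart layout.
    - mvn $MAVEN_CLI_OPTS exec:java -Dexec.mainClass=com.mycompany.app.App
  environment: staging
```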
All we need to do now is commit our changes to the file, and as soon as we make a new commit, the pipeline will run, so I'm going to select Commit changes. It says that it's currently checking the pipeline status, and in a little bit a link to the running pipeline should show up. There we have the link to the running pipeline, so I'm going to go ahead and select it and open it up in a new tab.

When we navigate to the pipeline tab, we can see the status of each job in each stage: under the test stage, both test jobs are currently running in parallel, and the build job has already completed successfully. Now that both of the test jobs have completed successfully, the deploy job has begun. This pipeline run should take a little bit longer, because on the first run the cache is empty (we haven't cached any of the Maven dependencies yet), so after this run the pipeline should be a lot quicker. And now it looks like the deploy job has completed successfully, so the whole pipeline has completed successfully. Let's take a look at each one of these jobs.

I'm going to open up the build job in a new tab. When we open up a job, we get the console output of all of the commands that we defined in the job definition. Up at the top we can see that the job begins running with GitLab Runner, and after that you can see that the runner is pulling the latest Maven Docker image so that it can run the job within that image. It's really convenient that next to each of these statements you get the duration of that particular command, and in the top right you can also see the total duration of the job, which was 44 seconds. The timeout period is set to one hour, which simply means that if the job is still running after one hour, it will automatically be cancelled; maybe the job is hanging for some unknown reason, and you don't want to spend all of your runner resources on a job that's hanging.

If we scroll down further in the console, we can see that the runner clones down the code base, and then on line 30 it checks the cache for "default" and reports "fatal: file does not exist," meaning there is no cache, because this is the first run of the pipeline. On line 36 we can see the first shell statement of our build job, where we echo that we're compiling the code and invoke Maven compile with the CLI options, and starting at line 44 you can see where Maven begins downloading all of the Maven dependencies. I'll scroll down past the dependency downloads: after downloading, it compiles the module, and we can see that it was a successful build and took a total of eight seconds. Since the job was successful, you can see at line 393 that it's saving a new cache, and on line 395 we can see that it caches the .m2/repository directory and then uploads that zip file back to GitLab.

Now that we've seen the build job, let's take a look at the unit-test job (not the lint-test job), so I'll open that in a new tab. If I use this button to scroll to the top, we can see roughly the same output as in the build job, where it prepares the Maven Docker image to execute our Maven commands. Starting on line 36, our unit-test job begins: we invoke Maven with our CLI options and the test command, it downloads the necessary dependencies, and if I scroll down below that, we can see where the tests actually get executed. One test ran and there were zero failures, so we get a successful build.
Below that, you can see it save the cache for the successful job, again caching the .m2/repository directory. Below the cache creation, starting at line 211, you can see it uploading the artifacts that we generated, which, remember, in the pipeline definition were the JUnit test reports. It checks the failsafe-reports directory and says there were no matching files there, but it did find a matching file for the surefire test reports on line 211, and it uploaded those artifacts as a JUnit test report. We'll take a look at those test reports in a little bit, but first I'm going to go back to the pipeline; let's open up the deploy job and quickly walk through it as well.

I'll scroll up using the scroll-to-top button, and again we have very similar output to the previous jobs: it clones down the code base, recognizes that there is a cache, downloads and extracts that cache, and then invokes the Maven package command. It downloads the necessary dependencies, and if I scroll down, you can see it running the tests again; scrolling further, it packages the program as a jar file and then runs the exec command on the main class. It downloads a few more dependencies, and at the bottom you can see on line 250 that it actually runs the App class and prints "hello world" to the console. Finally, on line 258, it creates a new cache, and then the job finishes.

Now that we've looked at the job output, let's navigate back to the pipeline. On the pipeline page we can see several tabs: the Pipeline tab with the overall graphical view, but also this Tests tab showing that there is one test. If I open the Tests tab, we get a summary of the tests that passed; this is generated from the JUnit reports, and it shows us which jobs are associated with those tests. If I wanted to download these test reports, I could simply go back to the pipelines page, which shows a list of all the pipelines that have run. Next to the pipeline we just ran, over on the right, there's an artifacts dropdown where I can download the artifacts generated by the pipeline; in this case there was only one set of artifacts, the JUnit test reports.

In addition to the artifacts, I also specified an environment in the deploy job, and that should have shown up under deployments. If I go to Deployments and then Environments, we can see that there is one environment, the staging environment, with one deployment to it, and it shows the job associated with that deployment as well as the commit. If I wanted to see a list of all of the jobs that have run, I can go back to the CI/CD section and select Jobs, which shows a list of all the jobs that have run in the past. You'll also notice that in the CI/CD section we can create schedules for our pipeline: if I select New schedule, I can specify an interval pattern or a cron pattern to run our pipeline, maybe every day or every week. That's another useful feature we can use with our GitLab pipelines.
Now, a few more things that I'd like to show you are in the settings. If I go to Settings and then CI/CD, we can update various settings related to our pipeline. If I expand the general pipeline configuration, I can specify a custom path for the pipeline definition within the project, and I can also change the timeout period, after which jobs are failed if they run longer. Below the general settings we have Auto DevOps: if you turn this on, GitLab will attempt to automatically set up a pipeline based on your project's technology, whether it's a Maven project, a Go project, or something like that.

Under Auto DevOps we can configure GitLab runners. So far I've been using shared runners, which are hosted by GitLab, and under shared runners I can see all of the runners available for me to use. However, if I didn't want to use shared runners and instead wanted to host the GitLab Runner program on my own infrastructure, I could register my own runners with this GitLab project, so that in the pipeline definition I could assign jobs to those runners hosted on my infrastructure.

Under the runners configuration we can also define pipeline variables at the project level. If I expand the variables section, I can create new variables, and the variables I create can be protected and masked: protected variables are only exposed to protected branches or tags, and masked variables are hidden in job logs. This is useful in the event that you have environment variables holding passwords. For instance, if you're trying to upload an artifact to Artifactory, you need to authenticate with your Artifactory instance, which requires the username and password for the Artifactory user; in that case you can use a masked variable for the Artifactory password. If I select Add variable, I can specify the key of the variable, like ARTIFACTORY_PASSWORD, and then the value of the password; I can specify the type and the environment scope, protect it, and also mask it. Once I've added that variable, you can see that the value isn't visible in the UI, but if I need to, I can still reveal it using this button.
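As a hypothetical sketch of how such a variable gets consumed (the job below and the -Drepo.password property are illustrative, not part of our project):

```yaml
# ARTIFACTORY_PASSWORD here stands for a masked variable created under
# Settings > CI/CD > Variables; it never appears in this file.
publish-job:
  stage: deploy
  script:
    # The value expands like any other environment variable, and GitLab
    # masks it if it would otherwise be printed in the job log.
    - mvn $MAVEN_CLI_OPTS deploy -Drepo.password=$ARTIFACTORY_PASSWORD
```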
The last thing that I'd like to mention are pipeline triggers. You can trigger a pipeline for a branch or a tag by using a trigger token. A sample use case might be triggering a pipeline to run via a Slack command: if you had a Slack bot, you could provide the token to the bot so that it can communicate with the GitLab API and trigger this pipeline to run. So I'm going to navigate back to the CI/CD pipeline section, and I think that wraps up everything that I wanted to cover in this particular module. I hope you enjoyed this video, and I will see you in the next one.

Hey, what's up, everybody? My name is Moss, and welcome back to this tutorial series on GitLab. We've introduced you to GitLab pipelines, but you may be in a situation where your team is already using Jenkins pipelines to facilitate CI/CD. This means that you already have pipelines defined, and you need to convert those Jenkins pipelines into their GitLab pipeline equivalents while taking advantage of GitLab's most useful CI/CD features. Before migrating any of your pipelines, you'll need to understand some key differences in how these tools facilitate CI/CD. In this video, I'll show you how to migrate to GitLab CI/CD from Jenkins pipelines.

Let's quickly review our learning objectives. After completing this video, you should be able to do the following: explain the differences between Jenkins pipelines and GitLab CI/CD; migrate your own Jenkins pipelines to GitLab pipelines; and use GitLab pipelines to run tests on the LambdaTest Selenium automation grid. There are a couple of prerequisites you will need before starting the tutorial: the first is familiarity with Jenkins pipelines and terminology, and you will also need a LambdaTest account.

Let's begin the tutorial by performing a comparison of terminology and concepts between the two tools. We'll define a mapping of terms between declarative Jenkins pipelines and GitLab CI/CD, referring to Jenkins terms and drawing a mapping from each term to its equivalent in GitLab. The first is the agent section of a Jenkins pipeline, which specifies which Jenkins agents should execute the pipeline. In GitLab we use what are known as GitLab runners, which execute jobs in a GitLab pipeline. Jenkins agents and GitLab runners do have differences, but they serve very similar roles in their respective tools: both are software applications that execute tasks defined in a pipeline.

Next is the Jenkins stages section. Jenkins and GitLab share the concept of stages in their pipelines: in both tools, stages define the chronological order of a pipeline's execution. However, in GitLab, stages are enumerated in a list at the top of the pipeline definition. We then have the steps section of a Jenkins pipeline, which allows you to define commands to be executed by the Jenkins agent; in GitLab we use the equivalent script section.

There are also two important Jenkins directives which you have likely used in the past. The first is the environment directive, which allows you to define environment variables that will be available during pipeline runtime; the variables keyword in GitLab provides equivalent functionality. The last one is the Jenkins tools directive, which allows you to install tools that are necessary to execute the pipeline steps. There is no equivalent in GitLab for the tools directive; instead, GitLab recommends using pre-built Docker container images. Now that we've defined this mapping between the tools, let's move on to the next part of the tutorial.
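Before we do, here is the whole mapping condensed into one minimal, illustrative GitLab snippet, annotated with the Jenkins section each keyword stands in for (the job name, image, and variable are placeholders):

```yaml
image: maven:latest     # plays the role of Jenkins' agent/tools sections:
                        # a container with the needed tools pre-installed

variables:              # ~ Jenkins' environment directive
  MAVEN_CLI_OPTS: "--batch-mode"

stages:                 # ~ Jenkins' stages section, listed up front
  - test

test-job:
  stage: test
  script:               # ~ Jenkins' steps section
    - mvn $MAVEN_CLI_OPTS test
```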

OK, so the first thing that I want to do is walk through the code base that we're going to be using, and once we've reviewed it, we'll walk through the Jenkins pipeline that tests this code base. After walking through the Jenkins pipeline code, we'll then write an equivalent GitLab pipeline together. This will demonstrate a simple migration from a Jenkins pipeline to a GitLab pipeline, and hopefully it will highlight some of the key differences that I mentioned during the presentation. In the pipeline that we write, we'll utilize the LambdaTest Selenium automation grid in order to execute the tests defined in this code base. So let's start walking through the code base.

I'm in GitLab, and I'm currently inside of the project, which is called automation demo. Automation demo is a Maven project that uses TestNG as its testing framework, and what you'll notice under the source directory is that there is only a test directory; there is no main directory. That's because there's no application in this code base: it contains only a single Java test class, and that class tests a public website out on the internet. So let's navigate into the test directory and take a look at the test class.

We have a single test class called AutomationDemoTest.java, so I'll open up that file. In this test class we have three methods: the first is the test setup method, then we have a single test method called testElementAddition, and then we have a teardown method. The test setup method is parameterized, and it runs before our single test method. The first thing the setup method does is set the desired capabilities, and the capabilities are parameterized: we get these parameters from the TestNG XML configuration. After setting the desired capabilities, it then creates a new RemoteWebDriver, using the LambdaTest hub URL.

These capabilities are parameterized, but if you wanted to, you could use the capabilities generator provided by LambdaTest to automatically generate the code that sets the desired capabilities. I have the capabilities generator already pulled up in a tab, but you can search for "lambda test capabilities generator" and it should come up as the first result. From this page I can select which language I want the capabilities generated in (in our case Java), then select Selenium, and then I could just copy this code into my test class.

I'll navigate back to the test class, and under where the capabilities are defined, you can see that it creates a new RemoteWebDriver; here it builds the hub URL by concatenating in the username and access key variables, which are defined up here. My username for my LambdaTest account is techwithmoss, and I have my account access key defined in an environment variable called LT_ACCESS_TOKEN. Now, you might be wondering where this hub URL comes from. We can actually retrieve the full URL, which includes the username and access key, from the capabilities generator: if I navigate back to the generator, you can see on the right-hand side here an access key button, and if I click that, we have the username, the access key, and the full hub URL including both. I could copy that to my clipboard and use it here with everything hard-coded.

Once the test setup method completes, we run the single test method, testElementAddition, which uses the driver to connect to the test URL, and the test URL is just todomvc.com. If I navigate to this URL and open it up in a new tab, you can see it's a very simple to-do list application: if I type "todo one" and hit enter, it adds that as an item in the to-do list, then I can check each item to mark it as completed, and I can clear those items.
If I scroll back to our test method: after connecting to the test URL, the test method adds five items to the to-do list; you can see here in the for loop that it adds five items. After it adds those items, it iterates over the list, and for each item it marks it as complete and then deletes it from the list. Once our test method completes, the teardown method is called, and it quits the driver. So now that we've walked through the test class, let's navigate back to the repository.

Real quickly, I want to show the TestNG XML file so that we can see how our tests are configured. Inside the XML file, you can see that there are three test scenarios defined: in the first test scenario we're using Chrome on Windows, in the second scenario we're using Firefox on Windows, and in the last one we're using Microsoft Edge on macOS. So now let's navigate back to the repository.

You can see in the repository that I have a Jenkinsfile defined, a Jenkins pipeline, so let's take a look at it. This is a pretty simple pipeline: it has a single stage, the test stage, and it begins by specifying that it can run on any available Jenkins agent. After the agent section, it defines an environment section, in which we define two environment variables: the LambdaTest username, which is my LambdaTest account username, and the Maven CLI options, where we run Maven in batch mode and specify the TestNG XML configuration file. Inside of the test stage we have the steps section, and inside of steps we invoke the Pipeline Maven Integration plugin using the withMaven statement. With that invocation we specify the Maven installation, and we also specify some additional options; in this case we specify the JUnit publisher, so this is also using the JUnit Jenkins plugin in order to publish JUnit reports visually in the Jenkins UI. Inside the withMaven statement we invoke withCredentials and specify a Jenkins credential called "lambda test access token": I created a Jenkins credential to store my LambdaTest account access token, and we're making that access token value available as an environment variable called LT_ACCESS_TOKEN. Then, in the innermost statement, we invoke sh and call Maven with our Maven CLI options and the test command.

If I navigate to Jenkins, I actually ran this pipeline already. Taking a look at the console output, you can see where it invokes the Pipeline Maven Integration plugin, masks the environment variable LT_ACCESS_TOKEN, and then calls Maven with those CLI options and the test command. If I scroll down, we can see where the tests begin: it makes a connection to the remote web driver, runs those tests on the LambdaTest Selenium automation grid, and finishes successfully. After the build finishes, you can see where the Pipeline Maven Integration plugin then publishes the JUnit reports, and if we navigate back to the build page, you can see that we can review the test results of the pipeline. If I navigate into the test class, we have the same test method, but it was executed three times, once for each of our test scenarios, and if I navigate to the pipeline page, we also get the test result trend graph here as well; in this case we only have one result.

OK, so now let's navigate back to GitLab. We've walked through the Jenkins pipeline, so now let's write the equivalent GitLab pipeline.
From here I'm going to navigate to the CI/CD section of the repository, open up the pipeline editor in a new tab, and select Create new CI/CD pipeline. We'll start with this basic template, but we're going to delete the majority of it: we'll only have one stage, so I'm going to delete these comments and leave only the test stage, deleting the build stage and the deploy stage, and we'll leave one test job.

Now, when the GitLab runner executes this test job, it's going to invoke Maven commands, so Maven needs to be available somehow to the GitLab runner. In the case of Jenkins, I installed Maven on the Jenkins controller node, so when we invoked the Pipeline Maven Integration plugin, we were able to reference a specific Maven installation that the plugin could use to execute Maven commands. But as I mentioned in the presentation, GitLab recommends using pre-built Docker container images that have all of your dependencies already installed, and we can specify a Docker image in the pipeline that a GitLab runner should use to execute the script section of a particular job. Not only can we specify a default image at the top level of the pipeline, we can also specify Docker images per job: if a particular job needs a specific Docker image, we can specify it under that job's definition, and that will override any image defined as the default at the top level of the pipeline. In our case we're going to use the default image throughout the whole pipeline. To define an image, we use the image keyword and then specify the name of the image as well as the tag; we're going to use the Maven image with the latest tag.

After specifying the Docker image, we need to replicate the environment directive from our Jenkins pipeline. If we go back to the Jenkins pipeline, we have this environment directive where we define two variables, the username and the Maven CLI options, and we need to replicate that in our pipeline. To do that we can use the variables keyword: following the image definition, I will use the variables keyword, and under variables we'll first define the username environment variable. After the username variable we have the Maven CLI options: here we're going to specify batch mode, so Maven runs in a non-interactive mode and doesn't prompt the user for input, and then we'll specify the TestNG XML file. Finally, we'll specify a Maven options variable, and here we're going to set the local repository location: -Dmaven.repo.local= followed by the predefined environment variable CI_PROJECT_DIR, then .m2/repository.

Similar to the last video, we need to cache the dependencies that Maven downloads when running Maven commands. The pipeline runs in a fresh environment each time, and we don't want it to re-download all of the Maven dependencies, so we want to make sure we cache them. This is a difference between the GitLab pipeline and the Jenkins pipeline: the Jenkins pipeline has a shared workspace that persists between pipeline runs unless we explicitly tell it to use a fresh workspace. So below the variables section I'm going to use the cache keyword, and below cache I'll specify the file paths that I want to cache.
I'm going to use the paths keyword, and below paths I will specify .m2/repository. Now that we're caching the Maven dependencies, let's move to the test job. I'm going to remove these scripts, and I'll also remove these comments. Under the script section we're going to invoke Maven: I'll say mvn, then reference our Maven CLI options variable with a dollar sign, and then the test command. That's everything we need for the script section, but we also want to produce JUnit reports, and we can generate those reports using the artifacts keyword. Below the script section I'm going to use the artifacts keyword; under artifacts we'll first specify the when condition for when these artifacts should be produced, and we're going to say always. Under the when condition we'll specify what kind of artifact, which will be reports, and under reports we'll specify junit as the type of report. Finally, under the junit keyword, we specify the path to our surefire reports: target, then surefire-reports, then any file matching TEST-*.xml. That's pretty much all we need inside of the pipeline definition.
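Assembled, the migrated pipeline comes out roughly like this. I'm assuming the variable spellings LT_USERNAME, MAVEN_CLI_OPTS, and MAVEN_OPTS, and I'm assuming the TestNG suite file is passed through Surefire's suiteXmlFiles property; the exact flag and spellings in the project may differ:

```yaml
image: maven:latest

variables:
  # Assumed rendering of the account username mentioned in the video.
  LT_USERNAME: "techwithmoss"
  # Assumed flag for selecting the TestNG suite; adjust to match the project.
  MAVEN_CLI_OPTS: "--batch-mode -Dsurefire.suiteXmlFiles=testng.xml"
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

cache:
  paths:
    - .m2/repository

stages:
  - test

test-job:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
```

Note that the LambdaTest access token deliberately does not appear in this file; as described next, it belongs in the project's CI/CD settings so it never gets checked into version control.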
we specify are uploaded to gitlab by the  gitlab runner and by default those artifacts are   downloaded by downstream jobs so this default  behavior is different than in jenkins pipelines   where in a jenkins pipeline you have a jenkins  agent running all stages of the pipeline in a   shared workspace unless we explicitly state in  the pipeline to use different jenkins agents for   each of the pipeline stages and there is a nice  example of the artifacts keyword usage in the   gitlab documentation that i’d like to quickly show  so if we google search artifacts gitlab keyword and i’ll select the artifacts keyword if  i scroll down to this code snippet you can   see part of a pipeline definition here and  we have four pipeline jobs defined we have   two build jobs for two different platforms and  two test jobs for each of those platforms so for   the osx build it produces a set of binaries and  you can see here how the artifacts keyword is used   under artifacts they specify paths to produces  artifacts and then the binaries directory   and we have the same definition for the linux  platform build as well and then if we take a   look at the test jobs what you’ll notice is this  dependencies keyword so the default behavior is   all artifacts that were produced in previous  upstream stages will automatically be downloaded   by a job so for instance in the deploy stage here  this is going to download all of the artifacts   that were previously produced however if a  job only needs a subset of artifacts that   were produced in prior stages then what you  can do is specify the dependencies keyword   and so in the case of testing the osx platform all  we need are the binaries from the osx build job so   under the dependencies keyword we specify the uh  the job name that produced those artifacts and   you can see that the same thing happens for the  test job for the linux platform the dependencies   uh that are specified here are the artifacts from  the build job for the linux platform so i did   want to quickly detour because i do think this  is a very important difference between jenkins   pipelines and gitlab pipelines and it’s a use case  that you’ll likely run into if you’re if you need   to you know compile your application and even if  you didn’t have to compile your application you’re   likely producing artifacts in an upstream stage  that need to be shared with downstream uh stages   so let’s navigate back to our gitlab  pipeline and i’ll remove this build sage   since we won’t be using that and let’s  quickly do a side-by-side comparison   of our gitlab pipeline in our jenkins pipeline  just to make sure that we’ve covered everything   that was defined in the jenkins pipeline so i’ll  pull this jenkins pipeline out into a new window okay so starting from the top we have the top level  pipeline definition then we have uh agent any   so this can run on any jenkins agent uh here we  don’t have an agent section we’re just specifying   the image that should be used by  the gitlab runner when executing   the commands defined in our script  section and then for the environment   directive in our jenkins pipeline we have the  equivalent of variables keyword to define our   environment variables in the pipeline in the  jenkins pipeline we don’t need to specify   the maven options variable we’re using the  pipeline maven integration plugin here and then   we also have the cache statement in our  gitlab pipeline which we don’t have to specify   in the jenkins pipeline because it’s using you  know a 
shared workspace across the whole pipeline   in between pipeline runs so unless we clear that  workspace we won’t have to re-download the maven   dependencies every time we run the jenkins  pipeline so in gitlab we do need to specify   the cache keyword so that we’re not re-downloading  those maven dependencies on each run then we have   the stages definition in our jenkins pipeline  with just the single test stage here we have   the stages definition and then a single test  job that is associated with the test stage   and then in the step section we  invoke with maven and we specify the   maven installation that we want to use and we say  that we want to also utilize the junit publisher   and then we invoke the with credentials  function to specify the access token credentials   as an environment variable and  here in the gitlab pipeline   since we’re using the docker image the maven  docker image we don’t have to specify um you   know with maven or anything like that or a maven  installation this is going to be ran inside of the   maven uh docker container so here we invoke  uh the maven test command with our cli options   and finally we produce the artifacts from the  test job which in this case are junit test reports   here we’re specifying the junit publisher  using the pipeline maven integration plugin   now i do want to mention that  i wrote this jenkins pipeline   this way because we are using a maven project and  since we’re using a maven project it’s best to use   the pipeline maven integration plug-in but even  without the plug-in we could still do the same   thing that we’re doing here for our maven project  and i have an alternate version of the pipeline   in this repository as well so it’s called  jenkinsfile old and if we open this up   the first difference that you’ll see between the  two Jenkins pipelines is that here I’m using the   tools directive to specify a maven insulation  and a jdk and I’m also specifying the maven   options environment variable as well so I guess  you could say this is a more generic version   of the pipeline because maybe your your tool  dependency isn’t maven or the JDK maybe your   tool dependency is that you know Python 2.7  or Python 3 is installed for you to be able to   run the steps in the Jenkins pipeline and if you  recall from the presentation GitLab does not have   an equivalent for this tools directive where we  can specify tools that should be installed or   present on the Jenkins agent that is executing  the steps defined in the pipeline instead in   GitLab we’re using docker images to to perform the  same role as the tools directive and that gives   us somewhat similar functionality to the image  keyword in our GitLab pipeline and to show an   example if I search for docker uh pipeline plugin  and then cloud ps they actually have an article that specifies how you can run a docker image kind  of the same way you run it here you can see we’re   invoking docker.image we’re specifying the image  name in a version and then inside of this block   we download the source code and then we execute  maven so this is happening within the docker   container similar to how this script is being  executed inside of a docker container so this is   another alternative method to writing our Jenkins  pipeline but it does require the the docker   pipeline plugin to be present on the Jenkins  instance and I think that’s what I like about   GitLab in this case even though it’s it’s great  that Jenkins is an extensible tool in the case of   GitLab none of 
Okay, so let's go ahead and commit our GitLab pipeline. I'm going to exit out of these windows and commit the changes, which should trigger a pipeline to run automatically. We should see it pop up here in a second, and you can see that a pipeline is now running, so I can select View pipeline. We have the test job running, so I'll open up the test job and see where it's at.

Okay, it looks like the job has downloaded the Maven dependencies and then started the tests. You can see down here that it began a session with the remote web driver to execute the tests; three tests ran with zero failures, so it was a successful build. It then creates the cache and uploads the artifacts, which in this case are the Surefire JUnit test reports. In the first run of the pipeline we would expect to see the Maven dependencies downloaded, but in future runs those dependencies should come from the cache and won't be re-downloaded.

I'll navigate to the Pipelines section. The first thing I'd like to point out on the pipelines page is that next to our pipeline there's an artifacts dropdown where we can download the artifacts generated by the pipeline; in this case we just have the JUnit test reports. If I navigate into the pipeline, we get the graphical view of the pipeline execution, but under the Tests tab we can also see the results of the test execution. We have a 100% success rate, and we can see which jobs executed tests. If I go into the test job, it lists the test class name, the name of each test, the status of that test, and the duration of that particular test.

Now, these tests use the LambdaTest Selenium automation grid as the remote web driver, so if we navigate to our LambdaTest account we should be able to see the results of the tests and the test scenarios there. The results of those tests are recorded, and we can access the recordings from the left-hand menu: if I select Automation, it takes us to the Automation page, where we can view a timeline of test results. I'll click on the most recent run, and from the Automation Logs tab we can view the results of each of our test scenarios. We had three test scenarios defined: one for Windows 10 on Firefox version 93, one for Windows 10 on Chrome version 94, and a final one running on macOS with Microsoft Edge version 94.
For each of these test scenarios we can view a recording of the scenario. If I select play, we can see the website that we connect to in our tests; it adds five items to the to-do list, and once those five items are added, it begins checking them off and marking them as completed, which clears each item from the list. Okay, and that concludes the test scenario. I won't go through each one, but you can see that in every scenario the test defined in our test class completes successfully. For each of these scenarios we can also use the tabs up here to dive deeper into the details of the scenario execution; in the event that a scenario failed and we had to debug it, we could explore all the details of that scenario from these tabs.

So I'm going to navigate back to our GitLab project, and I think that wraps up everything I wanted to cover. I hope you enjoyed this video, and I will see you in the next one.

Hey, what's up everybody? My name is Moss, and welcome back to this tutorial series on GitLab. In the last video we discussed the differences between Jenkins and GitLab CI, and I walked you through a sample migration from a Jenkins pipeline to a GitLab pipeline. In this video I'm going to introduce you to some additional features of GitLab pipelines: we are going to cover the packaging and releasing features of GitLab.

Let's review our learning objectives for this module. After completing this module, you should be able to do the following: deploy artifacts from a GitLab pipeline to the GitLab package registry, describe GitLab releases, and describe the GitLab package, container, and infrastructure registries.

A software release in a GitLab project may include the following: generic software packages generated from GitLab pipeline job artifacts, such as platform-specific binaries; release notes; release evidence, which includes everything associated with the release, such as issues, milestones, or test reports; and, by default, a snapshot of the GitLab project's source code.

Now let's talk about the registries available in GitLab, starting with the package registry. The GitLab package registry allows you to use GitLab as a public or private software package registry, and it supports a number of package managers. You can publish and share software packages to the registry, and these packages can be published from within a CI/CD pipeline. Packages are associated with a GitLab project and any groups that project has been added to.

The GitLab container registry is a private container registry for publishing and consuming container images, and every GitLab project has its own container registry. The container images in the registry are associated with a GitLab project and any groups that project has been added to. You can utilize container images stored in the registry from a GitLab pipeline, and you can also build and publish container images to the registry from a GitLab pipeline (a sketch of such a job follows after this overview).

Lastly, we have the infrastructure registry, which supports publishing and sharing Terraform modules. Each GitLab project has an infrastructure registry, and you can build and publish a Terraform module from a GitLab pipeline.
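As an aside, the video doesn't demonstrate the container registry, but a job that builds and pushes an image to a project's container registry, as mentioned above, might look roughly like this. The job name, the stage, the tag choice, and the presence of a Dockerfile at the repository root are all assumptions for illustration:

```yaml
# Hypothetical job publishing an image to the project's container
# registry using GitLab's predefined CI_REGISTRY_* variables.
publish-image:
  stage: deploy              # assumes a deploy stage exists
  image: docker:latest
  services:
    - docker:dind            # Docker-in-Docker service to run docker commands
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
```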
Now that we've defined releases and the available registries in GitLab, let's get started. Okay, so the first thing that I'd like to do is quickly walk through the code base we're going to be using today, and once we've done that, we'll write a GitLab pipeline for it. I'm currently logged into GitLab, and I'm inside the test pipeline project. The test pipeline code base should be pretty familiar to you if you've been following this series: it's a Maven project that was generated using the quickstart Maven archetype. If we take a look at the source directory, it has a single App.java class that simply prints "Hello World" to the console, and it has one matching test class as well.

What's more relevant to review in this video are the ci_settings.xml file and the pom.xml file, because inside these files we have configurations that will be used by the GitLab pipeline we write. The ci_settings.xml file is used as our Maven settings.xml file, so let's quickly take a look at the settings. Inside our settings we define a server configuration with an id of gitlab-maven, and we also define a property with the name Job-Token whose value references a predefined GitLab environment variable; this is how we reference a GitLab environment variable from within a file. That variable is called CI_JOB_TOKEN.

This predefined environment variable is one of several ways we can authenticate with the GitLab package registry. If we want to authenticate with the package registry from within a GitLab pipeline, we should use the CI job token. However, if we want to authenticate with the package registry from outside a GitLab pipeline, say from a Jenkins pipeline or from our local machine, we need either a GitLab personal access token generated for our account with the api scope, or a deploy token generated from our GitLab project, and the value of the deploy token or personal access token is used in place of the CI job token. In our case we are going to write a GitLab pipeline, so I'm going to use the CI_JOB_TOKEN environment variable to authenticate with the package registry and deploy a Maven package to the project's registry. One thing I'd like to point out before we move on is that this code base will be made available to you.

Okay, so now let's take a look at the pom.xml file. Inside the pom.xml file, I modified the version tag, and again I'm referencing a predefined GitLab environment variable, this time CI_COMMIT_TAG. When we write the GitLab pipeline, we're going to configure it to run any time a Git tag is pushed or created in our GitLab project, so the pipeline and the pom.xml file will both reference this environment variable. The other thing that was modified in this pom.xml file is that we added a repositories tag specifying the GitLab Maven package registry URL. To form the package registry URL we use GitLab predefined environment variables again: the first variable referenced is CI_API_V4_URL, which is the API URL for GitLab, followed by /projects/, followed by the next predefined variable, CI_PROJECT_ID, which is our GitLab project's id, followed by /packages/maven.
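To make that concrete, the two files described above presumably look something like the following reconstruction, which follows the pattern GitLab's Maven registry documentation uses. The exact contents and property syntax (the `env.` prefix is Maven's standard way to read environment variables) are illustrative, not copied from the repository:

```xml
<!-- ci_settings.xml: lets Maven authenticate to the GitLab package
     registry by sending the predefined CI_JOB_TOKEN as an HTTP header. -->
<settings>
  <servers>
    <server>
      <id>gitlab-maven</id>
      <configuration>
        <httpHeaders>
          <property>
            <name>Job-Token</name>
            <value>${env.CI_JOB_TOKEN}</value>
          </property>
        </httpHeaders>
      </configuration>
    </server>
  </servers>
</settings>
```

```xml
<!-- pom.xml fragments: the version comes from the pushed Git tag, and the
     registry URL is assembled from predefined GitLab variables. A matching
     distributionManagement section pointing at the same URL is typically
     also required for mvn deploy. -->
<version>${env.CI_COMMIT_TAG}</version>

<repositories>
  <repository>
    <id>gitlab-maven</id>
    <url>${env.CI_API_V4_URL}/projects/${env.CI_PROJECT_ID}/packages/maven</url>
  </repository>
</repositories>
```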
Okay, so now that we've reviewed the pom.xml file and the ci_settings.xml file, we can start writing a GitLab pipeline that deploys this Maven code base as a Maven package to this GitLab project's package registry. What I'm going to do now is navigate to the pipeline editor: under CI/CD I'll select Editor, and then I'll select the option to create a new CI/CD pipeline.

The first thing I want to do is delete all of the stuff we don't need in this pipeline template, so I'll delete the comments, and then I'm going to delete the build and test stages and leave just the deploy stage, removing all of the jobs associated with the test and build stages so that we're left with the deploy stage and the single deploy job. In this case, since we're just demonstrating the package registry feature in GitLab, I'm removing the build and test stages, but of course in a production pipeline you'd want to keep the build and test stages in addition to the deploy stage.

After removing those stages, I'm going to define the Docker image the deploy job should use when executing its steps: at the top of the pipeline definition I'll specify the image keyword with the Maven image at its latest version. The next thing I want to do is define the relevant pipeline variables, so under stages I'll add the variables keyword with a single variable for the Maven options, which points the local repository at the project directory with -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository; CI_PROJECT_DIR is another predefined environment variable we can use.

After defining the variables, I'm going to use a keyword we haven't used in previous videos: the workflow keyword. The workflow keyword is used to control the behavior of the pipeline, and under it we can specify the rules keyword. Rules essentially create conditions under which the pipeline will or will not run. In our case we're going to define a single rule: under the rules keyword I'll put a hyphen, then the conditional if statement, then $CI_COMMIT_TAG. What this rule says is that the pipeline will run only if it was triggered by the creation of a Git tag, whether the tag was pushed to the project or created manually from the Repository section on the Tags page.

After specifying the workflow rules, we'll create a cache using the cache keyword: below the workflow section I'll add the cache keyword, specify paths, and list one path, .m2/repository. The final thing we need to do is actually invoke Maven in the deploy job, so in the deploy job I'll remove the first placeholder statement, invoke mvn deploy, and specify the settings file with -s ci_settings.xml; then I'll also remove the remaining comments.

If I wrote this pipeline correctly, it should publish the Maven package to this GitLab project's package registry, and the editor says this is a valid GitLab CI configuration, so let's commit this pipeline and see if it runs successfully.
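Putting those steps together, the finished pipeline should look roughly like this sketch; the job name and the exact spelling of the Maven options variable (MAVEN_OPTS here) are assumptions based on the walkthrough:

```yaml
# Deploys the Maven package to the project's package registry,
# but only for pipelines triggered by a Git tag.
image: maven:latest

variables:
  # Keep the local Maven repository inside the project directory
  # so it can be cached between pipeline runs.
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

workflow:
  rules:
    - if: $CI_COMMIT_TAG      # run only when a Git tag triggers the pipeline

cache:
  paths:
    - .m2/repository

stages:
  - deploy

deploy-job:
  stage: deploy
  script:
    - mvn deploy -s ci_settings.xml
```

Note that pushing a tag from a local clone (for example, `git tag v1.0.0` followed by `git push origin v1.0.0`) would satisfy the workflow rule just as well as creating the tag in the UI, which is what we do next.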
I'm going to navigate down and select Commit changes. Based on the workflow keyword, this shouldn't run automatically until we've created a new tag in the repository, and if we navigate to the Pipelines page you can see that no pipeline has started yet. So I'm going to navigate to the Repository section and then to the Tags page. From here I'll select New tag, give it the tag name v1.0.0, and create it from the main branch; I'll leave the message and release notes blank for now and select Create tag.

Okay, that created a new Git tag, and if I navigate back to CI/CD and then Pipelines, it triggered a new pipeline. I'll open up that pipeline, where we have the single deploy job, then open the deploy job and take a look at the output. It looks like the pipeline completed successfully. I won't go through all of the output, but starting at line 614 we can see where mvn deploy started, and on line 617 we can see where it began uploading the Maven package to the URL we specified in the pom.xml file. You can see in the URL how it expanded the predefined environment variables: the API URL for GitLab, the GitLab project id, and the tag version name as well. So it appears that it successfully uploaded our Maven package to this project's package registry.

To verify that, let's go to the Packages and registries section and navigate to the package registry page. From this page you can see that we have one package available; it's associated with the v1.0.0 tag, and it's a Maven package. If I open the package, we can see all the details related to this Maven package: the page provides a history of the package, it shows how to add the package as a dependency in a downstream project and how to set up this GitLab project's registry in my pom.xml file, and I can also access the files associated with this Maven package.

That's pretty much all I had for this video. I hope you enjoyed it, and I will see you in the next series. If you'd like to learn more, be sure to follow our blog at lambdatest.com/blog, as well as our LambdaTest community at community.lambdatest.com. You can also earn resume-worthy LambdaTest Selenium Certifications at lambdatest.com/certifications.
