Several articles from Al Riyadh newspaper cover a range of topics. One section details a Saudi cabinet meeting led by King Salman, discussing international issues like Ukraine and domestic matters such as industrial growth. Another highlights Saudi Arabia's annual "يوم العلم" (Flag Day), tracing the history and symbolism of the national flag. Local news features include the transformation of a building code committee into a center and development projects like the renovation of historical mosques. Other articles report on sports news, economic updates regarding oil prices, and international events including the Syrian conflict and aid efforts for Gaza. Finally, lifestyle pieces discuss Ramadan traditions, healthy fasting, and a Saudi drama series.
Study Guide: Saudi Arabia, Ukraine, and Domestic Developments
This study guide is designed to help you review the provided source material. It includes a quiz to test your understanding, essay questions to encourage deeper analysis, and a glossary of key terms.
Quiz
Answer the following questions in 2-3 sentences each.
What was the main topic of discussion during the Saudi Arabian Council of Ministers’ meeting mentioned at the beginning of the excerpts?
According to the excerpts, what is the purpose of Saudi Arabia hosting talks between the United States and Ukraine?
What is the significance of the “Saudi-Ukrainian Business Council” being re-established in 2025?
What kind of assistance has Saudi Arabia provided to Ukraine, according to the text?
What is "يوم العلم" (National Flag Day) in Saudi Arabia, and what does it symbolize?
Briefly describe the historical evolution of the Saudi Arabian flag as mentioned in the text.
What are some of the key objectives behind the Saudi Arabian initiative to develop historical mosques?
What was the primary focus of the "صم بصحة" (Fast with Health) campaign during Ramadan?
According to the article on the industrial sector, what factors contributed to the rise in Saudi Arabia’s industrial production index?
What were the key concerns affecting global oil and gold prices as discussed in the financial news sections?
Essay Format Questions
Consider the following questions for a more in-depth analysis of the source material. Develop a well-structured essay for each, drawing evidence from the provided text.
Analyze Saudi Arabia’s role in international diplomacy, particularly concerning the Ukraine crisis, as portrayed in the excerpts. What motivations and strategies appear to guide its actions?
Discuss the significance of Saudi Arabia’s focus on both international relations (e.g., Ukraine) and domestic development (e.g., Vision 2030, National Flag Day) as reflected in the provided news articles. How do these two areas intersect or reinforce each other?
Evaluate the importance of cultural heritage and national identity in Saudi Arabia, using the examples of the historical mosque restoration project and the celebration of National Flag Day.
Based on the articles discussing the industrial sector and financial markets, assess the current economic climate in Saudi Arabia and its connections to global economic trends.
Examine the various social initiatives and cultural exchanges mentioned in the text (e.g., “شم بصحة,” cultural exchange with China). What do these initiatives reveal about Saudi Arabia’s evolving society and its engagement with the world?
Glossary of Key Terms
مجلس الوزراء (Majlis al-Wuzarāʾ): The Council of Ministers in Saudi Arabia, the main executive body responsible for policy-making.
ولي العهد (Wali al-Ahd): The Crown Prince, the designated successor to the throne. In the context of the article, refers to Prince Mohammed bin Salman bin Abdulaziz Al Saud.
خادم الحرمين الشريفين (Khadim al-Ḥaramayn al-Sharifayn): The Custodian of the Two Holy Mosques, a title held by the King of Saudi Arabia. In the context, refers to King Salman bin Abdulaziz Al Saud.
الاستراتيجية الوطنية (al-ʾistirātījiyyah al-waṭaniyyah): The national strategy, often referring to overarching plans for development in various sectors.
رؤية 2030 (Ruʾyah 2030): Vision 2030, Saudi Arabia’s ambitious plan for economic diversification, social reform, and sustainable development.
يوم العلم (Yawm al-ʿAlam): National Flag Day in Saudi Arabia, celebrated to commemorate the adoption of the country’s flag.
كود البناء السعودي (Kūd al-Bināʾ al-Saʿūdī): The Saudi Building Code, a set of regulations and standards for construction in the Kingdom.
الرقم القياسي للإنتاج الصناعي (al-Raqm al-Qiyāsī lil-ʾIntāj al-Ṣināʿī): The industrial production index, a measure of the volume of industrial output.
التنمية المستدامة (al-Tanmiyah al-Mustadāmah): Sustainable development, development that meets the needs of the present without compromising the ability of future generations to meet their own needs.
التبادل التجاري (al-Tabādul al-Tijārī): Trade exchange, the buying and selling of goods and services between countries or entities.
Briefing Document: Analysis of News Articles from “20712.pdf” (March 12, 2025)
This briefing document summarizes the main themes, important ideas, and key facts presented in the provided excerpts from the Arabic newspaper Al Riyadh (file "20712.pdf"), dated Wednesday, March 12, 2025. The articles cover a range of domestic and international topics relevant to Saudi Arabia.
1. Saudi Arabia’s Role in International Diplomacy and Conflict Resolution:
Ukraine Crisis: A significant focus is placed on Saudi Arabia’s efforts to mediate and support a peaceful resolution to the Ukraine crisis. The Council of Ministers welcomed the commencement of talks between the United States and Ukraine, hosted by the Kingdom. This initiative stems from Saudi Arabia’s balanced relationships with various parties and its leading role in promoting global security and peace.
Quote: "The Council welcomed the start of talks between the United States of America and Ukraine, which the Kingdom is hosting as part of its efforts to end the crisis, especially in light of its balanced relations with various parties and its leading role in promoting global security and peace, and establishing dialogue as part of the factors of international stability and peace."
President Zelenskyy's Visit: Ukrainian President Volodymyr Zelenskyy paid an official visit to Saudi Arabia on March 10, 2025. He met with Crown Prince Mohammed bin Salman, and they discussed strengthening the distinguished relations between the two friendly countries in all fields. Zelenskyy congratulated the Kingdom on winning the bids to host Expo 2030 in Riyadh and the FIFA World Cup 2034.
Quote: "His Excellency the President of Ukraine, Mr. Volodymyr Zelenskyy, paid an official visit to the Kingdom of Saudi Arabia on 10 Ramadan 1446 AH, corresponding to March 10, 2025… They held an official discussion session, during which they reviewed aspects of the distinguished relations between the two friendly countries and expressed their desire to strengthen them in all fields."
Economic Ties with Ukraine: Both sides emphasized the robustness of economic ties, noting a 9% growth in trade volume in 2024. They agreed on the importance of joint work to further develop trade relations, encourage mutual visits of commercial and investment delegations, explore joint opportunities (including Vision 2030 projects and the reconstruction of Ukraine), and welcomed the re-establishment of the Saudi-Ukrainian Joint Business Council in 2025.
Quote: "The two sides praised the robustness of the economic ties between the two friendly countries and noted the importance of joint work to develop the volume of trade exchange, which reached a growth rate of (9%) in 2024, and agreed on the necessity of overcoming the challenges facing the development of trade relations…"
Support for Syria’s Unity and Sovereignty: The Council of Ministers reiterated Saudi Arabia’s full support for the unity, sovereignty, and territorial integrity of Syria, emphasizing dialogue as the way to resolve crises and ensure the stability and safety of its people.
2. Domestic Developments and Initiatives in Saudi Arabia:
National Campaign for Charitable Work: The Council of Ministers praised the national campaign for charitable work, highlighting the deeply rooted values of benevolence, giving, and solidarity within Saudi society. The state’s significant care and attention to this vital sector were also noted.
Quote: "His Excellency explained that the Council noted what the national campaign for charitable work embodied; the deeply rooted benevolence and values of giving and solidarity in Saudi society's culture, pointing in this context to the great care and attention the state gives to this leading sector."
Positive Economic Indicators: The Council touched upon the positive growth rates of major projects during 2024, reflecting the progress made under the Kingdom’s Vision 2030 programs and national strategies.
Quote: "The Council also touched upon the positive growth rates it achieved during the year 2024, thus reflecting the success of the wise leadership in accelerating the projects of the Kingdom's Vision 2030…"
"Flag Day": The Kingdom and its people celebrate "Flag Day" annually, recognizing the national flag as a historically significant symbol of the Saudi state since its establishment in 1727. The flag represents sovereignty, unity, cohesion, and national identity. Its evolution over Saudi history, from the first Saudi state onwards, was detailed, culminating in its current form adopted by King Abdulaziz.
Quote: "The Kingdom of Saudi Arabia and its people celebrate 'Flag Day' every year, which is considered a historically significant day marking the launch of the national flag, narrating the history extending across the history of the Saudi state since its establishment in the year 1139 AH corresponding to 1727 AD."
Project to Renovate Historical Mosques: A project by Crown Prince Mohammed bin Salman is ongoing to develop and restore historical mosques, preserving their architectural style and Islamic heritage. The second phase includes 30 mosques across 13 regions, utilizing traditional building techniques with natural materials to reflect the local environment. The renovation of the Faydhat Athqab Mosque in Hail, dating back to 1946, was specifically mentioned.
Quote: "In a step reflecting the continuation of the project of Prince Mohammed bin Salman to develop historical mosques and restore their religious, cultural, and social role, to preserve their architectural styles and highlight their Islamic heritage, and to rebuild them in environmentally sustainable ways with natural elements…"
Volunteer Initiatives: Over 100 volunteers participated in an environmental campaign in the Madinah region to clean parks and valleys, supported by the “Noumou Alghita’ Alnabati” (Development of Vegetation Cover) Foundation and implemented by the “Widian” (Valleys) Association.
3. Economic News:
Increase in Industrial Production Index: The industrial production index rose by 1.3% in January 2025 compared to the same month of the previous year, driven by growth in manufacturing activity and improvements in the water supply, sewerage, and waste management sectors. This highlights the effectiveness of the National Industry Development and Logistics Program launched in 2019 and the National Industrial Strategy adopted in October 2022, which aim to diversify the economy and increase non-oil exports.
Quote: "The industrial production index rose by 1.3% during the month of January 2025 compared to the same month of the previous year 2024, supported by an increase in manufacturing activity…"
Oil Price Fluctuations: Oil prices experienced slight gains amid concerns about a potential recession in the United States and the impact of tariffs on global growth, despite OPEC+ focusing on increasing supplies.
Gold Price Increase: Gold prices rose due to a weaker US dollar and increasing concerns about a global recession.
Transformation of Building Code Committee: The Council of Ministers approved the transformation of the National Building Code Committee into the Saudi Building Code Center, aiming to enhance performance and efficiency in the construction sector, improve infrastructure quality, and promote sustainability in line with Vision 2030.
Quote: "The Council of Ministers approved the transformation of the National Committee for the Saudi Building Code into the (Saudi Building Code Center), and the approval of the application of the Saudi Building Code system and the amendment of its regulatory arrangements…"
4. Regional Issues:
Gaza Water Crisis: The ongoing Israeli restrictions on the entry of aid and fuel into the Gaza Strip have exacerbated the suffering of over two million displaced Palestinians, leading to a severe water crisis. The destruction of 580 desalination plants and the disruption of electricity have severely impacted access to clean water. Bakeries are also facing closure due to fuel shortages.
Quote: "The Israeli government continues to prevent the entry of aid into the stricken Gaza Strip, which has exacerbated the humanitarian suffering of more than two million displaced Palestinians who remain in the sector…"
Israeli Airstrikes in Syria: Israeli warplanes reportedly carried out airstrikes targeting radar and weapons systems in southern Syria, considering the presence of such systems a “significant threat.”
Saudi Arabia’s Position on Syria: Saudi Arabia has closely followed the developments in Syria, expressing its satisfaction with the positive steps taken to preserve the unity of the Syrian people and their capabilities. The Kingdom supports efforts to prevent Syria from sliding into chaos and division, emphasizing the importance of non-interference in its internal affairs. It also condemned Israeli airstrikes in Syria.
Quote: "The Kingdom of Saudi Arabia has followed the rapid developments in sisterly Syria and expresses its satisfaction with the positive steps that have been taken to ensure the safety of citizens, prevent bloodshed, and preserve the institutions and capabilities of the Syrian state."
5. Arts, Culture, and Society:
Chinese Cinema and Cultural Exchange: The Chinese film "Ne Zha 2" has achieved significant success in China and globally, highlighting the concept of family, which resonates with the deeply rooted importance of kinship and responsibilities in Arab culture. Cultural cooperation between China and Saudi Arabia is gaining momentum, with increasing exchanges in traditional arts, language education (Mandarin being introduced in Saudi schools), and cultural events like the "Chinese Lanterns" during Riyadh Season and the Saudi Travel Festival in Beijing.
Ramadan Preparations and Programming: Saudi television channels are preparing diverse Ramadan programming, aiming to attract viewers during the holy month.
"Sum bi Sihha" (Fast with Health) Campaign: A health awareness campaign titled "صم بصحة" (Fast with Health) was launched during Ramadan to promote healthy habits, encouraging citizens and residents to walk daily, get enough sleep, and focus on well-being.
6. Sports:
Saudi Arabia to Host AFC U-17 Asian Cup Finals: The Saudi Arabian Football Federation announced its full readiness to host the finals of the AFC U-17 Asian Cup, following a final inspection visit by the Asian Federation.
Financial Issues in Football Clubs: Players in some football clubs are reportedly refusing to participate in training due to unpaid salaries, highlighting the financial difficulties faced by some clubs.
Al-Hilal and Al-Ahli Advance in Asian Champions League: Al-Hilal and Al-Ahli secured their places in the quarter-finals of the AFC Champions League.
This briefing provides a snapshot of the key issues and events covered in the selected articles, reflecting Saudi Arabia’s active role in regional and international affairs, its ongoing domestic development under Vision 2030, and various social, cultural, and economic activities.
Saudi Arabia: Diplomacy, Economy, Culture, and Development
Frequently Asked Questions
What was the main focus of the Saudi Council of Ministers’ meeting discussed in the article? The main focus of the Saudi Council of Ministers’ meeting was on strengthening international security and stability. This included reviewing the results of discussions with Ukraine’s President Volodymyr Zelensky, emphasizing the Kingdom’s commitment to supporting international efforts to resolve the crisis in Ukraine and achieve lasting peace. The council also welcomed the start of talks between the United States and Ukraine, highlighting the Kingdom’s role in fostering dialogue given its balanced relationships with various parties.
What economic developments were highlighted in the Saudi Council of Ministers’ report? The report highlighted a rise in the industrial production index, underscoring the effectiveness of the national industrial strategy. It also noted positive growth rates in major projects under Vision 2030 during 2024, reflecting the Kingdom’s progress in implementing national programs and strategies. Furthermore, the strong economic ties between Saudi Arabia and Ukraine were emphasized, with a 9% growth in trade volume in 2024, and both countries welcomed the re-establishment of the joint business council in 2025.
What role is Saudi Arabia playing in the Ukraine crisis, according to the article? Saudi Arabia is actively involved in seeking a resolution to the Ukraine crisis through diplomacy and humanitarian aid. The Kingdom welcomed and hosted discussions between the US and Ukraine, leveraging its balanced international relations to promote dialogue. It has also provided humanitarian assistance to Ukraine, totaling $410 million, including relief supplies and petroleum products. Notably, the mediation efforts of Crown Prince Mohammed bin Salman in 2022 led to a prisoner exchange agreement between Russia and Ukraine.
What is the significance of "يوم العلم" (National Flag Day) in Saudi Arabia? "يوم العلم" (National Flag Day) is a significant annual celebration in Saudi Arabia that honors the nation's flag as a symbol of its sovereignty, unity, cohesion, and national identity. The flag's history dates back to the establishment of the first Saudi state in 1727 and has evolved through different periods. The current form, adopted during the reign of King Abdulaziz in 1932, features the green color symbolizing Islam and peace, the شهادة التوحيد (Shahada) representing the foundation of the state's Islamic identity, and a sword symbolizing strength, justice, and chivalry, beneath which a palm tree was later added to represent prosperity and sustainability. The day serves as a reminder of the Kingdom's historical journey, from its unification to its modern renaissance under Vision 2030.
What initiative was launched in Saudi Arabia to promote a healthy lifestyle during Ramadan? The "صم بصحة" (Fast with Health) campaign was launched as an innovative initiative to promote a healthy lifestyle during the month of Ramadan. Collaborating with the Public Health Holding Company, the campaign aimed to shift the perception of Ramadan from just a month of abstinence to one of health and activity. It included various activities such as health checks, encouraging daily walking, ensuring adequate sleep, electronic workshops, interactive content, health challenges, and a smart application providing personalized dietary plans and exercise schedules.
What is the Saudi Building Code Center, and what is its purpose? The Saudi Building Code Center is a newly established entity, transformed from the National Committee for the Saudi Building Code. Its purpose is to enhance performance and efficiency in the building and construction sector by implementing and updating the Saudi Building Code, establishing a building code academy, and fostering research and development in this field. This initiative aims to improve the quality of construction, enhance infrastructure sustainability, and align with the goals of Vision 2030.
What challenges and developments are occurring in the digital media landscape, as mentioned in the article? The article highlights the overwhelming influx of digital content through various platforms, shaping the awareness and interests of future generations. It raises the critical question of discerning purposeful and impactful content that contributes to the building and development of societies and values.
What cultural exchanges and diplomatic anniversaries were noted between China and Saudi Arabia? The article mentions the 35th anniversary of the establishment of diplomatic relations between China and Saudi Arabia, which coincides with the cultural year between the two countries. This milestone is marked by increasing cultural exchanges, including the growing popularity of learning the Chinese language in Saudi Arabia, the inclusion of Chinese in some educational curricula, and cultural events such as the “Island of Wonders: China-Saudi Arabia Exhibition” in Beijing’s Imperial Palace Museum and Chinese performances during Riyadh Season. Additionally, the “Meeting of Artists on the Silk Road” initiative facilitates artistic exchange between Chinese and Arab artists. The legislative and advisory bodies in China emphasized the importance of deepening cultural and popular exchanges with Saudi Arabia and the world, and China proposed the Global Civilization Initiative to promote respect for diverse civilizations and enhance cultural exchanges.
Saudi Arabia’s Role in International Security and Peace
The sources highlight Saudi Arabia’s significant and active role in promoting international security through various diplomatic and political efforts.
Saudi Arabia’s Leading Role in Global Security and Peace:
The Kingdom is described as having a pioneering role in strengthening global security and peace.
This role is attributed to the directives of the Crown Prince, Mohammed bin Salman bin Abdulaziz Al Saud.
Saudi Arabia’s foreign policy is based on a clear vision and is guided by the Custodian of the Two Holy Mosques, King Salman bin Abdulaziz.
The Kingdom’s balanced relationships with friendly countries enable it to achieve common goals and interests.
Saudi Arabia works on finding just and peaceful solutions to various Arab and Islamic issues.
As a founding member of the United Nations, the Kingdom contributes to its programs and goals, aiming to achieve international peace.
Saudi Arabia has been, and continues to be, a factor of stability and world peace.
The high international trust in the King and the Crown Prince facilitates the Kingdom’s ability to bridge the views of conflicting parties.
Efforts in Resolving International Conflicts:
The Kingdom hosted talks between the United States and Ukraine in Jeddah, demonstrating its commitment to finding a peaceful resolution to the Ukrainian crisis.
These talks reflect the Kingdom’s ongoing efforts and initiatives since the outbreak of the Ukrainian crisis, in coordination and consultation with concerned parties.
The choice of Saudi Arabia as the host reflects the international appreciation for the Custodian of the Two Holy Mosques and the Crown Prince, as well as the Kingdom’s ability to bring different viewpoints closer.
Saudi Arabia has long been a key destination for resolving global crises, having previously brought together leaders from the United States, Russia, and Ukraine to discuss peaceful solutions.
The Kingdom sees dialogue as the only way to find a peaceful solution to the Ukrainian crisis that enhances global security and stability.
The Crown Prince successfully mediated a prisoner exchange agreement between Russia and Ukraine in 2022.
Supporting Regional Stability and Sovereignty:
The Kingdom has expressed its rejection of calls for the displacement of Palestinians.
Saudi Arabia fully supports the unity, sovereignty, and territorial integrity of Syria.
The Council of Ministers commended the Syrian leadership’s steps towards achieving national reconciliation and stability.
The Kingdom welcomed the signing of an agreement regarding the integration of all civilian and military institutions in northeastern Syria into the Syrian state institutions.
The Kingdom lauded the Syrian leadership’s measures to achieve national peace in Syria and the efforts to complete the process of building state institutions.
Russia also views a united and prosperous Syria as crucial for regional stability and is in contact with other nations regarding the situation.
A statement emphasizes the need for the Syrian people to unite to face their enemies and support their government’s efforts for security, peace, and stability.
Addressing Humanitarian Concerns and Condemning Actions that Threaten Security:
The Kingdom has strongly condemned Israel’s cutting off of electricity to the Gaza Strip.
Saudi Arabia reiterated its categorical rejection of Israeli violations of international humanitarian law.
The Kingdom demands immediate international action to restore electricity and the flow of aid to the Gaza Strip without conditions.
It also calls for activating international accountability mechanisms for these serious violations.
International Cooperation for Security:
The Crown Prince reviewed strategic cooperation with the US Secretary of Defense, Lloyd Austin, and discussed regional developments and joint efforts to enhance regional and international security and stability.
The Kingdom emphasizes strengthening partnerships with international organizations in various fields.
In summary, the sources clearly illustrate Saudi Arabia’s proactive and multifaceted approach to international security. This involves high-level diplomacy in hosting crucial talks, consistent support for the sovereignty and stability of neighboring nations, strong condemnation of actions that undermine humanitarian principles and regional peace, and active engagement in international cooperation to foster a more secure global environment.
Saudi Arabia’s Bilateral Relations and Diplomatic Efforts
Based on the sources, Saudi Arabia engages in various bilateral relations with other countries, focusing on cooperation, mutual interests, and addressing regional and international issues.
Here are some specific examples of bilateral relations discussed in the sources:
Saudi Arabia and Ukraine: The sources highlight significant bilateral engagement between Saudi Arabia and Ukraine.
A session of talks was held with the participation of high-ranking officials from both countries, as well as the United States, to discuss resolving the crisis in Ukraine.
These talks in Jeddah reflect Saudi Arabia’s ongoing efforts and initiatives since the start of the Ukrainian crisis.
The Saudi side expressed its hope for the success of efforts to end the crisis in Ukraine, in accordance with international law and the principles of sovereignty and territorial integrity.
The Ukrainian side expressed its appreciation for Saudi Arabia’s efforts in this regard and gratitude for the aid provided by the Kingdom.
Discussions between the two countries also covered cooperation in the fields of oil, gas, petrochemicals, agriculture, food industries, and food security, with a welcome for the expansion of the private sector’s role.
Military and defense cooperation and its development were also discussed.
Saudi Arabia and the United States: Bilateral relations with the United States are also evident.
The US participated in the talks held by Saudi Arabia regarding the Ukrainian crisis. This indicates a level of coordination and shared interest in the matter.
The Crown Prince reviewed strategic cooperation with the US Secretary of Defense, Lloyd Austin, and discussed regional developments and joint efforts to enhance security and stability.
Saudi Arabia and Turkey: The Turkish Minister of National Defense, Yasar Guler, was received in Saudi Arabia, and bilateral relations between the “brotherly countries” were reviewed. Discussions also included exploring cooperation in the military and defense fields, regional and international developments, and efforts to achieve security and stability.
Saudi Arabia and the Cooperation Council for the Arab States of the Gulf (GCC): There is a mention of a potential memorandum of understanding for cooperation in the field of knowledge and publishing between the King Fahd National Library in Saudi Arabia and the General Secretariat of the GCC. This suggests efforts to strengthen cultural and intellectual ties within the Gulf region.
Saudi Arabia and the Arab Administrative Development Organization: A potential memorandum of understanding in the field of training between the Ministry of Civil Service in Saudi Arabia and the Arab Administrative Development Organization is under discussion. This indicates a focus on developing administrative capabilities within the Arab world.
Saudi Arabia and China: The sources briefly mention cultural cooperation between China and the Arab world, noting that such exchanges contribute to the flourishing of global culture and the promotion of understanding between civilizations.
Saudi Arabia, Egypt, Jordan, and GCC Leaders: The Crown Prince’s invitation to the leaders of Egypt, Jordan, and the GCC for a meeting suggests a focus on a unified stance on regional issues and the avoidance of further conflicts.
The sources emphasize Saudi Arabia’s commitment to strengthening its relations with friendly countries in a way that contributes to achieving common goals and interests. The Kingdom’s active role in hosting talks and engaging in discussions across various sectors demonstrates its dedication to fostering positive bilateral ties for regional and international benefit.
Saudi National Campaigns: Joud Regions and Fast with Health
Based on the sources, there are mentions of at least two national campaigns in Saudi Arabia: the "Joud Regions" campaign and the "صم بصحة" (Fast with Health) campaign.
The “Joud Regions” Campaign:
This campaign was launched by His Royal Highness Prince Saud bin Nayef, the Governor of the Eastern Province, and his working team led by Fahd M. Al-Jubairi.
The campaign aimed to foster a spirit of contribution and solidarity within the Eastern Province.
Prince Saud bin Nayef called on the people of the Eastern Province to donate to the campaign based on their capabilities, whether they were affluent individuals or business leaders. He emphasized that even small contributions are significant.
The campaign aimed to achieve stability for housing and was supported by the wise leadership, various sectors (public, private, and non-profit), and individuals.
Fahd Al-Jubairi noted that this was not the first version of the campaign and that previous versions had contributed to achieving housing stability. He expressed thanks and gratitude to all sectors for their cooperation and participation in the campaign’s success in the Eastern Province.
The success of the campaign is seen as an extension of the characteristics of Saudi society, which is marked by solidarity and mutual support, as highlighted by Prince Saud bin Nayef.
The "Fast with Health" Campaign:
This campaign is described as different and unique due to its integration of digital interaction with social media.
It utilized rich visual content across platforms and reached millions of users.
Smart applications were used to make the physical and health experience interactive and to track participants' steps.
This made the activity engaging and motivating, with a wide impact across different age groups, including children, youth, and the elderly, who participated in the 40 health and activity challenges.
Notably, approximately 30% of the participants were over the age of 60.
While the "Joud Regions" campaign is specifically mentioned within the context of the Eastern Province, the call for contributions from all who are able and its aim to address a fundamental need like housing stability suggest a scale and impact that align with the idea of a national effort implemented regionally. The "Fast with Health" campaign, with its nationwide digital reach and impact across various demographics, is clearly a national campaign focused on health and wellness.
Saudi Arabia: Emerging Themes of a National Vision
While the provided sources do not explicitly use the term “Saudi Vision” or delve into a detailed exposition of its tenets, they offer considerable insights into the Kingdom’s current priorities, long-term objectives, and guiding principles across various domains. These elements strongly suggest the underlying themes and directions of a comprehensive national vision.
Based on the sources, key aspects that align with a potential “Saudi Vision” include:
Global Leadership in Peace and Security: Saudi Arabia actively positions itself as a pivotal player in fostering international peace and security. This is demonstrated by its hosting of high-level talks, such as the US-Ukraine discussions in Jeddah, and its consistent efforts to mediate and de-escalate conflicts. The Kingdom’s commitment to finding peaceful solutions to international crises and its high international standing are crucial components of this global leadership ambition.
Regional Stability and Unity: A strong emphasis is placed on the stability and unity of the region, particularly concerning Arab and Islamic nations. The Kingdom’s firm rejection of calls for the displacement of Palestinians and its unwavering support for the unity, sovereignty, and territorial integrity of Syria exemplify this commitment. The welcoming of the integration agreement in Syria and the commendation of Syrian leadership’s steps towards national reconciliation indicate a vision for a stable and unified regional landscape.
Upholding Humanitarian Principles and Justice: Saudi Arabia consistently voices its condemnation of actions that violate international humanitarian law, such as Israel’s blockade and cutting off of essential supplies to Gaza. The demand for immediate humanitarian access and the activation of accountability mechanisms reflect a commitment to justice and the well-being of affected populations, aligning with broader ethical and humanitarian goals likely embedded in a national vision.
Strengthening International Partnerships: The sources highlight active engagement in bilateral relations across various sectors. Discussions on strategic cooperation with the United States in defense, exploring military and defense cooperation with Turkey, and potential collaborations with the GCC and the Arab Administrative Development Organization point towards a strategy of building strong international alliances to achieve shared objectives. The Crown Prince’s engagement with leaders from Egypt, Jordan, and the GCC further underscores the importance of regional coordination.
National Identity and Heritage: The extensive coverage of “يوم العلم” (National Flag Day) and the symbolism of the Saudi flag emphasizes the significance of national identity, unity, and historical heritage. The respect accorded to the flag and its deep-rooted meaning reflect core national values that would undoubtedly form part of a long-term vision. Furthermore, initiatives like the project to renovate historical mosques indicate a commitment to preserving the Kingdom’s rich cultural and Islamic heritage.
Societal Development and Well-being: The national campaigns like "جود المناطق" (Joud Regions), aimed at fostering community solidarity and addressing social needs, and "صم بصحة" (Fast with Health), focused on promoting health and well-being through digital engagement, demonstrate national-level initiatives geared towards improving the quality of life for citizens. The emphasis on "المحتوى الهادف" (purposeful content) for building a conscious and developed society suggests a focus on intellectual and cultural growth as part of national progress.
In conclusion, while the term "Saudi Vision" is not explicitly elaborated upon in these sources, the consistent themes of international leadership, regional stability, commitment to justice and humanitarian principles, strong international partnerships, emphasis on national identity and heritage, and a focus on societal development strongly indicate the underlying directions and priorities of a comprehensive national vision. That vision aims to position Saudi Arabia as a significant and influential global player while ensuring the progress and well-being of its people and the wider region.
The new Volvo VNL aims to redefine industry standards in safety, efficiency, and driver experience. Its aerodynamic design improves fuel efficiency, while a new active safety platform integrates advanced technologies like pedestrian detection and eCall for emergency assistance. The redesigned driver environment prioritizes comfort and productivity with features like a digital display, integrated parking cooler, and versatile bunk. Volvo emphasizes connectivity for enhanced safety through eCall and operational efficiency with remote diagnostics and over-the-air updates. Ultimately, the VNL focuses on driver well-being and accident prevention through constant innovation.
Volvo VNL: Innovation, Safety, and Efficiency Study Guide
I. Study Questions
Aerodynamics: How does the redesigned cab of the Volvo VNL contribute to fuel efficiency?
Active Safety Platform: Describe the features and benefits of the Volvo proprietary active safety platform.
Driver Environment: What key improvements have been made to the Volvo VNL’s driver environment, and how do these improvements enhance the driver’s experience?
Living Environment: Detail the features of the Volvo VNL’s living environment, including the versatile bunk and integrated parking cooler, and explain their impact on driver comfort and productivity.
User Experience Design: What are the two parts (hardware and software) of the new features being introduced in the Volvo VNL from a user experience design perspective?
Safety Philosophy: Explain Volvo’s approach to safety and how it is reflected in the design and technology of the Volvo VNL.
eCall System: Describe the eCall system and its potential benefits in emergency situations.
Fuel Efficiency: What design innovations contribute to the Volvo VNL’s increased fuel efficiency, and what is the percentage improvement?
Service and Support: Outline the services and support available to Volvo VNL owners, including Volvo service contracts, remote diagnostics, and Volvo Action Service.
Connectivity: How does connectivity in the Volvo VNL go beyond just lowering costs and improving productivity?
II. Quiz
Instructions: Answer the following questions in 2-3 sentences each.
How does the aerodynamic design of the new Volvo VNL contribute to improved fuel efficiency?
What are the key features of the Volvo’s active safety platform, and how do they enhance safety?
Describe the functionality and benefits of the Volvo VNL’s versatile pull-down bunk.
How does the new stalk-mounted control in the Volvo VNL enhance the driving experience, and what functions does it control?
Explain Volvo’s commitment to safety in the design and development of the Volvo VNL.
What is the eCall system, and how does it benefit drivers in emergency situations?
How does the Volvo VNL improve fuel efficiency compared to previous models?
Describe the remote diagnostic capabilities offered for the Volvo VNL, and explain their benefits.
What is Volvo Connect, and what functionalities does it provide for Volvo truck owners?
What is the function of the industry-first integrated parking cooler in the all-new Volvo VNL?
III. Quiz Answer Key
The redesigned cab of the Volvo VNL is aerodynamically optimized to reduce wind resistance. This reduces drag which results in better fuel efficiency.
Volvo’s active safety platform includes features like pedestrian detection and collision mitigation systems. These technologies work together to help drivers avoid accidents and reduce injury risks.
The Volvo VNL features a Murphy bed-style bunk that can be stored into the wall. This versatile design creates a complete dinette solution and maximizes the living space inside the truck.
The new stalk-mounted control allows drivers to engage different drive modes, apply engine braking, and paddle shift between transmission gears. This provides drivers greater control and a more seamless experience.
Volvo’s commitment to safety is central to their brand and design philosophy. They constantly strive for advancements in safety, incorporating leading-edge technologies and features in the Volvo VNL.
The eCall system automatically notifies rescue services in the event of a rollover or airbag deployment. It supplies them with GPS location and a call back number to the cab.
The aerodynamic improvements on the all-new Volvo VNL have yielded up to a 10% increase in fuel efficiency. This efficiency makes it potentially the most fuel-efficient truck on the road.
Remote diagnostics provides critical fault codes, leading to immediate communication and recommendations. These features allow for proactive maintenance, reducing downtime and potential repair costs.
Volvo Connect is an interface for all Volvo truck services and a digital key. It provides a holistic view of vehicle information and tools for managing a more productive and profitable business.
The industry-first integrated parking cooler in the all-new Volvo VNL helps keep drivers cool. This reduces or eliminates idling for up to 8 hours and also saves money on fuel.
IV. Essay Questions
Discuss the ways in which Volvo has innovated in the all-new VNL to improve the overall driver experience, focusing on both the driving and living environments.
Analyze the significance of safety as a core value for Volvo, and how this value is demonstrated through the design and technology of the Volvo VNL.
Evaluate the impact of connectivity on the efficiency, safety, and overall functionality of the Volvo VNL.
Explore the ways in which Volvo’s service and support systems, such as Volvo Action Service and remote diagnostics, contribute to the uptime and productivity of Volvo VNL trucks.
Assess the environmental and economic benefits of the Volvo VNL’s fuel-efficient design and technologies.
V. Glossary of Key Terms
Active Safety Platform: A comprehensive suite of safety technologies designed to prevent accidents and reduce injury risk, including features like pedestrian detection and collision mitigation systems.
Aerodynamic Cab: A cab design optimized to reduce wind resistance and improve fuel efficiency.
eCall: An automatic emergency notification system that alerts rescue services in the event of a rollover or airbag deployment, providing GPS location and other critical information.
Fuel Efficiency: The ability of a vehicle to travel farther on less fuel, often measured in miles per gallon (MPG).
Integrated Parking Cooler: A cooling system that allows drivers to stay comfortable in their trucks without idling the engine, saving fuel and reducing emissions.
Remote Diagnostics: The ability to monitor vehicle health and diagnose potential issues remotely, enabling proactive maintenance and reducing downtime.
Uptime: The amount of time a vehicle is operational and available for use, as opposed to being out of service for maintenance or repairs.
Volvo Action Service: A 24/7 assistance service providing support for scheduling repairs, managing service, and addressing other truck-related issues.
Volvo Connect: An interface that provides a holistic view of Volvo truck services, including digital keys and tools for managing a profitable business.
Versatile Bunk: A bunk design that can be easily configured for multiple purposes, such as sleeping or dining.
All-New Volvo VNL: Safety, Efficiency, and Driver Comfort
Briefing Document: All-New Volvo VNL
Executive Summary:
The all-new Volvo VNL (Volvo Next Generation North America Long-haul) truck is being presented as a significant advancement in the trucking industry, focusing on three core pillars: safety, fuel efficiency, and driver comfort/productivity. Volvo emphasizes its legacy of innovation and aims to set new industry standards with this model. Key features include a redesigned aerodynamic cab, enhanced safety technologies, an improved powertrain, and a driver-centric living environment. The VNL also incorporates advanced connectivity features for improved support and emergency assistance.
Key Themes and Ideas:
Fuel Efficiency: A major emphasis is placed on improved fuel efficiency, primarily through aerodynamic redesign.
"The all-new Volvo VNL carries on our legacy of pioneering groundbreaking technologies and setting new industry standards, starting with its revolutionary aerodynamic cab that is redesigned to put the wind in your favor and improve fuel efficiency by up to 10%."
"The aerodynamic improvements on the all-new Volvo VNL will help you significantly increase fuel efficiency… our design innovations create up to an incredible 10% improvement in fuel efficiency."
Safety: Volvo positions itself as an industry leader in safety, highlighting both active and passive safety technologies.
"You don't become the brand known for safety by playing it safe. Safety is at the heart of all we do; it pushes us to challenge ourselves. Our solutions make it easier for drivers to avoid accidents, reduce injury risk, and make the road safer for all."
"On our latest VNL we introduced a proprietary active safety system that takes integration to a whole new level. Greater visibility gives drivers more time to react, further reducing the risk of collision."
"The all-new Volvo VNL has more standard safety features than ever before and a range of additional premium options that redefine what safety is."
Driver Comfort and Productivity: The design of the VNL prioritizes the driver’s experience, focusing on creating a comfortable and functional living/working environment.
"Drivers deserve the best. That was our guiding principle in designing the living environment of the all-new Volvo VNL."
"Everything is focused around the driver, from not just the front of the vehicle but all the way through to the sleeper."
Features like the "versatile pull-down bunk" that stores into the wall, a dinette solution, strategic interior lighting, and consolidated controls in the bunk area contribute to this.
Connectivity: The VNL is presented as a highly connected truck, emphasizing its ability to improve safety and support through remote diagnostics, over-the-air updates, and emergency assistance.
"The all-new Volvo VNL is one of the most connected trucks in North America, but to us connectivity is more than lowering costs and improving productivity; it has the power to potentially help save lives."
"eCall can notify rescue services automatically, supplying them with GPS location and other important information."
Improved Powertrain: The powertrain is highlighted as more efficient with increased shifting speed.
"a more efficient powertrain with a 30% increase in shifting speed that ensures the Volvo I-Shift remains second to none"
Key Features/Innovations:
Aerodynamic Cab: Redesigned for significant fuel efficiency gains.
Proprietary Active Safety Platform: Advanced system with features like pedestrian detection and emergency call (eCall).
eCall System: Automatically notifies emergency services in the event of a rollover or airbag deployment, providing GPS location.
Driver-Centric Design: Focus on comfort, functionality, and ease of use in the driving and living areas. Includes features like a digital driver information display, stalk-mounted shifting, and a versatile pull-down bunk.
Integrated Parking Cooler: Reduces idling time by keeping the cab cool for up to 8 hours.
Remote Diagnostics and Over-the-Air Updates: Enables proactive maintenance and reduces downtime.
Quotes Supporting the Above:
Safety Focus: "If it's possible to own a word, Volvo owns the word safety. It's who we are, built into our DNA, constantly striving for the best. Our dedication to innovations which elevate safety in and around our trucks has led to our class-leading proprietary active safety platform."
Driver Comfort: "You'll appreciate the big changes providing our most comfortable ride ever, and you'll like the little things too. Every improvement was made with one purpose: to make a driver's job easier."
Conclusion:
The all-new Volvo VNL is positioned as a flagship truck model that represents Volvo’s commitment to safety, efficiency, and driver satisfaction. The combination of aerodynamic improvements, advanced safety technologies, a redesigned driver environment, and enhanced connectivity aims to provide a competitive advantage in the long-haul trucking market.
Volvo VNL: Enhanced Efficiency, Safety, and Driver Experience
FAQ: The All-New Volvo VNL
How does the new Volvo VNL improve fuel efficiency?
The Volvo VNL features a redesigned aerodynamic cab that significantly reduces wind resistance. This design innovation alone results in up to a 10% improvement in fuel efficiency compared to previous models.
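To put the "up to 10%" figure into rough dollar terms, here is a minimal illustrative sketch in Python; the baseline fuel economy, annual mileage, and diesel price are hypothetical assumptions chosen only for the arithmetic, not figures from Volvo.

# Illustrative only: rough annual fuel savings from a 10% efficiency gain.
# Baseline MPG, annual mileage, and fuel price are hypothetical assumptions.
baseline_mpg = 7.0                  # assumed baseline fuel economy (mpg)
improved_mpg = baseline_mpg * 1.10  # "up to 10%" improvement claimed for the VNL
annual_miles = 100_000              # assumed long-haul annual mileage
diesel_price = 4.00                 # assumed price per gallon (USD)

gallons_before = annual_miles / baseline_mpg
gallons_after = annual_miles / improved_mpg
savings = (gallons_before - gallons_after) * diesel_price
print(f"Approximate annual fuel cost savings: ${savings:,.0f}")
# With these assumptions: ~14,286 gal vs ~12,987 gal, roughly $5,195 per year.

Under these assumed numbers the 10% gain is worth on the order of $5,000 per truck per year; the real figure depends entirely on actual mileage, duty cycle, and fuel prices.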
What are the key safety features of the new Volvo VNL?
The VNL boasts a proprietary active safety platform with several advanced features. These include pedestrian detection, enhanced visibility for increased reaction time, and Volvo’s eCall system, which automatically alerts emergency services (911) with the truck’s GPS location and other crucial information in the event of a rollover or airbag deployment. The truck also incorporates reinforced safety technologies to protect drivers in accidents.
What improvements have been made to the driver’s living environment in the new Volvo VNL?
The driver’s living environment has been completely redesigned with comfort and convenience in mind. Key improvements include a versatile pull-down bunk styled after a Murphy bed which transforms into a dinette, improved interior lighting designed for both alertness while driving and relaxation in the living area, strategically placed controls in the bunk area for easy access to features like the parking heater and radio, and a quieter, more comfortable sleeper area with ample storage. It also has an integrated parking cooler that keeps drivers cool while eliminating idling for up to 8 hours.
How does the Volvo VNL improve the driver experience from start to finish?
The Volvo VNL enhances the driver experience by focusing on intuitive design and functionality. From the moment they use the key fob, drivers experience a seamless transition to the cab. The truck features a new stalk-mounted control for drive modes, engine braking, and paddle shifting. Digital displays provide easy-to-read information tailored to specific driving tasks, like navigation. The overall aim is to provide drivers with a sense of control, awareness, and comfort.
What is Volvo’s eCall system and how does it work?
Volvo’s eCall system is a connectivity feature designed to improve safety in emergency situations. In the event of a rollover or airbag deployment, eCall automatically notifies rescue services, providing them with the truck’s GPS location and other relevant information. The system operates independently of the driver’s personal cell phone, ensuring that emergency services can be contacted even if the driver is unable to do so. A call-back number is also provided to the cab for emergency services.
How does Volvo ensure ongoing support for the VNL after purchase?
Volvo offers comprehensive support through its Volvo Service Contracts and a network of approximately 400 certified Uptime dealerships and thousands of service locations across North America. Services include remote diagnostics, over-the-air programming updates, adaptive maintenance based on actual operating conditions, and Volvo Action Service, which provides 24/7 assistance for service scheduling, repairs, and more. Volvo Connect provides a single interface for managing all Volvo Trucks services.
What new transmission features exist in the Volvo VNL?
The Volvo VNL has an efficient powertrain with a 30% increase in shifting speed. This also ensures that the Volvo I-Shift transmission remains second to none.
What are the benefits of the Volvo VNL bunk area?
The Volvo VNL bunk area is a best-in-class living environment with a versatile pull-down bunk styled after a Murphy bed. It can be stored into the wall and then offer a complete dinette solution that comes out in one motion. Also, the control panel has been consolidated down to control the parking heater, radio, and other functions.
Volvo VNL: Safety Technologies and Features
The all-new Volvo VNL incorporates safety technologies, including active and passive systems. Volvo’s dedication to innovation aims to elevate safety in and around their trucks, leading to a class-leading proprietary active safety platform.
Key safety features and technologies:
Active Safety Platform: The VNL has a proprietary active safety system that integrates various technologies to help drivers “see, stop, and stay focused”. This platform has more standard safety features than previous models, with additional premium options available.
Visibility: Enhanced visibility provides drivers with more reaction time, further reducing the risk of collisions.
Collision Mitigation: The active safety system helps drivers avoid accidents and reduce injury risk, making the road safer.
Driver Support: The system is designed to help drivers prevent accidents before they occur.
Connectivity: Connectivity can potentially help save lives; in the event of a rollover or airbag deployment, eCall can automatically notify rescue services, providing them with GPS location and other important information, and can also provide a call back number to the cab for emergency services. The system operates independently from the driver’s personal cell phone.
Structural Safety: The VNL is reinforced with safety technologies, some of which are exclusive to Volvo.
Volvo VNL: Fuel-Efficient Truck Design
The all-new Volvo VNL is designed to be a fuel-efficient truck. Here’s how:
Aerodynamic improvements to the cab improve fuel efficiency by up to 10%.
The design innovations help significantly increase fuel efficiency.
It is designed to be one of the most fuel-efficient trucks on the road.
Volvo VNL: Enhanced Driver Comfort and Living Environment
The all-new Volvo VNL is designed with driver comfort in mind, focusing on both the driving and living environments.
Key aspects of driver comfort:
Living Environment: The design principle prioritizes the driver's well-being, providing the most comfortable ride.
Improvements are made to make a driver’s job easier.
Distinct trim levels incorporate eye-catching designs that complement the cabin’s use of space.
The sleeper area is quieter, more comfortable, and packed with amenities, with everything located conveniently.
Versatile Bunk: Modeled after a Murphy bed, the bunk can be stored into the wall to offer a complete dinette solution, providing flexibility in using the living environment.
Interior Lighting: Lighting in the driving area promotes alertness, while living area lighting is specific to the driver's needs.
Controls and interfaces are strategically placed inside the bunk area, including a consolidated control panel for parking heater, radio, and other functions.
Drivers can close off the curtains and change the lighting inside the sleeper to create a perfect environment.
Seamless Experience: From picking up the key fob to driving, the design aims for a seamless experience, allowing drivers to feel in control, aware, and comfortable.
Driver Interface and Controls: A new stalk-mounted control serves multiple purposes, including engaging drive modes, engine braking, and paddle shifting between transmission gears.
Digital Displays: Digital displays cater to the driver's tasks, such as navigation, and provide instrument views dedicated to driving.
Integrated Parking Cooler: An industry-first integrated parking cooler keeps drivers cool while eliminating idling for up to 8 hours.
Volvo VNL Active Safety System: Enhanced Visibility and Collision Mitigation
The all-new Volvo VNL has a proprietary active safety system that integrates various technologies to help drivers “see, stop, and stay focused”. Volvo’s dedication to innovations that elevate safety in and around its trucks has led to a class-leading proprietary active safety platform.
Key features of the active safety system include:
Visibility: Enhanced visibility gives drivers more time to react, which reduces the risk of collisions.
Collision Mitigation: The active safety system helps drivers avoid accidents and reduce injury risk, making the road safer.
Driver Support: The system helps drivers prevent accidents before they occur.
The active safety platform has more standard safety features than previous models, with additional premium options available.
Volvo VNL: Connectivity Features for Efficiency and Safety
The all-new Volvo VNL is designed to be one of the most connected trucks in North America, with connectivity features focused on lowering costs, improving productivity, and potentially helping to save lives.
Key connectivity features:
eCall: In the event of a rollover or airbag deployment, eCall can automatically notify rescue services, supplying them with GPS location and other important information.
eCall can also provide a call back number to the cab for emergency services.
The eCall system operates independently from the driver’s personal cell phone.
Remote Diagnostics: Remote diagnostic analysis of critical fault codes leads to immediate communication and recommendations.
Remote Programming: Over-the-air updates can be handled during regular stops.
Route Adaptive Maintenance: Maintenance is based on actual operating conditions.
Volvo Connect: Volvo Connect is the interface for all services from Volvo Trucks, including a digital key. It provides a holistic view to facilitate a more productive and profitable business.
Volvo Action Service: Offers 24/7 assistance in the US and Canada; one phone call connects the user with uptime experts to manage service schedules and repairs.
Volvo Assist: Allows the user to monitor vehicle status, review estimates, approve repairs, and communicate directly with a dealer.
All New VOLVO VNL 2025 is a Luxury Hotel Room on wheels!
The Original Text
the allnew Volvo vnl carries on our Legacy of pioneering groundbreaking Technologies and setting new industry standards starting with its revolutionary aerodynamic cab that is redesigned to put the wind in your favor and improve fuel efficiency by up to 10% the new Volvo proprietary active safety platform features pedestrian detection and an industry first ecall that alerts 911 a more efficient powertrain with a 30% increased and shifting speed that ensures the Volvo ey shift remains second to nut a redesigned driver environment with push button start digital driver information display and stock mounted shifting best-in-class living environment with a versatile pul down bunk that will improve productivity and rest an industry first integrated parking cooler to keep drivers cool while eliminating idling for up to 8 hours and true to Volvo trucks promise industry best support when you’re on the [Music] road drivers deserve the best that was our guiding principle in designing the living environment of the allnew Volvo vnl you’ll appreciate the big changes providing our most comfortable ride ever and you’ll like the little things too every Improvement was made with one purpose to make a driver’s job easier our new distinct trim levels incorporate eye-catching design that complements the cabin’s Innovative use of space the sleeper area is quieter more comfortable and packed with amenities everything where you want it and how you need it because that’s how we designed it whether it’s for pre-trip inspection or general maintenance drivers love a truck that’s easy to work around just a simple operation that lets them quickly get back to logging miles the all new versatile bunk that we offer in the vnl is styled after a Murphy bed so we can actually store the bunk into the wall and then offer a complete dinette solution that comes out in one motion this gives the customer complete f ibility with how they use their living environment interior lighting is one area that we really focus on that’s not necessarily obvious aside from the driving area lighting that promotes alertness we also have living area lighting that is specific for the driver the need for the driver to be able to interface with the vehicle is not just limited to the driving area it is also very important in the living area therefore we have strategically placed controls and interfaces inside the monk area even the control panel has been Consolidated down to control parking heater radio and other functions you have the capability to completely close off the curtains change the lighting inside of the sleeper and provide an environment that’s perfect for drivers after driving with the allnew vnl everything is focused around the driver from not just the front of the vehicle but all the way through to the sleeper some of the new features that we’re introducing come in two parts from a user experience design perspective one’s hardware and one’s software from the hardware perspective we are introducing a new stock that serves three purposes one to allow you to engage the different Drive modes the other to allow you to do engine braking and the third to paddle shift between transmission gears we have digital displays that’s cater to the driver’s certain tasks like needing to navigate we have instrument views that are dedicated to allow the driver to focus a lot on driving from the moment they pick up the key fob to the moment they get behind the wheel and start driving that it’s a seamless experience that allows them to feel in control to 
feel aware of their surroundings and be comfortable at the end of the day with the product and operating that product on our nation’s roads you don’t become the brand known for safety by playing it safe safety is at the heart of all we do it pushes us to challenge ourselves our Solutions make it easier for drivers to avoid accidents reduce injury risk and make the road safer for all we continue to believe believe in the power of Next Generation safety Technologies and their ability to help drivers prevent accidents before they [Music] occur on our latest vnl we introduced a proprietary active safety system that takes integration to a whole new [Music] level greater visibility gives drivers more time to react further reducing the risk of collision we go to Great Lengths to advance our zero accident Vision yet recognize the need to protect drivers in the event of an accident the allnew vnl is reinforced with Leading Edge and often Volvo exclusive safety Technologies we are the pioneers of safety and this is what pioneers do Drive progress if it’s possible to own a word Volvo owns the word word safety it’s who we are built into our DNA constantly striving for the best our dedication to Innovations which Elevate safety in and around our trucks has led to our class leading proprietary active safety platform the allnew Volvo vnl has more standard safety features than ever before and a range of additional premium options that redefine what safety is here are just a few highlights of this best-in-class safety platform that helps Drive to see stop and stay focused on the [Music] [Music] road 911 what’s your emergency bottom line our proprietary active safety platform in this new vnl acts on our promise of being the industry leader in safety a promise we keep with every mile driven every load carried and every Journey taken simply put safety is one of our core values and a top priority since day one [Music] one the allnew Volvo vnl is one of the most connected trucks in North America but to us connectivity is more than lowering costs and improving productivity it has the power to potentially help save lives in the event of a rollover or airbag deployment ecall can notify Rescue Services automatically supplying them with GPS location and other important information eall can also provide a call back number to the cab for emergency services the system operates independent from the driver’s personal cell phone feel safe our connectivity is here to let you live your life freely connecting lives connecting people we Define our our eles by our ability to reach the seemingly unattainable in our most daring effort today we’ve set a new standard for what a fuel efficient truck can be the aerodynamic improvements on the allnew Volvo vnl will help you significantly increase fuel [Music] efficiency our design Innovations create up to an incredible 10% Improvement in fuel efficiency are we perhaps the most fuel efficient truck on the road in a word yes maximize your productivity with a Volvo service contract with a network of approximately 400 certified uptime dealerships and thousands of service locations across North America we’re always there when you need us the more you know about your vehicle the better decisions you can make remote diagnostic analysis the critical fault codes leads to immediate communication and recommendations Remote programming via over-the-air updates can be handled during a regular stop and Route adaptive maintenance is based on actual operating conditions with assist you have the 
power to monitor vehicle status review estimates approve repairs and communicate directly with a dealer Volvo Action Service offers 247 assistance in the US and Canada one phone call Connects you with uptime experts to manage service schedule repairs and more One login one holistic view Volvo connect is the interface for all your services from Volvo trucks and your digital key to a more productive more profitable business that way you can do what you do best load and deliver
Affiliate Disclosure: This blog may contain affiliate links, which means I may earn a small commission if you click on the link and make a purchase. This comes at no additional cost to you. I only recommend products or services that I believe will add value to my readers. Your support helps keep this blog running and allows me to continue providing you with quality content. Thank you for your support!
This text offers a detailed guide to using QuickBooks Desktop 2022. It covers a wide range of functions, including setting up company files, customizing the user environment, and managing customers and vendors. The text further explains handling inventory, creating invoices, managing bills, and generating financial reports. It also provides guidance on setting up users, managing payroll, and using the income tracker, aiming to help users optimize their QuickBooks experience. The guide walks users through detailed tasks like creating invoices from estimates, entering bills against inventory, and applying vendor credits.
QuickBooks Online/Desktop Study Guide
Quiz
Answer each question in 2-3 sentences.
What is the purpose of setting a start date when creating a new bank account in QuickBooks?
Explain the difference between a fixed asset and a liquid asset.
Why is it recommended to create “big bucket categories” for assets instead of listing each asset individually?
What is a liability in the context of accounting and QuickBooks? Give an example.
Why should a business owner think about organizing expenses into sub-accounts?
What does “net 30” mean in the context of customer invoicing?
What is a sub-customer in QuickBooks, and why might a business use them?
Why can’t you delete a customer in QuickBooks Online?
Explain the purpose of “undeposited funds” in QuickBooks.
What is a customer statement and what information does it typically include?
Quiz Answer Key
The start date is used to begin tracking the money in the account and helps when reconciling with bank statements. It’s best to choose a date that corresponds with the beginning of a bank statement period to ensure accurate records.
A fixed asset is something a business owns and plans to keep long-term, like a vehicle or property, whereas a liquid asset is something more easily converted to cash, like inventory. Liquid assets are meant to be sold or used quickly.
Creating big bucket categories for assets keeps the chart of accounts organized and prevents it from becoming overwhelming. It simplifies financial reporting and makes it easier to understand the overall asset picture of the business.
A liability is something a business owes to others, such as a loan from a bank. It represents an obligation to pay money or provide goods/services in the future.
Organizing expenses into sub-accounts allows for more detailed tracking and reporting of specific expense types, such as fuel being a sub-account of car and truck expenses. This enables better analysis of where money is being spent and helps identify areas for potential cost savings.
“Net 30” means that the full payment for an invoice is due 30 days from the invoice date. It’s a common payment term used to grant customers a specific period to settle their outstanding balance. (A short date-arithmetic sketch follows this answer key.)
A sub-customer is a way of adding a level underneath a main customer; it can be a specific job, project, or location associated with that customer. Businesses use sub-customers to track revenue and expenses for individual projects or locations related to a single client.
QuickBooks Online doesn’t allow customer deletion to maintain data integrity and prevent the loss of historical transaction information. Instead of deleting, customers can be made inactive, which hides them from most lists but preserves their data in the system.
“Undeposited funds” is a temporary holding account in QuickBooks for payments that have been received but not yet deposited into a bank account. This allows you to accurately record when money was received versus when it was actually deposited, especially when combining multiple payments into a single deposit.
A customer statement is a summary of a customer’s account activity, typically sent at the end of each month, to remind them of any outstanding balances. It includes a listing of invoices, payments, credits, and the total amount due.
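For readers who want to see the “net 30” arithmetic from answer 6 concretely, here is a minimal Python sketch. It is illustrative only, not QuickBooks functionality, and the invoice dates are hypothetical.

```python
from datetime import date, timedelta

def due_date(invoice_date: date, terms_days: int = 30) -> date:
    """Due date for terms like 'net 30': invoice date plus the term length."""
    return invoice_date + timedelta(days=terms_days)

def days_overdue(invoice_date: date, as_of: date, terms_days: int = 30) -> int:
    """Days past due as of a given date; 0 if the invoice is not yet due."""
    return max(0, (as_of - due_date(invoice_date, terms_days)).days)

# Hypothetical invoice issued March 1 on net-30 terms, checked on April 15.
inv_date = date(2024, 3, 1)
print(due_date(inv_date))                          # 2024-03-31
print(days_overdue(inv_date, date(2024, 4, 15)))   # 15 days overdue
```

The same idea underlies the monthly statement: it simply lists invoices whose due dates have passed along with any payments and credits already applied.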
Essay Questions
Discuss the importance of setting up a well-structured chart of accounts in QuickBooks. How does it impact financial reporting and decision-making for a business? Provide specific examples of how different account types (assets, liabilities, equity, income, expenses) contribute to a clear financial picture.
Explain the accounts receivable workflow in QuickBooks. From creating a customer to receiving payment, outline each step and discuss the key features and reports available to manage and track customer balances effectively.
Describe the differences between using sales receipts and invoices in QuickBooks. In what scenarios would each be most appropriate, and what are the implications for accounting and financial tracking?
Discuss the process of managing vendor relationships and accounts payable in QuickBooks. How can QuickBooks help a business track bills, schedule payments, and manage vendor credits effectively?
Explain the purpose of the tags feature in QuickBooks and how it can be used to improve financial tracking and reporting. Provide specific examples of how businesses can leverage tags to gain insights into their revenue and expenses.
Glossary of Key Terms
Account Type: A classification within the chart of accounts that defines the nature of a financial element (e.g., asset, liability, equity, income, expense).
Asset: Something a business owns that has economic value (e.g., cash, accounts receivable, inventory, equipment).
Liability: Something a business owes to others (e.g., accounts payable, loans).
Equity: The owner’s stake in the business, representing the residual value of assets after deducting liabilities.
Income: Revenue generated from the sale of goods or services.
Expense: Costs incurred in the process of generating revenue.
Chart of Accounts: A structured list of all the accounts used to record financial transactions in a general ledger.
Sub-Account: An account that falls under a main account, providing more detailed categorization and tracking.
Accounts Receivable: The money owed to a business by its customers for goods or services sold on credit.
Invoice: A bill issued to a customer for goods or services, requesting payment within a specified timeframe.
Sales Receipt: A record of a sale where payment is received at the time of the transaction.
Customer: An individual or business that purchases goods or services from another business.
Vendor: An individual or business that supplies goods or services to another business.
Credit Memo: A document issued to a customer to reduce the amount they owe, often due to returns, allowances, or errors.
Undeposited Funds: An account used to temporarily hold customer payments before they are deposited into the bank.
Statement: A summary of a customer’s account activity, including invoices, payments, and outstanding balance.
Tags: Customizable labels that can be assigned to transactions to categorize and track specific aspects of a business’s finances.
Purchase Order: A document issued to a vendor, authorizing the purchase of goods or services.
Inventory Part: A physical item that a business buys, stocks, and sells.
Service Item: A non-physical service that a business provides.
Cost of Goods Sold (COGS): The direct costs associated with producing goods or services that a company sells.
General Ledger Number: A unique number assigned to each account in the chart of accounts for identification and organization.
QuickBooks Account, Customer, and Inventory Management Guide
This document reviews key aspects of setting up and managing accounts, customers, and inventory within QuickBooks. It covers topics ranging from chart of accounts configuration to managing customer invoices and payments, as well as utilizing features like “Tags” for enhanced reporting. It covers both the Online and Desktop versions of QuickBooks.
Key Themes and Ideas:
1. Chart of Accounts Setup:
Account Types: The document emphasizes selecting the appropriate account type (e.g., Bank, Asset, Liability, Equity, Income, Cost of Goods Sold, Expense) when creating new accounts. “First thing you’re going to do is pick the account type in this case it will be bank but notice all the other types we’re going to be talking about.”
Naming Conventions: Flexibility is offered in naming accounts. “When you name your account you can name it anything you want.” Examples include naming bank accounts by bank name or using descriptive names like “Operating Account” or “Payroll Account.”
Start Dates and Balances: The importance of using accurate start dates and beginning balances is highlighted to ensure accurate financial tracking. “Just try to make it correspond to the start date of your bank statement.”
Sub-Accounts: The briefing details the use of sub-accounts for more granular tracking within a main account. “See how fuel looks like it’s indented a little bit that’s a sub account and there’s going to be a lot of these you’ll want to add.” Examples include fuel and insurance under auto, and accounting/attorney fees under legal and professional fees.
Assets: Differentiates between fixed assets (long-term, like vehicles or property) and liquid assets (like inventory). Emphasizes using “big bucket categories” for assets rather than listing each item individually.
Liabilities: Differentiates between short-term and long-term liabilities, i.e., what the business owes to others (for example, a loan from a bank).
2. Customer Management (Accounts Receivable):
Customer List Navigation: Explains how to access and navigate the customer list, emphasizing alphabetical organization by last name for easy searching.
Customer Information: Details the information that can be stored for each customer, including company name, contact details, billing/shipping addresses, and email addresses.
Sub-Customers (Jobs/Projects): Describes how to set up sub-customers to track different jobs or projects for the same customer. This enables detailed reporting at both the customer and project levels. “A sub customer is basically going to be a way of adding a level underneath your main customer if you have different jobs that you work on for a particular customer you can actually separate those jobs by actually creating sub customers then you can look at reports for the entire customer but also per sub customer”
Inactivating Customers: Explains how to make customers inactive rather than deleting them, particularly if they have a history of transactions. “Inactive customers actually will not show up when you’re working in other areas of QuickBooks but if you wanted to actually turn them back on you could go and activate them again”
Importing Customers: Details the process of importing customer lists from Excel or CSV files.
3. Sales and Invoicing:
Sales Receipts vs. Invoices: Clarifies the distinction between sales receipts (for immediate payment) and invoices (for future payment). “…the difference in a sales receipt and an invoice is that on a sales receipt the customer is standing right there”
Creating Sales Receipts: Explains how to create sales receipts, including selecting products/services, quantities, prices, payment methods, and deposit accounts (including “undeposited funds”).
Invoicing Customers: Details how to create and send invoices, including customizing invoice templates, adding line items, and applying customer discounts.
Receiving Payments: Describes the process of recording customer payments against open invoices. Explains the function of undeposited funds.
Deposits: “Now that we’ve made a sale for our business we’ve actually invoiced a customer in this case we got paid and now we want to take that money and put it in the bank and that’s where the make deposits option comes in”
Credit Memos: Explains how to issue credit memos to customers, either to correct errors or to provide refunds.
Statements: States that “…a statement is basically a gentle reminder to your customers that they owe you some money typically statements are sent at the end of each month and they show the activity that happened during that month you don’t have to send out statements but it’s a nice little feature to keep your customers abreast of what’s going on with their account”
Customer Groups: “This is a newer tool designed to help small business owners with one of the hardest problems that they have and that’s actually getting your customers to pay you”
4. Inventory Management:
Item Types: Explains the various item types available in QuickBooks (Service, Inventory Part, Non-Inventory Part, Other Charge, Subtotal, Group, Discount, Payment, Sales Tax Item, Sales Tax Group).
Inventory Parts Setup: Describes how to set up inventory parts, including defining purchase and sales descriptions, costs, prices, preferred vendors, and initial quantities on hand.
Purchase Orders: Details the process of creating purchase orders to track inventory orders.
Vendor Credits: “Sometimes a vendor will issue you a credit that you need to apply to a bill or it could be you want to apply it to your account and use it in the future but I want to show you how to handle those once the vendor sends them to you”
5. QuickBooks Preferences
“The preferences in QuickBooks control how QuickBooks is set up and that way it matches your business needs.” You can change how the different features are set up in the QuickBooks Desktop version.
6. Users
You do not have to set up users in QuickBooks, but it is highly suggested.
Setting up users allows you to assign different permissions to the different people who use QuickBooks.
7. Tags:
A newer feature in QuickBooks Online that lets you create custom labels that appear in a drop-down list when you are in different transactions.
8. Jobs
“For example, you might want to keep track of how profitable it was to remodel the kitchen versus building an addition”
Conclusion:
This document provides a high-level overview of setting up and managing critical business functions within QuickBooks. By following these guidelines, businesses can establish a solid foundation for accurate financial tracking and efficient customer relationship management.
QuickBooks Online and Desktop: Common Accounting Tasks
QuickBooks Online and Desktop FAQ
How do I set up a new bank account in QuickBooks? First, select the bank account type. Then, name the account (e.g., “Checking,” “Operating Account”). An optional description can be added. Select a start date for tracking money; using the beginning of the fiscal year or month is recommended. Enter the ending balance from the prior period’s bank statement as the starting balance. Remember this impacts opening balance equity. Bank accounts can also be set up for PayPal, Square, etc.
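As a rough illustration of why the starting balance “impacts opening balance equity,” here is a minimal double-entry sketch in Python. The account names mirror common QuickBooks labels, but the code is purely illustrative (not QuickBooks internals) and the amount is made up.

```python
# Illustrative double-entry sketch: a bank opening balance is offset
# against an "Opening Balance Equity" account so the entry stays balanced.
opening_balance = 5250.00  # hypothetical ending balance from the prior bank statement

journal_entry = [
    {"account": "Checking",               "debit": opening_balance, "credit": 0.0},
    {"account": "Opening Balance Equity", "debit": 0.0,             "credit": opening_balance},
]

# A balanced entry: total debits equal total credits.
assert sum(l["debit"] for l in journal_entry) == sum(l["credit"] for l in journal_entry)
print(journal_entry)
```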
What are assets in QuickBooks, and how should I categorize them? Assets are items your business owns that add value. They are divided into fixed assets (long-term, like vehicles or property) and other current assets (more liquid, like inventory). Create broad categories for assets (e.g., “Vehicles,” “Furniture and Fixtures,” “Equipment”) rather than listing each item separately. Aim for 7-10 categories. An accountant can help you decide categories and values, including depreciation (which QuickBooks does not automatically handle).
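Because QuickBooks does not calculate depreciation for you, your accountant typically supplies the figure. As a hedged illustration of one common approach, here is a minimal straight-line depreciation sketch in Python; the cost, salvage value, and useful life are hypothetical, and your accountant may use a different method.

```python
def straight_line_depreciation(cost: float, salvage: float, useful_life_years: int) -> float:
    """Annual straight-line depreciation: (cost - salvage value) / useful life."""
    return (cost - salvage) / useful_life_years

# Hypothetical work truck: $40,000 cost, $4,000 salvage value, 5-year useful life.
annual = straight_line_depreciation(40000, 4000, 5)
print(annual)  # 7200.0 per year, usually entered in QuickBooks as a journal entry
```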
How do I create sub-accounts in QuickBooks, and what are they used for? When creating a new account, choose the “sub-account of” option and select the main account. Sub-accounts provide more detail within a broader category, such as “Fuel” as a sub-account of “Car and Truck.” This helps organize expenses and other categories for detailed reporting.
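To show why sub-accounts such as “Fuel” under “Car and Truck” make reporting easier, here is a small illustrative Python sketch (not QuickBooks code) that rolls sub-account totals up to their parent accounts; the categories and amounts are hypothetical.

```python
from collections import defaultdict

# Hypothetical expense transactions categorized as "Parent:Sub-account".
transactions = [
    ("Car and Truck:Fuel",                 120.00),
    ("Car and Truck:Insurance",            300.00),
    ("Legal and Professional:Accounting",  450.00),
]

parent_totals = defaultdict(float)
for category, amount in transactions:
    parent = category.split(":")[0]   # roll each sub-account up to its parent
    parent_totals[parent] += amount

print(dict(parent_totals))
# {'Car and Truck': 420.0, 'Legal and Professional': 450.0}
```

The sub-account detail is still available for drill-down reporting, while the parent totals keep the overall expense picture easy to read.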
What is Accounts Receivable, and how do I manage customers in QuickBooks? Accounts Receivable represents money owed to your business by customers. To manage customers, navigate to the “Sales” section, then “Customers.” You can view open invoices, overdue amounts, and payment history. The customer list displays names, phone numbers, and balances. Customers can be added manually or imported from Excel.
How do I add a new customer and a sub-customer (job) in QuickBooks? To add a customer, click “New Customer” and enter company and contact details, billing/shipping addresses, and payment settings. Display names can be customized. To add a sub-customer, click “New Customer” again, enter the sub-customer’s name and details, and choose the parent customer under the “Is sub-customer” option. This allows you to track income and expenses related to specific projects or jobs.
What is the difference between a sales receipt and an invoice, and how do I create each in QuickBooks? A sales receipt is used when a customer pays immediately, while an invoice is used when a customer will pay later. To create a sales receipt, select “Sales Receipt” from the “+” menu. Choose the customer, payment method, products/services sold, and deposit account. To create an invoice, select “Invoice” from the “+” menu, enter customer details, billing address, payment terms, and products/services.
What are “undeposited funds” in QuickBooks, and how do I use them when recording payments and making deposits? “Undeposited Funds” is an account used to temporarily hold payments received before they are deposited into a bank account. When receiving a payment, deposit it to “Undeposited Funds.” When you make a bank deposit, group all the payments included in that deposit from “Undeposited Funds” into a single transaction, so the deposit matches your bank statement.
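Here is a minimal Python sketch of the grouping idea behind “Undeposited Funds” (illustrative only, not QuickBooks code); the customers and amounts are made up.

```python
# Payments received and parked in "Undeposited Funds" until the next bank run.
undeposited_funds = [
    {"customer": "Customer A", "amount": 250.00},
    {"customer": "Customer B", "amount": 125.50},
    {"customer": "Customer C", "amount": 600.00},
]

# One bank deposit groups all of them into a single transaction, so the
# recorded deposit matches the single line that appears on the bank statement.
deposit_total = sum(p["amount"] for p in undeposited_funds)
print(f"Deposit to Checking: {deposit_total:.2f}")  # Deposit to Checking: 975.50
```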
How can I track different aspects of my business using “Tags” in QuickBooks? Tags are a new feature that allows you to categorize transactions with custom labels. You can create tags for different aspects of your business, such as “Fountains,” “Landscaping,” or “Pest Control,” and assign them to invoices, expenses, and other transactions. This helps you to run reports and analyze your business performance based on specific categories.
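To illustrate how tags feed reporting, here is a small Python sketch (not the QuickBooks API) that totals income by tag; the tag names echo the examples above, and the amounts are invented.

```python
from collections import defaultdict

# Hypothetical invoices, each labeled with one of the tags mentioned above.
invoices = [
    {"tag": "Fountains",    "amount": 1200.00},
    {"tag": "Landscaping",  "amount": 3400.00},
    {"tag": "Pest Control", "amount": 550.00},
    {"tag": "Landscaping",  "amount": 800.00},
]

income_by_tag = defaultdict(float)
for inv in invoices:
    income_by_tag[inv["tag"]] += inv["amount"]

for tag, total in sorted(income_by_tag.items()):
    print(f"{tag}: {total:.2f}")
# Fountains: 1200.00, Landscaping: 4200.00, Pest Control: 550.00
```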
QuickBooks Live Bookkeeping: Collaborative Accounting Solution
QuickBooks offers a Live Bookkeeping service where you can pay Intuit for a bookkeeper to manage the books. With this service, both you and the bookkeeper have logins to the company file, facilitating collaboration and the ability to have live conversations.
QuickBooks will soon offer a Cash Flow Center to provide insights into your cash accounts, such as checking accounts. The Cash Flow Center will allow you to monitor the flow of cash in and out of your accounts. You can join a wait list to receive updates and information about the Cash Flow Center when it becomes available.
The gear icon is located in the top right-hand corner of the QuickBooks dashboard and provides access to various lists and tools within QuickBooks. It allows customization of the company file.
The gear menu provides access to:
Tools
Company options (such as address)
Profile menu for the Intuit account
Chart of accounts
To customize the company file, navigate to the gear icon and then to Account and Settings. This area allows you to:
Add a company logo.
Edit the company name.
Input an Employer Identification Number (EIN) or Social Security number.
Select the tax form used when filing taxes.
Manage company and customer-facing emails.
Enter the company phone number and website.
Manage company addresses.
The gear icon should not be confused with the “+ New” option, which is used to create a new transaction.
QuickBooks can send payment reminders to customers with outstanding invoices. If this feature is desired, it can be set up to prompt at a certain time of day, and can be set up daily or weekly.
To set up customer invoice reminders:
Go to Edit then Preferences.
Go to Payments.
Choose to send reminders to customers that have payments that are due.
Set the time of day to prompt.
Choose daily or weekly prompting.
The Chart of Accounts is a listing of the different areas where you might spend or receive money, and it is the backbone of all accounting. It is the most important list in QuickBooks: every transaction runs through one of the accounts in the Chart of Accounts.
Here’s what the Chart of Accounts includes:
Lists of different areas where you might actually spend money
Different areas where you might receive money
The Chart of Accounts screen includes:
The name of the account.
The type of account.
The detail type, giving more information about the type chosen.
The QuickBooks balance.
The bank balance, if downloading transactions from the bank.
Here are some important points regarding the Chart of Accounts:
When setting up the Chart of Accounts, name the accounts in a way that makes sense.
When looking at the list, the names are alphabetical.
If you want to close them, you have to go to the Advanced section in Account and Settings.
If you want to turn on general ledger numbers, you have the option to do that.
You can access the chart of accounts by going through the gear icon.
You can access the chart of accounts by going to the left and clicking on Accounting.
Everything in QuickBooks will run through the chart of accounts.
Make sure information goes into the correct categories so that when reports are run, the information is accurate (a short categorization sketch follows this list).
There is a part two to the chart of accounts.
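Because every transaction runs through an account, accurate reports come down to correct categorization. Here is a minimal, illustrative Python sketch (not QuickBooks code) that builds a tiny profit-and-loss summary by totaling hypothetical transactions per account type.

```python
# Hypothetical transactions, each posted to one Chart of Accounts category.
transactions = [
    {"account": "Sales", "type": "income",  "amount": 5000.00},
    {"account": "Fuel",  "type": "expense", "amount": 320.00},
    {"account": "Rent",  "type": "expense", "amount": 1500.00},
]

income = sum(t["amount"] for t in transactions if t["type"] == "income")
expense = sum(t["amount"] for t in transactions if t["type"] == "expense")

print(f"Income:     {income:.2f}")             # 5000.00
print(f"Expenses:   {expense:.2f}")            # 1820.00
print(f"Net profit: {income - expense:.2f}")   # 3180.00
```

If a transaction were posted to the wrong category, the individual account lines on the report would be wrong even though the net profit might still look plausible, which is why careful categorization matters.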
QuickBooks for Beginners: 7.5-Hour QuickBooks Online and QuickBooks Desktop Pro Training
The Original Text
foreign [Music] says subscribe and click on the Bell icon to receive notifications we’ve made a downloadable transcript of this tutorial available as a free study tool just click the link below in the video details to get this hello welcome to QuickBooks Online my name is Cindy mceugan and I’m going to help walk you through this series of videos I wanted to take a few moments in this introduction video and just give you a little bit of information about what to expect as you go through the course and also let you know a little bit about myself and my background I’ve actually been a software trainer for about 20 years I teach a lot of different types of software QuickBooks being one of those I’m also a QuickBooks consultant I work with both the desktop and the online version I’m going to be able to relay some information about both as we go along but more importantly we’re talking about the online version here the online version has an advantage and that is that you’ll be able to access your company file from anywhere you happen to be that has internet access if you work out in the field for example you might have your laptop with you or need to access it through your phone and these are great reasons to sign up with a subscription for the QuickBooks online service we’re going to actually take this from the very beginning I’m not going to assume you know anything I’m going to actually start off showing you how to go online and pick the correct subscription we’ll talk a little bit about working with QuickBooks online and mobile devices and then we’ll take it from there and start actually setting up our company file we’re going to work with customers that side of things is called accounts receivable you’ll need to know how to create invoices what to do when the customer pays you how to actually put that money in the bank we’re also going to talk about the flip side which is your accounts payable that’s anything that has to do with the bills that come in the mail that you have to pay you’ll want to track those so that you know at any time how much you owe we’re going to be going through and talking a little bit about products and services those are physical things and sometimes they’re a service you provide but sometimes you buy products and you also sell products and we need to know how to set those up properly we’ll be looking quickly at payroll there’s a lot of other things we’ll look at but I want you to have a really good foundation for building your company file from the very beginning when you watch these videos make sure you watch them in order they won’t make very much sense if you get them out of order make sure you watch them all and make sure you watch them till the very end because a lot of times there’s a really good piece of information right at the very end of one of these videos if you have questions as we go along feel free to shoot us an email and we’ll get right back to you and answer your questions with that being said let’s go ahead and get started we’re going to flip over to section two and talk about the different subscriptions that you have available when you decide to go with QuickBooks Online thanks for coming back we are just getting started and the very first thing you’ll need to do before you start using QuickBooks is actually sign up with one of their subscription Services it is not free to use the QuickBooks Online you do have to pay for it monthly and they have four different subscriptions that you can subscribe to I want to go ahead and pull up the website so 
that you can actually look and compare the different ones with me as we go through this all right what you’ll want to do is make sure you navigate to quickbooks.intuit.com and you’ll see this screen right here which will give you a lot of different information on their subscription services one thing I want to point out is that they do have a try it free for 30 days for each of these that way you can see if you like it before you actually jump in and make a purchase I want to scroll down here because you’ll notice that this is where they have the different subscriptions listed and I want to compare these with you the pricing that you see here may change depending on when you watch this video you’ll notice for each of these that they have a monthly fee and right now you can save 50 off if you sign up for this three months at a time The Simple Start is the really basic one that you’ll want to start with if you’re a brand new business and you don’t have a lot of customers maybe not a lot going on with your business yet it’s great to start with this one and then you can always upgrade as you need more of the features let me go ahead and go through some of these features with you you’ll notice for The Simple Start that you can only have one user and that basically means that only one person can log in with that username and password if you need more you’ll notice that if you look at the essentials you can have up to three users uh the plus has up to five and then you can have up to 25 if you’re actually using the advanced let me scroll down a second because I want to show you that there are a lot of really good things about the Simple Start you’ll see for example that in addition to multiple users you can track your income expenses they’ll all let you do that all of these will help you maximize your tax deductions they’ll all have mileage invoicing and accepting payments they’re all going to have reports as well but you’ll notice that the Simple Start and the one that’s to the right of it only give you the basic reports where if you need some more advanced reporting then you’ll want to go with these two over here which are the plus and the advanced and even with the basic reports you’re going to have a lot of reports QuickBooks is really good about reporting you can do estimates in all of these if you think about a construction company you would want to actually estimate a job before you actually start doing the work and getting paid for it and that’s a really great feature they all have let me see scrolling down here you’ll notice that you can track sales and sales tax in all of these you can capture and organize your receipts meaning you can scan them in you can also manage 1099 contractors in all of these and that’s real important because those are what you call your subcontractors and you need to know how to properly handle those you’re going to be able to actually manage the bills if you go with the essentials the Plus or the advanced so notice that if you have the Simple Start you can’t handle the bills in QuickBooks and to me that’s really important because you want to track everything you owe because you’re going to need to really stick to your budget when you first start your business you cannot track time if you have Simple Start or you can’t do what they call tracking job profitability in The Simple Start even though like I said this is great for a new business starting out you’ll quickly want to upgrade to one of the others moving on to the essentials the essentials just doesn’t 
have the inventory feature the job tracking profitability you can kind of see the things here it does not have you can’t send batch invoices and really if you don’t do a lot of these it may not be important to you the one that sometimes people want is the inventory feature and you would have to upgrade to the Plus or the advanced to get that you’ll notice that the plus is the most popular one and it does not have the options for importing and sending batch invoices it doesn’t have the business analytics and insights options but you may not need these if you do want all of that you’ll want to go up to the advanced subscription here the other thing I want you to know is that you may go back and forth with these subscriptions because it could be that you add users like we saw earlier and you need to have maybe four people accessing your QuickBooks so there’s a lot of different things you need to think about and you can always call them if you’re really not sure which one would work best for your business I wanted you to be aware of that because the next thing you’ll have to do is actually start with one of these and you can actually choose to go ahead and buy now or like I showed you earlier you can go up to the top of the screen and you can actually try free for 30 days and that’s where I want to stop right now we’re going to go through and talk about setting up your company file after you pick a subscription over module two but before we do that I want to briefly talk to you a little bit about how QuickBooks Online works with mobile devices over in section three I’ll see you there shortly before we go ahead and wrap up module one I wanted to briefly talk to you a little bit about some things QuickBooks does with the online version and how mobile devices actually work with QuickBooks Online the first thing I want you to know is that your online version is constantly evolving you might log in one time and be used to where a particular option happens to be and the next time you log in it may not be there at all or maybe in a different location or look different when they roll out the changes they do not roll them out across the board for everyone at the same time that’s why your friend may have a version and he or she does not see the updates that you see but you’ll eventually get there the other thing I wanted to mention is that you have the ability to use different mobile apps for QuickBooks and they sync with QuickBooks depending on if you have an Android or an iPhone you can just go into your Play Store or go into the App Store in your iPhone and just look for the QuickBooks apps and you’ll notice that if you download one of them you’ll be able to take it out in the field with you and for example if you need to create an invoice out in the field you can do it right there on your phone and it will sync with your actual online version now the apps will not have all of the options that are available but they will have the basic most common ones that you would want to use go ahead also and look in your app store and see if there are other software apps that aren’t made by Intuit but would work well with QuickBooks and it might be something that you might need in your business so there’s all kind of apps that work with QuickBooks I just want to make you aware of that and if you ever wanted to just get a handle on what some of the changes are that they’re making in your version you can actually log into quickbooks.intuit.com forward slash blog and then you’ll be able to see all of the changes 
and stay on top of what’s new that’s going to wrap up module one let’s go ahead and jump into module two and start talking a little bit about working with the company files that you have to set up in QuickBooks we’re starting to work in module two now and this is where we’re going to talk a little bit about the QuickBooks company files anytime you create a file in QuickBooks it is called a company you can have as many companies as you would like often what you will see is a small business owner might set up one company with his personal information and another company that has the business information Company files do not talk to each other you don’t have to worry about the data getting mixed up and like I said you can have as many as you’d like in QuickBooks you have to either create a brand new one which we’ll do in section three you might already have the desktop version of QuickBooks and you want to upload your file to the online version we’ll look at that in section two but real quick what I want to talk to you about here is the fact that you do have the availability of a sample file that you can go in and play with as much as you want you do not have to sign up with a subscription or anything like that to access the sample file all you have to do is head on over to Qbo Dot intuit.com forward slash r-e-d-r forward slash test drive and the company file you’re going to be working with here is Craig’s design and landscaping services it is a service based business let’s go ahead and head on over there and we’ll check out Craig’s design and landscaping services the first thing you will have to do is just verify your real person I’m going to check the box I’m not a robot and then in this case it asks me to pick all the bicycles I’m going to go through the list and make sure I got them all and I’ll hit verify now it knows a real person and I can access Craig’s design and landscaping services now while this is pulling up let me just mention a couple of other things about the sample file every day they update the date so you’ll see that date change you’re actually going to be working in the year 2021 in the practice exercise so just kind of know that when you go in here and this is what QuickBooks looks like when you first open it up now I do want to go through the screen and get you familiar with everything but I want to do that over in section four before we actually go through this I want to show you how to upload your data if you wanted to bring your desktop file over or in Section 3 talk about creating a new file so we’ll come back to Craig’s design and landscaping services a little bit later right now I want you to head over to section two and let me show you how to upload your QuickBooks desktop files to the online version okay we’ve already looked at some of the sample files that QuickBooks has we now need to talk about how do you actually go through and set up your new QuickBooks online account all we’re going to do is navigate to where we looked at the different online subscriptions over module one and then we’ll go ahead and decide which of those subscriptions we’d like to sign up with or we can sign up with the 30-day free trial let me go ahead and flip over and we will sign up for that 30-day free trial to show you how this works all right as you can see I’ve navigated back to quickbooks.intuit.com I think I’ll take advantage of the free 30-day trial I’m going to use this link right here if you see a pop-up that asks you if you’d like to sign up with the 30-day free trial you can 
use that option as well what I’m going to have to do here is create an Intuit account and I can do this a couple different ways I can actually sign up with Google and if I have multiple Intuit products then I can go ahead and use that one login for all of them I might also choose to use an email address and sign up that way I think I’ll go ahead and do that and use one that I just set up for this and it’s got my name here also it asks you to plug in your mobile number that’s recommended but you don’t have to and the reason that you may want to go ahead and do that is if you happen to forget your username or your login information and to it can help you recover that information by looking at your telephone number that you’ve plugged in you will want to create a password and make sure it’s something that you’re going to remember but hard enough that someone else can’t actually try to get into your account remember a good password anywhere in your computer will have at least eight to twelve characters you’re going to have a combination of capital letters small letters you might have special characters I’m going to go ahead and choose a sign up with email once I’m done and this is going to create my Intuit account I do get the option to skip the trial and go ahead and purchase my subscription but I think I’ll go ahead and continue with the trial for now what’s going to happen now is it’s going to start setting up what we call our company file each file in QuickBooks is called a company you can have as many companies as you like in QuickBooks you this is going to launch us through What’s called the easy step interview where it’s going to ask us some questions and based on how we answer those questions it will set up all the options in our company file for us first we’ll see some basic information it wants to know what is the name of your business I’m going to call mine a b c services and then the next thing it will ask us is to describe the type of business you do now if you start typing in things like Plumbing electrical things like that then it’s going to start pulling from a drop down list and looking for those first few characters there and if you see something close to what you do on the list just choose it there’s no wrong answer here just choose something close to what you do I’m going to pick for example services and you’ll notice that there are several types of services that it thinks I may want to choose from I’ll just pick Professional Services in this case another thing that you have an option to do is if you’ve been using the desktop version of QuickBooks and you’d like to bring that data into your online version you have the ability to do that you can check this box and then it will take you into your computer so you can find that file we are going to talk over in Section 3 about how to prepare that desktop file so that you can pull it into your QuickBooks Online file I’m going to leave it unchecked for now and click next the next thing you want to know is what would you like to do in QuickBooks and you might want to do a lot of these different things you might want to send and track invoices organize your expenses manage your inventory if you have retail sales you’ll want to choose this if you don’t you may want to leave that alone maybe track your bills track your sales tax pay your employees and track your hours so you can see that you can choose a few of these or all of these I’m going to click next at the bottom and now it says what is your role at the business are you an 
owner are you the bookkeeper maybe the employee usually the owner of the company is the one that sets up the file or it could have been that the accountant set it up for the owner whichever person set it up is actually going to be the admin or the administrator of the file meaning that you’ll actually own it I’m going to go ahead and choose owner in this case and if I scroll down a little bit you’ll see that it says do you have an accountant or bookkeeper right now and you don’t have to say yes even if you do have one it’s just asking you this because it’s going to set up some of the options as we go down the road for the accountant if you happen to have one I’m going to say right now that I’m just going to do all by myself and I’m going to say all set at the bottom and now what you have is a basic setup for your QuickBooks company file there’s still a lot we have to do because it’s really a blank company file right now but at least we have the file set up so that we can work on it if you wanted to go through a 30-second tour to help you get down to business on QuickBooks you can do that I’m going to go ahead and close that out and then that is what it looks like when you first log in you are on What’s called the dashboard right now and the dashboard is just a quick way to see an overview of how different areas of your company happen to be doing now I do want to take time and go through this whole what we call user interface here I want to do that over in section four so right now let me show you how to log out and then where to go to log back in if you notice in the top right hand corner of your screen you have a gear icon and you can use the sign out option right here and that will go ahead and take you back to the screen now let me navigate away from this and then we will come back and you will see how to log back in so I’m at Google Now and when you’re ready to log in you want to go to QuickBooks dot intuit.com which is where we were earlier when we first created our account and this time you want to go over here where it says sign in notice that you’ll sign up with the particular one that you actually subscribe to if you haven’t subscribed and you’re still using that 30 day free trial just use the QuickBooks online option which is the first one and then that’ll take you back in and you’ll be able to log in right over here and that’s all you need to know right now as far as setting up your company file well let’s go ahead now and talk real quick over in section three about how to actually go ahead and upload your QuickBooks desktop file if you wanted to bring it into your online account if you happen to have been using the QuickBooks desktop version and now you’d like to pull that data into the online version there’s a little process you need to go through and once you go through the process then you’ll want to run some reports to make sure that all your data pulled in let me go ahead and show you how this process works what you want to do is open up your company file in the desktop version and then go to your menu and click on company and down near the bottom you will see export company file to QuickBooks Online if you don’t see that option what it means is that you have some updates that you need to do in this version of QuickBooks before you can export this in order to do those updates just go to help and you’re going to see It’ll say update QuickBooks right here now go through that update process and when you’re finished close the company file then open it back up and when you do 
you should see the option to export your company file to QuickBooks Online now what’s going to happen here is it’s going to ask you to log into your online account once you click Start Your export here’s where you have to log in so I’ll go ahead and put in my email address again gmail.com and I’ll go ahead and put in my password and I’m going to sign in now what’s going to happen here is if it doesn’t recognize you then it’ll want to send a code to your email to confirm your account if you had plugged in your phone number when you set up your account then it could send you a text that way I’m going to go ahead and tell it to confirm and I’m going to flip over to my email and get that six digit code and I’ll be right back I’ve got my code now I’m going to go ahead and plug it in and then I’m going to hit continue at the bottom it’s going to ask me a couple of quick things before it can actually pull it up to the online version this particular one says do you want to bring over your inventory if you say yes you want to go ahead and select a date that you want to go ahead and pull it in from I’m going to go ahead and say I’ll pull it in from January of 2020 and I’ll go ahead and hit continue the next thing is it wants me to choose my existing QuickBooks Online company file now if I have more than one you’ll see them all listed here and I just choose the one that I want and then I’m going to go ahead and hit continue at the bottom and now it’s preparing my company file which is Larry’s landscaping and Garden Supply here this process could take a while what will happen is you will actually get an email from Intuit once this process is complete once it is complete what we’ll want to do is go ahead and open both the desktop version and the online version and run some reports now this one finished pretty quickly I’m going to go ahead and click on okay and now I should get an email from Intuit remember if you don’t get that email it’s not finished yet it might look like it’s frozen but it’s really not you’ll get it eventually now let me show you the reports that you want to run to compare your data in either version it doesn’t matter which one you do first you’re going to run a profit loss and a balance sheet in the desktop version you’ll go to reports company and Financial profit and loss standard now when the report comes up there’s a couple things that you need to do make sure the dates are all that way you capture everything in your company file also make sure you’re running this on an accrual basis right up here you’re going to want to run that one then you’re going to want to run the balance sheet you’re going to go back to reports company and financial and run a balance sheet standard make sure that you pick all dates you’ll have to scroll up to the top for that and make sure you’re running it on a cruel basis even if in real life use the cash basis for this report run it on accrual so that you can make sure you’ve got everything here now I’m gonna go flip over to the online version and show you where to go in there to pull those same reports back into online version the way you run your reports is if you notice on the left hand side over here you’ll see it says reports and then you’ll notice that both the balance sheet and the profit loss are already set under your favorites you can run either one first doesn’t really matter I’ll just start with the profit and loss and if you get screens like this that says customize go ahead and just close that out for now and what you’ll want to do is 
Let's go ahead now and move over into section four, and I'm going to give you a quick overview of how the screen looks; we call this your user interface. What I'd like to do in section four is give you a quick overview of what the user interface looks like: basically, when you pull up the screen, what is it you're seeing and how do all those pieces and parts work? Let's flip over to QuickBooks and I'll give you a quick overview. When you first log into your company file, you're going to be on what we call the dashboard, right here, and basically the dashboard is just a quick overview of how the different areas of your company are doing. A couple of things you'll notice: you have the ability to add your logo; you can just click here, search your computer for your logo, and pull it in. There are some things over here that it asks if you'd like to start doing, and you can click there; so if you wanted to start invoicing, for example, you could choose that option. You can also set up payments, send your first invoice, and swipe cards in person with the mobile app if you happen to have that set up. Now, some of these things are not free; they are fee based, meaning that Intuit will sell them to you. For example, if you'd like to be able to accept credit cards, they can set you up with a merchant services account. You'll notice down here are all of your invoice options, so think of this as your accounts receivable: this would be the amount of invoices that you have outstanding, that have not yet been paid. Over here are your expenses; those would be the bills that you've put in, and you'd be able to see any of those you had not paid yet as well. So think of this area as accounts receivable and this as accounts payable. Here where it says profit and loss, that is the most important report in QuickBooks, because it will tell you if you've made money or if you've lost money. And then over here you'll see all of your bank accounts, and if you wanted to click on one you could. Let's say, for example, I wanted to go to my checking account; this one here is called Checking, and if I double-click and open it up, that'll take me directly to the checkbook register. I'm going to close this message real quick; we're going to spend some time in the register a little bit later, but that's a quick way to get into the register. Now, if I wanted to go back to the dashboard, all I have to do is come back over to my navigation pane, which is what this is called, and click on the dashboard, and I'll be back to where I just was. This is called the navigation pane, on the left. If you went down to Banking, you would see all of your banking options. I'm going to click on Banking here, and what this is going to show me is
how to actually set up my account so that it is connected to the bank and I can pull in my transactions now I’m going to talk about this a little bit later but you don’t always want to do this if you are actually using the invoicing feature in QuickBooks like you should and receiving payments or if you’re using the bill feature and paying the bills correctly then you don’t want to enter information twice so you can set it up but you’re not always going to want to pull in data from the bank now if you had a credit card it would work perfectly now I know that probably doesn’t make sense yet but we will talk about some of that stuff a little bit later on over under expenses here expenses are things that you have to pay out you had to enter some bills you wrote some checks to pay those maybe use the purchase order system you can kind of see all that right down here the next thing is your sales these are the things that you actually sell to your customers you’ll notice that under sales I can look at invoices I can also look at my customer list as an example and I’ll just show you what that looks like real quick this is going to be a list of all of your customers and you can see that they’re set up alphabetical in this case by last name we’re going to spend time on customers a little bit later but that’s how you get to the customer list you also have different projects that you might work on in the desktop version they call these jobs in here they’re called projects you might have a particular customer you’re working with but you might have multiple jobs or projects going for that customer under the workers option this is where you’re going to find your payroll options there’s a ton of reports we’re going to be looking at a little bit later you’ve also got a place where you can go in and look at your sales tax and also payroll taxes if you’re doing payroll through QuickBooks you’ve got an option for mileage where you can go in and plug in which vehicle you drove which job you went to which customer you worked with those types of things accounting now this is going to be the one thing you want to remember how to get to because this is very important this chart of accounts is the backbone of all accounting when you spend money what did you spend the money for if you receive money was it a sale you made from your business you’re going to have to tell QuickBooks where all the money connects to in what we call the chart of accounts we are going to be going through the chart of accounts over in module three notice also this is where you would reconcile if you wanted to reconcile your credit card accounts or your bank accounts you do that right there you’re going to see my accountant which is the next one here if you actually have signed up with Intuit to use one of their QuickBooks bookkeeper accountant people that they have available then this is where you would go to actually find that person and invite them to be a partner with you as they call it there’s also some apps that work with QuickBooks you’ll see that right down here they’ve got a list of some of the more popular ones some of the trending ones or if you wanted to look through some of the different apps down here at the bottom and pick one that you might want to use you can do that down here where it says live bookkeeping this is actually where we were just talking about a few moments ago you could actually go to my accountant and have an accountant partner with you they also have what they call live bookkeeping where you can actually pay 
into it and you get a bookkeeper that will take care of your books for you and both of you have logins to your company file so you can both get in there and do your work and you can also have conversations so you’ll be able to talk to them live as well and then the last thing here is what we call a cash flow this is something new that will be coming to QuickBooks very soon it’s called the cash flow Center and you can see that it’s going to let you know what all is going on with your cash accounts meaning with your checking account and things like that and you get a screenshot over here if you want you can join the wait list and when the actual cash flow Center is ready to go then they will send you an email just give you some more information about it now one thing to keep in mind I did tell you Way Back in the beginning that if you happen to log in tomorrow and let’s say for example this new button here is not there look for it or look for the options that were underneath it already because they’ve just moved them somewhere else and that’s just something they did recently actually is they used to have those options right over here and then they moved them here recently now just to tell you what the new is anytime you want to create something new like a new invoice or maybe add a bill those types of things you’ll see that in this list and you’ll notice that where it says money in all of these options are your accounts receivable anything having to do with customers wears money out is your accounts payable and then where it says other these are just some other things that you can create new transactions in QuickBooks if you click on one of these particular options the other thing I want you to be aware of is what we call the gear and you’re going to see that right over here in the top right of your screen and we’re going to go through the gear options over in module three I just want to give you a quick overview of how this whole user interface looks and how it works now that you have a quick overview let’s head over into module three and that’s where I want to spend time talking to you about customizing your QuickBooks files we’re starting module 3 now and I want to start off this module by briefly just talking to you about the gear menu there’s going to be a lot of places in QuickBooks where you can customize things to make it work a little bit better for you but let’s go ahead and start there so that you can see what some of the options are and where you would go to make some changes to your company file the gear option is in the top right hand corner of your dashboard right here and you’re going to see that when I click on this it’s going to give you some lists that you can look at it’s going to give you some of the tools that are in QuickBooks you’re going to have access to some of your company options that you might want to go in and change like maybe the address things like that and also it’s got what they call a profile menu for your Intuit account right over here when you’re looking in here you want to get really really really familiar with a couple of these specifically this chart of accounts right here we’re going to spend more time on that later in this particular module specifically in section four we’ll get started with that but everything in QuickBooks will run through the chart of accounts that’s why it’s so important that you know where that is don’t confuse the gear icon with this new Option way over here this is where you’re going to click if you want to create a new 
transaction: maybe you need to create an invoice, maybe you need to create a bill, maybe you need to do something like track your mileage. Those are all new transactions you could create. Just recently they've moved this New button over here and renamed it; it used to be right up here, and everyone got confused between the gear icon and this, so now they've moved it way over here so that you don't get confused anymore, and it says New, meaning new transaction. That's really all I want you to know about the gear menu right now. What I want to do now is take you over into section two, where we're going to use the gear icon to customize some of the options for your company file. Hey, welcome back. Let's finish up talking about customizing your company file; this is actually part two of section two. Let's go back down to reminders, and I'm going to click the pencil icon. There are a couple of options that you want to be familiar with in here. You have the ability to set up invoice reminder emails: for example, if your customer is late paying an invoice, or you just have some reason you want to remind your customer to pay that invoice, then you can either use the standard message or edit the one that you see down here, and you'll see it basically says this is a reminder, we haven't received your payment yet. You do have the ability to insert a placeholder, which basically means that anywhere in here you can put in the company name or the invoice number; that's what they call a merge field, and it'll pull that information from QuickBooks. You can email yourself a copy, and you can save this when you're done. You also have options for online delivery down here at the bottom; these are email options for all of your sales forms. Your options here control what happens when you actually email your sales form over to your customer: do you want a short summary to show up in the email, or the full details in the email? You can also attach it as a PDF right here. An additional option you have is, if it's an online invoice, you can set it to HTML or have it show up in plain text, but you probably want to leave it on online invoice, and then click save. The last one I want to mention at the bottom here is statements, and I'll show you those options. A statement goes out at the end of the month, and basically it starts with the balance from the prior month, shows all of the transactions that month, and then what the customer owes at that point. It's really a gentle reminder for your customer to pay you. Statements are not mandatory, but they certainly do help when you're trying to collect money. You'll notice that when you print statements you have an option to list each transaction on a single line, or list each one including all the details on that particular one. You can also show the aging table at the bottom of the statement, and what that means is it will have a field that says 1 to 30 days, another one that says 31 to 60, and another that says 61 to 90, and that way your customer will know where they fall in that particular aging table. I'm going to hit save, and those are your options under Sales.
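To make that aging table idea concrete, here's a minimal sketch of how open balances fall into those 1-30, 31-60, and 61-90 day columns. The invoice numbers, amounts, and dates are made up for illustration, and this is generic bookkeeping logic; QuickBooks builds the table on the statement for you.

```python
# Minimal sketch: bucket open invoices into the aging columns a statement shows.
# Invoice data below is made up for illustration.
from datetime import date

open_invoices = [
    {"number": "1001", "due": date(2020, 1, 5),  "amount": 500.00},
    {"number": "1002", "due": date(2020, 2, 10), "amount": 250.00},
    {"number": "1003", "due": date(2020, 3, 1),  "amount": 125.00},
]

buckets = {"1-30": 0.0, "31-60": 0.0, "61-90": 0.0, "over 90": 0.0}
today = date(2020, 3, 15)

for inv in open_invoices:
    days_overdue = (today - inv["due"]).days
    if days_overdue <= 0:
        continue                      # not overdue yet, so it doesn't age
    elif days_overdue <= 30:
        buckets["1-30"] += inv["amount"]
    elif days_overdue <= 60:
        buckets["31-60"] += inv["amount"]
    elif days_overdue <= 90:
        buckets["61-90"] += inv["amount"]
    else:
        buckets["over 90"] += inv["amount"]

print(buckets)  # {'1-30': 125.0, '31-60': 250.0, '61-90': 500.0, 'over 90': 0.0}
```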
Let's look at the options for expenses. Expenses are things you have to pay, bills that come in the mail that you have to pay, for example. Your options are to show the item table on expense and purchase forms, track expenses and items by customer, make expenses and items billable, and set default bill payment terms. Make expenses and items billable: let me tell you what that means real quick. If you have to purchase a product or service and you want to make sure that you invoice your customer so you can get reimbursed for it, instead of just manually keeping those receipts in the car or on your desk, QuickBooks will remember those expenses, and when you're ready to invoice the customer you can just pull them in. You also have the ability to use the purchase order system, so if you don't use purchase orders you might decide to turn this off. You also have this option called messages at the bottom; this is a default message that will be sent when you send purchase orders. I'll open that one so you can see the default message; you can edit this to say anything you'd like, and then make sure you save it when you're done. The next tab on the left says payments; this would be customers paying you. There are a couple of things you can do here. One is, if you want, you can sign up with QuickBooks to receive payments quicker by going through a little service QuickBooks has that's very similar to the way PayPal or Zelle might work. You just sign up for it, and if you want to learn more you can click here. That way you can email an invoice to a customer, for example, they can click a button right there and pay you right then and there, and QuickBooks will automatically be updated once you're paid by your customer. If you already have some sort of existing account with Intuit, for example they have something called GoPayment or merchant services, you can connect it to your QuickBooks as well, right here where it says connect. The last option says Advanced, and there are several things here I'll just mention briefly. If you want to have the first month of your fiscal year start in September, say, you can change this; it's going to default to January, so make it correspond with the real start of your fiscal year. You might have a different date for the beginning of your income tax year, so you can go in and change that too. Your accounting method is set to accrual; when you run reports you can run them on an accrual basis or a cash basis. Accrual basically means that as soon as you invoice, it shows up as income whether the customer has paid you or not, and as soon as you enter a bill, it will show up as an expense whether you've paid it or not. If you change this to what they call cash basis, you will only show the income once you receive the money from the customer, and you'll only show the expense once you actually spend that money. I want to mention closing the books as well, because this is an option that you'll want to think about. In real-life accounting, what happens is you close the books at the end of the month and you close the books at the end of the year, and what that means is that if you want to make a change in a closed period, you can't do it; you need to make an offsetting entry in the current period. Your books are not closed automatically in QuickBooks, and it doesn't remind you or anything, so if you want to close them you have to come here, tell QuickBooks that you do want to close the books, and then set a closing date. Let's say you set it for December 31 of 2019; that means that later, when you're working in the next year and you see a change you want to make in 2019, prior to December 31, you're not going to be able to change it. I'm going to cancel out of that. We talked a little bit
about tax forms earlier you want to make sure you keep that one also the chart of accounts I know you haven’t seen it yet but basically everything in QuickBooks will run through this chart of accounts and currently they do not have general ledger numbers they’re just a list alphabetical per each type if you want to turn on general ledger numbers you can turn them on right here also you have some options for the markup income account and we’ll address that a little bit later let me mention real quick the track classes and the track locations locations means if you have different physical locations for your business you can turn this option on and every transaction that you work in you can choose which location you want that to go to classes is very similar except it’s not really locations think about this let’s say that you happen to have two different divisions of your company you might use those for your class list and everything you do make sure you pick the correct option from the list there’s some things about forms where you can have it pre-filled automatically automatically apply payments things like that you might want to look through that at some point a project would be like a job related activity you’ll notice that you have the ability to organize all of those job related activities in one place and that is turned on you’re also viewing QuickBooks in the business view there is an option to go ahead and also see this in what they call the accountant View time tracking if you want to be able to track the time that you or your employees spend working on different projects or jobs you have the ability to do that you can also come down and change the currency there’s some date options and things like that all the way down here that means there’s a lot of options in here that you can go through and set you’re going to want to look through these you don’t have to get them all right away but at some point if you want to set these you just come back in and make all these changes I’m going to go ahead and close that with the X at the top right and that’s going to take me back to the dashboard let’s go ahead and now move over to section three and I’ll show you how to manage users in section one we got familiar with the gear menu there were lots of different options in there that you can use to customize how your company file works and I want to take you into the gear menu now and into some of those options and show you how to customize some of those different things so that your company file works best for you let me go ahead and flip over to QuickBooks and we will Dive Right In I’m going to click on the gear icon and I want to start over in the First Column underneath where it says your company you’ll see an option that says account and settings these are going to be like preferences or options that you can turn on or off or edit in QuickBooks as we go down the tabs on the left and we’ll start with company if you want to change anything in these different sections just go over to the right and click on the little pencil icon and that will take you into the edit option you’ll notice here that I can add my company logo just by clicking the plus sign and that will let me navigate through my computer to find my logo I could edit the name of the company if I like also you might want to put in your EIN number or your social security number you’re going to use your social if you’re a sole proprietor and you really don’t have payroll if you do have payroll you’re going to have to have your EIN 
number in here so that QuickBooks can use it to help you with your payroll I’m going to hit save and that’s going to save that little section the next little section says company type and you’ll notice the first thing is the tax form and then the industry I’m going to click over on the pencil icon here and I want you to notice that you have the ability to add whatever type of tax form that you actually file when you do your taxes at the end of the year you can add that here now let me just make a little Point here you do not have to pick anything here as a matter of fact if you do not file your own taxes if you have an accountant then I would pick other or none every single time your accountant will know what type of tax form that you file if you pick any of these options what will happen is when you’re working in different places in QuickBooks there will be an extra field that says Which tax line on the tax form would you like to put this on you’re not going to have a clue if you’re not an accountant and you’ll just get stuck there every single time so why see that field and get yourself stuck I’m just gonna pick not sure other or none and the other thing is you have an industry you chose when you first set up your company file I chose Professional Services you can change that if you want but I’m going to leave that and hit save the next little section you’ll see here has to do with your company email you’re going to have your customer facing email the difference is that the company email is the private email that you like different things sent to from Intuit for example the customer facing email is the one you want the customer to see and that can actually be redirected to your company email if you don’t want to have to open 15 different email accounts you can always have as many email accounts as you want and redirect them to the one you’d like to funnel everything into there’s a place to put in your company phone and your company website here again you would edit that over on the right and the company address down at the bottom same thing you can have a company address that’s seen on the back end and then one that’s called customer facing meaning that’s the one that the customer actually sees let me go down on the left here to billing and subscription this is where you’re going to be able to go in and upgrade your existing subscription if you’d like you’ll notice that you can subscribe right from here and you can see all of the options and we’ve talked a little bit about these before so I’m not going to spend much time on that the next one on the left is usage there are some limits to some of these different subscriptions for example when you’re using the QuickBooks Online plus and you need more room you’ll want to go ahead and upgrade your subscription for example the one that I’m using only allows me one user if I want to add a user I may need to upgrade my subscription there is a number of items you can put in the chart of accounts as an example it’s 250. just a little FYI the desktop version allows you to have 14 500. 
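Since those usage caps are easy to bump into, here's a tiny sketch of the kind of headroom check you might keep in mind before adding a pile of new list items. The 250 chart-of-accounts limit, the 40 classes-and-locations limit mentioned in a moment, and the one-billable-user plan are the figures this course mentions; the current-usage counts are made-up placeholders, not real Intuit data.

```python
# Minimal sketch: warn before hitting subscription usage limits.
# Limits reflect the figures mentioned in this course; current counts are made up.
plan_limits = {"accounts": 250, "classes_and_locations": 40, "billable_users": 1}
current_usage = {"accounts": 248, "classes_and_locations": 12, "billable_users": 1}

for item, limit in plan_limits.items():
    used = current_usage.get(item, 0)
    remaining = limit - used
    if remaining <= 0:
        print(f"{item}: at the limit ({used}/{limit}); upgrading would be needed to add more")
    elif remaining <= 5:
        print(f"{item}: only {remaining} left before the {limit} cap")
```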
If I need 251, then I need to upgrade my subscription. And down here where it says classes and locations, you can have up to 40, and if you need more, that's another reason you would want to upgrade your subscription. I'm going to go over to Sales on the left, and there are several things in here that you may want to work with. Later on we'll look at customizing the look and feel of your different forms, for example if you have an invoice and you want to add your logo or something like that, but these are some things you can turn on or off right here in your forms. For example, let's say that you like to invoice your customers and you want the customers to automatically have terms of Net 10; well, currently they're Net 30. Let me click on the pencil icon and show you some of these. Here's where I could change this to one of the other preferred invoice terms. If you're not familiar with the ones that say 1% 10 Net 30 and 2% 10 Net 30, basically what that means is the customer's invoice will be due in 30 days, but if they pay within 10 days they can take one or two percent off; it's a way to get your customers to pay you early. If you have a preferred delivery method, you can choose it right here: do you like to print things now, or would you like to send things later? I'm referring to an invoice as an example. Here where it says shipping, if you don't ship anything then you can uncheck this and it will say off; if you do ship things, then you have some options down here. You can have sales reps and put in purchase order numbers, things like that; these are custom fields, and you can turn those off if you don't need them. If you want a PO number but not a sales rep, for example, just uncheck the sales rep option. You can have custom transaction numbers, which basically means that if you want to put in your own transaction numbers you can set that series up. You can have a service date field, and you can also have fields for discounts, deposits, or tips and gratuity if you use that; you can just go in and turn these on or off. I'm going to hit save right there. The next thing you're going to see is products and services, right over here, and you'll see there are several different options related to that which you can turn on or off; for example, if you don't track inventory, you can turn off this inventory option right here. I'm going to cancel that. There are also some options for late fees, and some for progress invoicing, and let me just mention what that is so you'll know. If you estimate jobs, construction is a prime example, you're going to want to take that estimate and turn it into an invoice at some point so that your customer can pay you. You do not have to pull everything from the estimate into an invoice; you can pull in 30 percent, for example, or maybe you want to pull certain items that were on that estimate into an invoice. If you do estimates, you will want progress invoicing. There are a couple more options here, so let me scroll down. There is an option for messages when you actually email a form, so let's say it's an invoice, or what they call a sales form; you have the ability to email it directly to your customer and you can set the default message. I'll click on that one so we can see: you can say Dear or you can say To, notice you can have a merge field if you want to have their full name or their last name, first name, and you can use the standard message shown here when you're actually sending out that email.
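Before moving on, here's a quick worked example of those early-payment terms from a moment ago. This is just a sketch with a made-up invoice amount and dates; it shows the arithmetic behind something like 2% 10 Net 30, not anything QuickBooks asks you to calculate yourself.

```python
# Minimal sketch: what "2% 10 Net 30" style terms mean in dollars.
# Amount and dates are made up; QuickBooks applies the terms on the invoice for you.
from datetime import date, timedelta

invoice_amount = 1000.00
invoice_date = date(2020, 3, 1)
discount_pct = 0.02          # 2% off if paid early...
discount_days = 10           # ...within 10 days of the invoice date
net_days = 30                # otherwise the full amount is due in 30 days

paid_on = date(2020, 3, 9)   # customer pays on day 8

if paid_on <= invoice_date + timedelta(days=discount_days):
    amount_due = invoice_amount * (1 - discount_pct)   # 980.00
else:
    amount_due = invoice_amount                         # 1000.00, due by day 30

print(f"Amount due if paid on {paid_on}: {amount_due:.2f}")
print(f"Full amount due no later than {invoice_date + timedelta(days=net_days)}")
```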
When you're actually sending out that email, you can also have a copy emailed to yourself every time if you'd like. I'm going to hit save down at the bottom. A couple more things I want to point out: you'll notice there is an option for reminders, and I want to briefly tell you how reminders work, but before we do that I think we'll stop the video right here. Let's go over into the next section, and I'm going to talk to you a little bit more about some of these other options for account and settings. One of the things that you will want to do is make sure that each person using QuickBooks has their own login; you'll give each person their own username and their own password. Only the administrator can add, edit, or delete users. You can have up to five users in your QuickBooks file; if you need additional ones, you can think about upgrading your subscription or purchasing the additional ones that you might need. The reason you want to have these additional users set up is that if you want to track down errors, you'll want to know who was logged in at the time that a particular transaction was changed. You can run an audit trail to see a report on which user was logged in, what the transaction used to look like, and what it looks like now. You also will be able to limit a user's access to certain areas of QuickBooks. Let's flip over and I will show you how to manage the users. Keep in mind that the administrator has to be logged in in order to work with the users. I'm going to go up to my gear icon in the top right-hand side of the screen, and in the first column you'll see an option that says manage users. The first thing I want to point out is that normally, if you're using one of the basic subscriptions, you only have five of what they call billable users in your plan, and like I mentioned earlier, you do have to upgrade if you want to have additional users. The admin is considered one of the five users, and you can see here is the admin name. If you wanted to edit the admin information, you can choose edit over on the right and then come over here to edit user settings. Typically, when you first set up QuickBooks, you're going to see the email address that you signed up with right here under the first name field, and there won't be anything where it says last name; you can come up here and change those like I did. I'm going to hit save at the bottom. If you want to add a user, just come over to the right here and click on add user. The user that you're setting up can have what they call standard rights, which means that you can choose to give them full rights but you can also choose to limit their access to certain areas, or you can give them company admin rights, which means they have access to everything. These options up here count toward the five-user limit we discussed; these down here don't, because you might have someone you want to be able to log in just to run reports or maybe just to do time tracking. I'm going to choose standard user at the top and click next in the bottom right-hand corner of the screen. The first thing it asks me is how much access I want to give this user. If you say all, notice it will include payroll and it will check that, and you can see all the different things they have access to over here. If I uncheck that, you'll notice that the things they don't have access to do are down at the bottom; they can't add or edit employees in this case, or delete payroll transactions. It could be that
I say none I don’t want them to have rights to any of these accounting features but they can still manage other things in QuickBooks like submitting their own timesheets things like that I can also give them limited access meaning if I want them to be able to do things with customers you can see the choices here or if I want them to do things with vendors or both you can see I can check both and you’ll see all the options here as well I’m going to go ahead and click next and now it asks me do you want this user to add edit and remove users we’re going to say no and I wouldn’t change that because you want the administrator to have rights to everyone if you start giving everyone full rights to change users and that sort of thing then what’s the point in even setting those up notice that you can also give this person permission to edit the company info or if you want them to manage the subscriptions you have the ability to do that as well I’m going to go ahead and click next at the bottom then it says we’ll invite them to create a QuickBooks account and password for each to access the company file what you’re basically going to do here is the new user you’d like to set up you’re going to put in their first and last name here put in their email and they will actually get an email saying that the administrator would like them to become a user they would accept and then they can set up the username and password make sure that the administrator knows that information you wouldn’t want to have an employee that has their own username and login information and the administrator doesn’t know what it is and can’t have access to what they do that’s going to be very important for you I’m going to go ahead and just X out of this because at this point what would happen is once they accept it then you would see their name down here as a user and that’s really how the users will work I do want to show you the audit trail report but we’ll do that when we get over to reports a little bit later let’s go ahead and wrap up this section and let’s talk over in section 4 about the chart of accounts we’re getting ready now here in section four to talk about the chart of accounts the chart of accounts is probably the most important thing in QuickBooks every single thing in QuickBooks will flow through the chart of accounts somewhere the chart of accounts is basically a listing of different areas where you might actually spend money or you might receive money as well and you want to make sure that your information goes into the correct categories that way when you run reports you have accurate information there is a part two to the chart of accounts make sure you watch both parts so that you have a really good understanding of how the chart of accounts Works let’s go ahead and flip over to QuickBooks and we’ll get started there are a few different ways to get to the chart of accounts one way is to go through the gear icon and in the First Column you will see the chart of accounts another way is just go over to the left and click on accounting and there you will see the chart of accounts as well and this is what your chart of accounts looks like remember that every single thing in QuickBooks runs through one of these that’s why it’s so important that this is set up correctly but let me quickly give you a quick overview of how the screen looks you’ll notice the First Column is the name of your account when you’re setting these up you can pretty much name these accounts anything you’d like but make sure that you name 
them something that makes sense to you or whoever happens to be looking at your reports the second column is the type and notice that it’s currently sorted by type we’re going to be going through all the different types so that you will know which ones you need to set up to make sure that you have everything you need I want you to notice also that when you’re looking down this list and you see the type for example income notice that the names are alphabetical and that will be true of each of these types if you look at the expenses here then these are alphabetical if you want to turn on general ledger numbers you have the option to do that and let me show you what it will look like if you do you’ll actually go up to your gear icon then you’re going to go into account and settings then you’ll want to choose Advanced on the left hand side and under Advanced you’re going to see an option that says enable account numbers right here currently they’re off if I choose on just by clicking there and choose enable account numbers notice that turned it on and I can show account numbers and then save and I’ll show you what this looks like now let me go ahead and close this and you’re going to see now that you’ll have a new column at the beginning here where you can actually go through and put in your own account numbers there will not be in here automatically you have to add them the way you would add them is the way you would edit any of these accounts just come over to that line that the account you want to edit is on you’ll see a down arrow and you’ll choose edit and in this window you have the opportunity to add your new account number right here I’m going to go through this screen in a minute so I’ll just close that for now and I’m going to go back and turn those account numbers off for just a moment account and settings it’s going to be the advanced option on the left and I’m going to go ahead and turn off enable account numbers just by clicking that word on and I’ll uncheck these and save that now let’s go back and finish going down our list here the next column that you’re going to see next to the type is going to be the detail type and that’s just telling you a little bit more information about the type that you chose there’s also the QuickBooks balance and the bank balance the QuickBooks balance would be if you had entered some transactions in QuickBooks what is that balance for that account the bank balance column will allow you if you pull in your entries from the bank that’s called downloading your transactions then you’ll see what that balance is as well and you can see that you can match those up and see if you’re in sync here and we’ll be getting into that a little bit later but right now what I want to do is start talking to you about the different types of accounts you’d want to add and then we’ll add a few so you can get a feel over how this works the first type that I want to talk about are bank accounts you’ll notice there are no bank accounts at the top of this list that means that right now you do not have a checking account you don’t have a savings account any kind of bank account you have the ability to add these you’re going to be doing this by clicking over here where it says new and the
first thing you’re going to do is pick the account type in this case it will be bank but notice all the other types we’re going to be talking about where it says detail type right here just go ahead and pick the option from the list that closely matches the account type you’ve chosen over here is the name of the account when you name your account you can name it anything you want if you want to call it your bank name you could do that if you want to call it operating account and maybe another one called payroll account you can do that just whatever you like to name it I’m going to go ahead and leave it on checking for now description is totally optional and there’s not a sub account right now but later on I’ll talk to you about how sub accounts work the other thing you want to think about is a start date to start tracking the money in this checking account now what you want to do is if it’s the beginning of the year like it is currently then you probably want to go back to January 1 or the beginning of your fiscal year and put in all the entries if that’s the case then you can choose beginning of this year you might decide if it’s later in the year and you’re starting your QuickBooks file to go ahead and start with the beginning of that current month you can also pick other the reason you might want to pick other is what if you have a bank statement that cuts off in the middle of the month that would be an option where you can tell it a specific date to start with it really doesn’t matter what date you start with just try to make it correspond to the start date of your bank statement I’m going to go ahead and say beginning of this year and I’m going to go ahead and put in the balance the ending balance on 12 31 of 2019 would be the exact same number as the beginning balance of 1 1 2020. 
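Just to make the opening balance idea concrete before we save it, here's a tiny sketch of the double entry that an opening balance amounts to in general bookkeeping terms. The 250.62 is the figure we're about to enter in the video; the sketch illustrates the concept, not QuickBooks' internal records.

```python
# Minimal sketch: an opening balance is a two-sided entry in general bookkeeping terms.
# Debits must equal credits; QuickBooks makes the offsetting entry for you automatically.
opening_entry = [
    {"account": "Checking",               "debit": 250.62, "credit": 0.00},
    {"account": "Opening Balance Equity", "debit": 0.00,   "credit": 250.62},
]

total_debits = sum(line["debit"] for line in opening_entry)
total_credits = sum(line["credit"] for line in opening_entry)
assert total_debits == total_credits, "a journal entry has to balance"

for line in opening_entry:
    print(f'{line["account"]:<24} debit {line["debit"]:>8.2f}  credit {line["credit"]:>8.2f}')
```

That offsetting credit is why an account called Opening Balance Equity shows up on its own, which is exactly what the next step walks through.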
You'll want to get that number from your bank statement. I'm going to say it was 250.62, and I'll hit save and close at the bottom, and now you'll see at the top you have a checking account with a balance of 250 dollars and 62 cents. You also have the ability to view the register, and I just want to show you that real quick: if I come over here to view register, this is what the checkbook register actually looks like. I'm going to close this; this is just some information on what a register looks like and what it's supposed to do. Anytime you want to go back to the chart of accounts, you'll notice right up here it says back to chart of accounts, and that will take me back to that screen. Now, I also want you to notice something that happened. Anytime in accounting you do something like this, you will have a debit and a credit for that transaction, and you'll notice that in this case the flip side of the money went to an account called opening balance equity, and that's the way it should be; you can't change that. Just know that whenever you have a starting balance, which could be a negative number if you have a loan, this is going to be an accurate picture of what your books look like, so don't freak out if you see a negative number there. Let's talk about some other bank accounts. You obviously would have savings, money market accounts, things like that, but think about it: do you have PayPal, do you accept Square? Those are bank accounts as well, so if you do accept those, you want to set them up as bank accounts. What would eventually happen is you would transfer money from PayPal or Square into your checking account, or sometimes you go the other way, but those are just bank transfers. The next type I want to talk to you about are your assets, and you'll see there are a couple of these here. An asset is something that your business owns that makes it more valuable: you're going to have equipment, chairs, desks, lamps, vehicles, property. Assets fall into one of two types. There are fixed assets, which are things you plan to keep long term, like the vehicle or the property, and then there are what we call more liquid assets, or QuickBooks calls them other current assets; inventory is a great example of that, because I'm worth more right now because I have inventory in the back room, but my goal is to sell it and get it out the door. Now, what's going to happen with assets is you're going to set up big-bucket categories, and what I mean by that is, instead of listing each vehicle the business owns, you're going to have a category called vehicles and they'll all dump into that one category. You don't want to have a hundred different categories for your assets, because no one wants to sit there and look at that; just have big buckets, maybe seven to ten good ones. Some of the common ones that I see are vehicles, you might have one called furniture and fixtures, you might have one called property if you have a lot of property, you might have equipment, but again, they're just big buckets. This is where the accountant is going to be very helpful to you, because the accountant is going to help you decide which categories to set up, and also, when you start talking about the money part, the accountant is going to help you plug in how much the vehicles were worth, and depreciation, and things like that. QuickBooks does not do depreciation, and that's because there are multiple ways to do it; if you had 10 accountants, they might all tell you a different way they want it done. So just
know you want to have the account set up so that if you go to the bank to get a loan for example the bank will know that you do have some assets we’re going to move down the list but before we do I want to go ahead and stop the video here and have you go over into part two and we will continue talking about the different types that you’ll need to set up we’re in section four and we’ve just talked a little bit about how to start setting up some of the accounts in your chart of accounts I want to go ahead and finish telling you how to set up the rest of those accounts so let’s flip over to QuickBooks and we’ll keep going the next type that I want to talk to you about are what we call your liabilities a liability is something you owe now I’m not talking about the monthly payments you have to pay for the electric bill things like that what I’m talking about is a loan you’ve taken out of the bank when you think about your liabilities there are really two types there are what we call long term which are things that you’re going to pay on for more than 12 or 13 months and then there are short term which is under a year basically and QuickBooks calls them other current liabilities when you set up your liabilities you want to set up a separate one for each loan that you have and when I say loan it could be a car loan it could be you as the owner decide to set up a loan where you put money into the business and you want to get paid back you might have borrowed from the bank those are all different liabilities you want to set up and each one should be set up separately let’s set up a car payment so you can get an idea of how you would set up these accounts I’m going to click on new and the first thing it asks me is to pick the account type here you’re going to see your other current which is the short term and your long-term liabilities which is what I’m going to choose in this case another thing to notice is that where it says detail type if you see notes payable that’s just an accounting term for a loan basically over where the name goes this is where you’re going to pick the name that you want for your loan I’m going to call mine the bank of any City but you can call this anything you want sometimes what I’ll see people do is put the last couple digits of their account number here just so that they can see which loan they’re looking at when they have it pulled up in front of them description is totally optional this is where you might say that this is my 2019 Jeep Cherokee loan or you can leave it blank and then we’re going to have to pick a start date now remember if we’re starting our company the beginning of the year what you’ll want to do is for your beginning balance here find out what the amount you owed as of in this case January 1 was so that you can plug that in it’s not the amount you started with when you purchased the vehicle two years ago it’s the amount you owed as of the start date of your company file I’m going to put fifteen thousand dollars in here and then hit save and close at the bottom and now what you’re going to see is that you have a loan right here notice it says Bank of any City and the balance is fifteen thousand dollars every time you make a payment on the loan this is the account you want to put it to your car payment is not an expense to the business do not set it up as an expense account it is a loan and you want to know when you’ve paid off the loan you’ll also be able to on each individual check that you write you’re going to be able to put in how much is 
principal and how much is interest there the other thing I want you to notice here is look at your opening balance Equity it’s a negative number and that’s because I said that when you owe something it’s a negative number and this is an accurate picture of what your books look like let’s talk about credit cards a little bit and I’m talking about credit cards that your business uses to purchase items for the business this has nothing to do with accepting customer credit cards you want to set each credit card account up separately so that you can track each one I’m going to go ahead and set up a new one for you the type is going to be credit card I’m going to name my card I’ll just call it visa and I’m going to pick the start date of my company file the beginning of this year and the starting balance would be the starting balance from my January bank statement if you don’t have your January bank statement then you can grab your December of 2019 and plug that number in I’m going to say it was twenty five hundred dollars and I’ll save and close and now what you should see is that you have a credit card and you owe 2500 when you make a payment towards the credit card you’re going to actually put it to this account always do not try to break it up that so much is gas and so much was meals because there is another way to do that in QuickBooks all the payments go directly to the credit card account going down the list the next type that I want to mention is where you see Equity Equity basically means equal if you think about it you’re the owner of the company when you take money out that’s considered an owner draw when you put money in it’s considered an owner contribution now they’ve got some other terminology you’ll notice owner investment is when the owner puts money into the business and owner pay and personal expenses is when the owner takes money out of the business what I don’t want you to do is make a deposit from your personal account and consider it income it is not income to the business it’s considered equity the next type that I want to talk to you about are your income accounts I’m going to scroll down just a little bit here you’re going to see that there is one called sales income and typically when you make a sale for your business this is the account that you want your sales to dump into you can have a few extra ones added if you’d like maybe you have different areas that you do business in and what I mean by that is in your company maybe you have different things that you do and you’ll want to actually set those up so you can see how much income you’re getting from each one that’s certainly okay keep that list pretty short though no one wants to Read 50 different lines of income accounts but sales is normally where you’ll see most of that go into the next thing I want to mention is your cost of goods sold think about the things that you have to buy to make a product or service in your business you have to buy materials you’re going to need subcontractors sometimes maybe you have Freight that’s part of that anything that you have to buy to make a product or service in your business is considered a cost of goods sold and you want it to show up on a profit loss as being deducted from your total income the largest grouping that you’re going to see are your expense accounts and you’ll see there are a ton of these here you’ll also add a lot of these let me show you how to add some sub-accounts so you’ll see how this works let’s use car and truck as an example we have the main account 
here but I want to add a sub account called gas and maybe another one called repairs and maintenance all you’re going to do is go back up to the new option in the top right of your screen here you are going to create a new account and it has to be the same type as the main account so that means this one has to be an expense account and in this case it is Auto and I want to name this one fuel what I’m going to do is check it’s a sub account of and then from this list here what I’m going to end up picking is car and truck so see how the fuel is a sub account of car and truck and when I click save and close at the bottom I want to show you how sub-accounts look see how fuel looks like it’s indented a little bit that’s a sub account and there’s going to be a lot of these you’ll want to add when you’re going down this list think about insurance you might have general liability you might have auto insurance those would be sub-accounts of insurance going down under legal and professional fees often what I’ll see is accounting and the attorney will be sub accounts of legal and professional fees and you can just come down and make this list look any way you want there’s utilities at the bottom you’ll want to have telephone underneath gas electric any of the utilities that the business pays that’s going to give you a quick overview of setting up your chart of accounts make sure you spend some time on this and get it set up the way you want it you want it to be as detailed as you need it to be to get your numbers but don’t make it so detailed that no one wants to read it you can always set up accounts later as you go along as well because you’re not going to catch them all right away well that’s going to wrap up the chart of accounts let’s go ahead now and move over into module 4 and we’re going to talk a little bit about working with accounts receivable and we’ll start with learning how to set up customers foreign we are getting ready to start talking a little bit about the accounts receivable portion of QuickBooks if you’re not familiar with that term anything happen to do with customers in your business that’s called accounts receivable customers are people or businesses that buy from you you’re typically going to make a sale and that’s going to be income to your business we need to talk first of all about how the customer list is set up and then once we do that we’ll go into the second section and I’ll show you how to add some customers to this list the way you’re going to get to a list of your customers is to go over to your navigation bar on the left point to sales and then you’ll see customers in the list at the very top of your customer list here you can see the dollar amount for any of these categories you can see that for open invoices I have 35 810 these are invoices that have been created and the customer has not paid them even if they owe a penny if you had any of this amount that was overdue you would see that over here and overdue means that if you had set specific terms on an invoice let’s say net 30 meaning that invoice was due in 30 days and the customer has not paid you by then then some of this moves over into the overdue category you can also see how much was paid in the last 30 days in this case nothing there’s also a dollar amount for what we call unbilled activity and for estimates sometimes you will get little messages like you see here right above your actual customer list you can close this with the little X in the right hand corner and then you can actually see the list 
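Those totals at the top of the customer screen are really just sums over your invoices. Here's a rough sketch of the idea with made-up invoices; the 35,810 figure in the video comes from the sample company's data, not from this sketch, and QuickBooks does this math for you.

```python
# Minimal sketch: the totals above the customer list are just sums over invoices.
# Invoice data is made up for illustration.
from datetime import date, timedelta

today = date(2020, 3, 15)
invoices = [
    {"amount": 500.00, "due": date(2020, 2, 1),  "paid_on": None},              # open and overdue
    {"amount": 750.00, "due": date(2020, 4, 1),  "paid_on": None},              # open, not yet due
    {"amount": 300.00, "due": date(2020, 2, 20), "paid_on": date(2020, 3, 1)},  # paid recently
]

open_total = sum(i["amount"] for i in invoices if i["paid_on"] is None)
overdue_total = sum(i["amount"] for i in invoices if i["paid_on"] is None and i["due"] < today)
paid_last_30 = sum(i["amount"] for i in invoices
                   if i["paid_on"] and today - i["paid_on"] <= timedelta(days=30))

print(f"Open invoices:     {open_total:.2f}")     # 1250.00
print(f"Overdue:           {overdue_total:.2f}")  # 500.00
print(f"Paid last 30 days: {paid_last_30:.2f}")   # 300.00
```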
the customer list is set up alphabetical by last name you’ll notice that when you look down this list the list is set up by last name comma first name you’ll want to do that because it’s a lot easier to find someone when you’re looking for them if it’s set up that way obviously a business like Adam’s candy shop would not have a last name and a first name so it just sorts the first letter which is a in this case with the A’s in the list if you see a little envelope to the right of some of your customers that means that if you click there you can actually send that customer an email if their email address was set up when you actually set up the customer then it would pull it like you see here if not you’re going to notice that there’s no envelope meaning that you have not put in a customer email address in the customer information these are what we call sub customers sometimes they’re called jobs sometimes they’re called projects and the online version the technical term is a sub customer they’re both sub customers of in this case Mike ballick you can also see the customer’s phone number and the open balance meaning how much money that customer owes you there’s also a column for actions if you happen to be on this screen and you’d like to take one of these actions related to this customer you could do that for example you could go ahead and create a statement or an invoice right from here just know that you’re not always going to be on this screen when you want to do those things so there are other ways to do those actions notice you can also search for a customer right up here so if you want to look for Mike ballick for example you can start typing the last few characters of their name and then you’ll see that it pops up in the list over on the right hand side you have the ability to print a list of your customers you can also export this list to excel if you’d like and there’s also some settings and I want to click on those for a moment because I want to show you that these are going to be the columns that you see up here if I’d like to see their email address I just choose that and notice now I have a column for email or if I’d like to see their address I can click on that and now I see their address as well it’s just how do you want to actually look at this list another thing I’ll mention way back over on the left where it says batch actions if you have multiple customers selected from the drop down list you can actually email all three of these customers or you can make them inactive an inactive customer is a customer you’ve used in the past therefore you can’t to delete them but if you haven’t seen them in a long time and you just want to hide them from the list you can make them inactive that’s a quick overview of the screen itself here what I’d like to do now is take you over into the second section and show you how to add a customer to the customer list now that you’ve gotten a quick overview of what the customer list looks like and how it works we need to start adding some customers to this list let’s go ahead and flip over to QuickBooks and I will show you how to add your first customer if you’re in your customer list you’ll notice in the top right hand corner there’s an option to add a new customer there’s also a down arrow to the right of that because you can also choose to import customers what if you have a list of customers already set up in Excel you can just import those so that you don’t have to set them up one at a time we are going to talk about importing customers from 
Excel over in Section 5 of this module for now I'm going to click on new customer we're going to put in some customer information on this screen and you'll notice the first thing it asks for is the company name remember that a customer can be a company a customer can also be an individual that works for a company or it could just be an individual and there's no company in that case you wouldn't put anything in the company name field but I'm going to go ahead and put in this customer's company it's BRC Supplies and you'll notice that because I've typed a B it starts pulling down a list of different companies that it's found and their addresses so if I want to just choose one of these I could but since I'm typing this in from scratch I'm going to go ahead and just click over to the side somewhere and then I can come back over and finish filling this out this customer's name is Tom Allen and notice it now has a display name as Tom Allen you can change that display name to show as the company name or last name comma first name display name means the way you see it in the list over here remember we talked about the easiest way to find customers is to type it in last name comma first name just be consistent with however you decide to do it it's also going to use the display name as the way it's going to put their name on checks it looks kind of funny if I write a check to Allen comma Tom I'm going to uncheck the box and then I'm going to put in Tom Allen and that way that's how his checks will appear when we put his name in there over on the right you see I can type in an email I'll just say Tom at Yahoo you can also put in Tom's phone number Tom's mobile fax other and website also if this is a sub customer I can check the box and choose the customer that he is a sub customer of we're actually going to do that over in section four now notice I'm on the address tab and this is where I'm going to put in the billing address I'm going to go ahead and say he's at 123 and we're going to say this is Billings Road this is going to be in the city of Bayshore the state is California and the zip code is 94326 the country we'll just say USA now you'll notice that the shipping address is the exact same as the billing address if you don't ship anything you don't have to worry about that but if you do they may want their invoices going to one address and the actual things you're shipping going to a different address you can uncheck the box and change that information if you need to I'm going to go ahead and hit save and let's see if Tom Allen is in the list he should be in here and here's Allen comma Tom and he is you can see it right over here now I want to mention a couple of things when you're in this list if you have a customer that you accidentally put in you cannot delete that customer the online version of QuickBooks lets you make a customer inactive meaning you hide them from the list but you cannot delete them now what you're going to see is all the information you had set up about Tom Allen notice a couple of things you can add some notes if you want to this customer's record I'll just click add notes just to show you let's just say that we can put in here this is a new customer for the year 2020.
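For readers who would rather script this step than click through the new customer screen, QuickBooks Online also exposes a REST Accounting API. Below is a minimal sketch, assuming you already have an OAuth2 access token and your company's realm ID; the token, realm ID, and email address shown are placeholders rather than values from the course file.

```python
# Minimal sketch: create the same customer through the QuickBooks Online
# Accounting API instead of the new customer screen. The realm ID, access
# token, and email address are placeholders.
import requests

REALM_ID = "1234567890"                      # your company (realm) ID
ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"    # obtained through Intuit OAuth2
BASE = f"https://sandbox-quickbooks.api.intuit.com/v3/company/{REALM_ID}"
HEADERS = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

customer = {
    "DisplayName": "Tom Allen",              # how the name appears in the customer list
    "GivenName": "Tom",
    "FamilyName": "Allen",
    "CompanyName": "BRC Supplies",
    "PrimaryEmailAddr": {"Address": "tom@example.com"},   # placeholder email
    "BillAddr": {
        "Line1": "123 Billings Road",
        "City": "Bayshore",
        "CountrySubDivisionCode": "CA",
        "PostalCode": "94326",
        "Country": "USA",
    },
}

resp = requests.post(f"{BASE}/customer", json=customer, headers=HEADERS)
resp.raise_for_status()
print(resp.json()["Customer"]["Id"])         # QuickBooks assigns the Id and SyncToken
```

A sub customer, which section three covers next, is the same call with three extra fields on the body: "ParentRef": {"value": "<parent customer Id>"}, "Job": true, and "BillWithParent": true.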
and that’s a note that’s all you have to do there and now that will show up anytime you want to see that notice also there is a tab for the transaction list down here and obviously we have no transactions yet but as we start creating invoices receiving payments that sort of thing they’ll show up right down here the projects these are going to be the jobs that we actually do for these customers there aren’t any set up yet here we can also see customer details this is where we’ve got all of the information we had just set up for this customer and then also if there were any late fees you would see that listed here as well and that’s how you set up a new customer I’m going to hit this customers link here in the top left and that’ll take me back to the list of customers and notice Tom Allen has an envelope to the right because we did put in an email address for Tom Allen that’s pretty much how you put in a new customer what I want to do now is take you over into section three and I’m going to show you how to add a sub customer now that you know how to add a customer let’s talk about how to add sub customers a sub customer is basically going to be a way of adding a level underneath your main customer if you have different jobs that you work on for a particular customer you can actually separate those jobs by actually creating sub customers then you can look at reports for the entire customer but also per sub customer in the desktop version of QuickBooks they actually call these jobs sometimes they’re called projects here like I said the terminology is a sub customer let me show you how to set those up to create a sub customer we’re still going to use the new customer button because it is a new customer we’re setting up they’re just a sub of another one we already have existing when this comes up all you want to do is put in the display name you can see in their case they’re using street addresses I’m going to make up one 124 and this will be Scottsdale Drive and the only other thing you want to do is come over here where it says is sub customer and underneath it pick the customer that you want to apply this to you can also see it says bill with parent over on the right and you do want to keep this together you can also build this customer individually we’re going to leave it on build with parent and you’ll see that it actually pre-populated all of this information from Tom Allen’s setup we don’t need to change any of this unless it happens to be different we’re just going to hit save and then that’s going to be our next level we’ve created you can see it right over here and that’s all there is to creating a sub customer now a couple things you need to know if you’re using the sub customer feature you want to use it consistently throughout QuickBooks if you don’t let’s say that you’re working on some transaction and instead of picking 124 Scottsdale I picked Tom Allen it’s still going to go to the right main customer but on reports you’ll see other and you’ll go what’s going on you want to make sure that you put it to the correct sub customer if you’re using the sub customers option not every business will use the sub customer option but it’s really great if you need to break out different projects like I said or maybe you have different locations for a particular customer things like that that’s really all there is to working with sub customers let’s go ahead now and move over into section four and I will show you how to edit an existing customer once you have a few customers set up you’re 
going to realize that you need to edit some of the information maybe you actually set up the information wrong or it could be that the customer information has changed maybe they've moved they have a new address you want to add a website you can always go in and edit the information about your customers and it's a pretty easy process so let me show you how that works if you want to edit a customer's information make sure you click on your customer over on the customer list and then come over here and choose this edit option let's say that Tom's email really includes his last name I'm going to go ahead and add Allen and let's say that when we set this up we did not look at any of these tabs so let's go through those real quick so that you'll know what other types of information you might want to set up when you're setting up your customers we've looked at the address tab now let's take a peek at notes this is where you can add any notes you would like pertaining to this customer the best thing to do is just drop down after the previous note pop in the date and then pop in any notes that you might have pertaining to this particular customer the third tab over says tax info this has to do with collecting sales tax from your customer if you sell physical items and you charge sales tax you need to tell QuickBooks that this customer is taxable and what the default tax code is we're going to talk about sales tax later on but just to give you a heads up what you're going to have to do is when you're setting up the items or the different things you sell they actually call them products and services in here then what's going to happen is you're going to set up each sales tax you need to collect and then you'll group them together to create one big tax so that you're charging the customer correctly and that's where you would put that tax code let's say that it happens to be the San Domingo we'll choose that and now what will happen is when we invoice a customer it'll pull that tax code automatically the next one is payment and billing a lot of this is just information for you first of all the preferred payment method does this customer usually like to pay you with cash check barter MasterCard you can kind of see the list there if you need to add a new way that the customer can pay maybe PayPal Square you can just hit add new right here and then you can actually type in the name of that new payment method and then hit save and now PayPal will be on the list since I just added it it does not mean that the customer pays you with PayPal every time it just means that's their preferred way they typically like to pay you preferred delivery method how do they like to have their invoice sent to them do they like to have them printed and you can print them later or do they like to have them sent via email where you can send those later or do they not have a preference terms you can have different due date terms per customer if you have a really good customer you might have net 30.
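The edits described in this section can also be made programmatically. Here is a minimal sketch of a sparse update through the same API, assuming the customer already exists; the realm ID, token, email, and the term Id used for net 30 are placeholders, and "sparse": true tells QuickBooks to change only the listed fields.

```python
# Minimal sketch: edit an existing customer with a sparse update.
# The Id and SyncToken must come from a prior read; everything else
# shown here (realm ID, token, email, term Id) is a placeholder.
import requests

REALM_ID = "1234567890"
ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"
BASE = f"https://sandbox-quickbooks.api.intuit.com/v3/company/{REALM_ID}"
HEADERS = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# Look the customer up first so we have the current Id and SyncToken.
query = "SELECT * FROM Customer WHERE DisplayName = 'Tom Allen'"
found = requests.get(f"{BASE}/query", params={"query": query}, headers=HEADERS)
found.raise_for_status()
cust = found.json()["QueryResponse"]["Customer"][0]

update = {
    "Id": cust["Id"],
    "SyncToken": cust["SyncToken"],
    "sparse": True,                                        # change only these fields
    "PrimaryEmailAddr": {"Address": "tom.allen@example.com"},
    "SalesTermRef": {"value": "3"},                        # e.g. the Id of a Net 30 term
}
resp = requests.post(f"{BASE}/customer", json=update, headers=HEADERS)
resp.raise_for_status()
```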
if you have a customer that's brand new you might say due upon receipt now let me tell you why you have this opening balance field here as of the start date of your company file how much money did this customer owe you let's just say it was a thousand dollars you could plug that in there and the accounting would be correct but you would not have a way to go back and see that that was actually three separate invoices you sent them that total a thousand dollars I'd like to go back and put each invoice in that the customer still owed me for and not fill this in but whichever way works for you would be fine as long as the numbers come out correctly and then of course if you did put an opening balance you would put in the date that that would be as of which would really be the start date of your company file the next tab over says language this will default to English but if you'd like to send your invoices to the customer in French Spanish Italian you can see the list there you can also have attachments for this customer what this would be is let's say that you had some sort of file that you actually created or it could be a bill that you received something related to this customer instead of having to get out of QuickBooks and search your computer for that file you'll be able to open it right here from QuickBooks because it will be attached if I click on attachments here it will allow me to search my computer and find that file and attach it here the last tab says additional info you might have different ways you like to categorize your customers you'll notice they have commercial customers and residential customers but you can set up this list any way you like it and that's going to be how those tabs work right there all you have to do when you've changed all the information is hit save and that information is now saved in QuickBooks now one quick thing I want to mention to you especially about sales tax if you have a customer you have set up where you charge a certain tax and later you go back and edit the information and change it it's not going to go back to any prior invoices and change that sales tax it's only for any new ones you create from this point forward well that's how you're going to edit your customers let's go ahead now and move over into section five and I'll talk to you a little bit about making customers inactive one of the things you can do in QuickBooks is with any list you're working in whether it be customers chart of accounts vendors it doesn't matter if you have an item on that list that you'd like to temporarily hide from the list you can make it inactive inactive customers actually will not show up when you're working in other areas of QuickBooks but if you wanted to actually turn them back on you could go and activate them again let me go ahead and show you how to make a customer inactive and then you'll know how to do it when you're working in other lists in QuickBooks as well let's say that I want to make Edward Blackwell inactive notice that Edward still owes me $1,125 I wouldn't want to make him inactive because I'm going to be using this customer he's going to make a payment and I'll need to be able to apply money typically you make a customer inactive if you haven't seen them for a while or maybe it was someone you set up and you never used those would be reasons to make a customer inactive let me show you what happens if you try to make Mr Blackwell inactive and he still owes you money I'm going to go ahead and click on edit and here's where you go to
make a customer inactive now this message is telling me that Edward still owes me money he has a non-zero balance if I say yes and make him inactive then QuickBooks will create an adjusting entry so that he doesn't owe me money anymore I'm going to click yes and notice there's a little error message and that's because this customer has what's called a recurring template that's being used anytime you have something recurring you have to actually delete the recurrence of that whatever it is before you can go in and delete the customer in this case I'm going to go ahead and hit cancel at the bottom and let's use a different customer Tom Allen because he doesn't owe us any money we just set him up I'm going to go ahead and click on Tom Allen and then click edit on the right make inactive and now it tells me that Tom has sub customers or projects making him inactive will also make all his sub customers and projects inactive and that's what you want I'm going to click yes and now you'll notice it says Tom Allen is deleted just a little FYI QuickBooks does not have a feature to delete anything from a list in this case a customer if you wanted to quote delete them you'd have to make them inactive and they're still not really deleted because they show up in the list they're just hidden so you can always activate them again now if you notice over here it says make active that's how I go back and activate my customer again I'm going to click make active and now you'll see over here it does not say deleted but notice the sub customer does you'll want to repeat that process with your sub customer select them from the left and make them active and that's a quick overview of how to make your customers inactive well let's move on we've got one more thing to cover in this particular module I want to talk to you in section 6 about importing an existing list of customers that you might already have into QuickBooks the last thing I want to talk to you about in module 4 is how to import your customers into QuickBooks you might already have a list of your customers in Excel for example or in a CSV file and it'd be really nice to be able just to import them into QuickBooks instead of having to enter them one at a time you will need to set them up a certain way I want to go ahead and pull up the Excel file that I have so you can see how it's set up and even if you don't have the fields the exact same you can map them once you go through this import process but let me go ahead and pull up the Excel file and show it to you and then I'll pull up QuickBooks and we'll go through and import those customers here I have a list of three customers I'd like to pull in from this Excel spreadsheet into QuickBooks and you'll notice that I've got them set up by name company name email phone you can see the list here those are the names of the fields that are in QuickBooks that you want to pull the information into if you can set it up this way that's the best way to do it but you can also map the fields if you want once you get inside QuickBooks but I wanted you to see this so that you would know exactly how to set it up and make sure you save it somewhere that you can pull it in pretty easily when you go to look for it let me go ahead and flip over to QuickBooks and we'll pull in John Ellen and Doris all you need to do is make sure you're on your customer list and go up to the down arrow next to new customer and choose import customers here's where you're going to select your Excel or your CSV file that you
currently have your customers in mine’s called customer list and I’ll just choose that and you can see it brought that file in now all I have to do is Click next at the bottom and here I can map my fields to the fields that are in QuickBooks you’ll notice the First Column are the names of the QuickBooks fields and over here will be the names of the fields you had in your file if the names don’t match exactly like this one says name but maybe in your Excel spreadsheet this one said first name for example then you would choose it from the down arrow if there’s not a match like company for example when I look in my Excel sheet I did not have one called company then I’ll just say no match and it won’t be able to pull anything in the ones with the check mark that you see are the ones that actually have a match to the QuickBooks fields that sees the exact same name that it sees over in your Excel spreadsheet in this case once you’re finished going through that list you want to go ahead and click next at the bottom and you’ll see that there are three customers now that are ready to be imported if I had a lot of these I could go in and search for these by filtering and typing in the name of that customer it’s going to be pulling in the ones that have the check mark next to them I’m going to go ahead and import all three of those I’ll just choose import at the bottom and now you’ll see that it’s brought in those three customers let’s look first of all for Mr Stewart we’re going to go down the list here and you’ll notice that here is Jon Stewart he was the first one on our list it has now imported all the information within that Excel spreadsheet well that’s going to wrap up module 4 where we’ve talked a lot about customers now that we have our customers in we can move over to module 5 and talk about sales transactions using these customers hey there we’re working now in module five and in this module we’re looking at all the different types of sales transactions that can occur when working with customers these are going to be things like are you invoicing customers are you receiving payments from those customers maybe making deposits credit memos things like that before we get started over in section one I want to go through very quickly with you the sales tab that’s in QuickBooks and show you an overview of how it works and what type of information you can get out of it let’s go ahead and flip over to QuickBooks and we’ll look at the sales yeah let’s go over to our navigation bar and point to sales and then I’m going to click on overview this is just a quick overview of your income over time you can see that I’ve got 220 dollars that it looks like I made this month and I made that the week of February 16th through 22nd notice I can actually point right up here as well and see that information if I wanted to change this and see how much I’d made this month this quarter for example you can see last year this year you’ve got different choices here I’m going to go ahead and choose last month and it looks like last month we brought in over seven thousand dollars and you can see the high points of when you brought in the most money in this case January 19-25 down here I can see how many invoices are overdue and also the ones that I’ve already sent out that are not due yet and that’s 3976 dollars I might also have some money I’ve received that’s not deposited yet and you can see that here and also I can see the amount that I actually did deposit over on the right here these are some things that you can 
opt to set up with QuickBooks and some of these are paid subscriptions but if you want to set these up you've got different ways customers can pay you that would be Apple Pay if you want them to be able to pay you direct deposit things like that you can set those up with Intuit you can also set up to get paid anywhere so if you have an app you've downloaded to your phone you can accept payment right there or you can send out an invoice to your customer that they can pay online they can actually click that invoice and then pay you right then and there like I said some of these are paid you will want to look into those before you sign up with one of those subscriptions if you're interested in learning how QuickBooks Payments allows you to get paid online or in person you can watch this video here and then they have some shortcuts to some of the things we're going to be talking about as we go along right down here but this is a quick overview of your accounts receivable now notice the next tab over will show all of your sales here you'll see all the information on any sales transactions you can see all the transactions listed at the bottom so you're going to see invoices payments credit memos if you look down the list here's a time charge there's a sales receipt or refund any transaction that happened with your customers is going to be on this list you're going to see all the information about the transaction the balance the total and all the way over on the right you can take an action related to that particular transaction if you click the down arrow notice that you can either copy this you might want to delete it you might want to send a reminder you can kind of see your choices there the next tab at the top that you're going to see are your invoices these are just invoices that are not yet paid you'll notice it shows you all the information about each of the invoices the balance the total if it's overdue maybe not sent or if it's partially paid you can see some of these and then of course here's your actions again if you want to take one of these actions related to one of these invoices the tab that's next is your customer tab and we've spent a lot of time on that and the last tab says products and services products and services are things that you either buy or sell to your customer they can be a physical item it could even be inventory or just a service you provide and you can see for all of these that you can look at all kinds of options related to whatever is underneath that particular tab that gives you a quick overview of how to use the sales option what I want to do now is take you back and let's go ahead and get started looking at how to actually create sales receipts for those customers that want to go ahead and buy something and pay you at the same time when you make a sale to a customer there are a couple different ways to record that sale one way is to create what's called a sales receipt this is almost like point of sale if a customer comes in makes a purchase and gives you the money right then you can put all of that on one transaction and send them on their way with a receipt the other way that we're going to talk about in section three is actually invoicing customers and that's where you send out an invoice and the customer pays you after the fact but right now let's focus on sales receipts let's flip over to QuickBooks and I'll show you how to enter a sales receipt you want to start by going to your customers list look down the list and find your customer and the sub customer that
you’d like to send a sales receipt to if you’re using sub customers always pick the sub customer if you just pick the main customer what will happen is you’ll look at reports and you’ll see other and you won’t know what that refers to so just make sure you always choose the sub customer notice when I go all the way to the right here and click the down arrow I have an option to create a sales receipt the first thing you’ll notice is that it brought in the customer and the sub customer you chose if you want to change those you can actually pick those from the drop down list the next thing you’ll see is a place to put in the email if you want to email this to more than one email address notice that you can type them both in here but just separate them with a comma and if you need the CC or BCC some additional email addresses you’ll see those here there’s also a check box that says send later that’s because you have the ability to set up the sales receipt and not actually send it right now it could be that you’re not really sure of the quantity but you want to go ahead and get this set up and saved you could do that you’ll see it brought in the billing address and it also has the sales receipt date which would be the current date if you want to change this date you can just click the little calendar like I did and change the date now in this case they’ve customized this sales receipt to have an additional field that says crew number if you wanted to plug something in there you would just type in the number for that crew or not use it at all now we’re going to come back to the payment method in a moment let’s go down to product service if you click your mouse in the first area there you’ll see that there’s a drop down list of all the products and services that you sell your customer if I go down this list you’ll see there are rocks these are garden rocks and if I choose that it will bring in a description and I can edit that description or add to this as much as I like I can go over to the quantity and put in how many of these the customers purchasing and the rate we’ll say we sell these for 25 dollars notice when I tab through it it’ll do the calculation three of these at 25 cost 75 dollars and this is subject to sales tax typically physical items are but Services you provide are not the little trash can that you see at the end would allow you to delete this line now let’s say I’m going to add one more to this I’m going to go down and pick a service let’s say that we have Design Services and I’ll just put this over in the description and then I’ll say the quantity is 1 and we’re going to charge a hundred dollars for this and notice this one is not automatically subject to sales tax I do have a third line available if I want to put something else in here if you don’t see an available line click where it says add lines and that’ll give you a line to type in notice you can also clear out all the lines if you wanted to do that right underneath it you have a message that will be displayed on the sales receipt it currently says thank you for your business and have a great day but you can put anything you like in there and also if you wanted to put a message and have it displayed on statements you could put that in here and just type it in over on the right you’ll see it shows us our subtotal you can see that 75 dollars of it is subject to sales tax and in this case they’re using a sales tax called California and it’s eight percent six dollars in this case if you want to give your customer a discount 
you can get them a percentage discount or a value meaning a dollar amount let’s say I want to give them 10 percent I’ll just type in 10 with a percent sign and notice it deducts 18 cents and if I scroll down it’ll show me the amount received and the balance due now remember because this is a sales receipt we’re going to put on this the payment amount they’re assuming we’ve received all of the payment back up here is where I can choose the payment method if they pay me with Visa or if they pay me with a check I can pick any option I like and there’s a little place for a reference number now if you had a Visa card you wouldn’t have a reference number with a check that would be a check number the next thing you’ll see is deposit two and it says undeposited funds your other choices would be to go ahead and deposit this to maybe a checking account for example you can see the list let me explain undeposited funds for just a moment in your chart of accounts you will have an account called undeposited funds this is a place where money sits that you’ve collected but you have not yet taken to the bank a customer came in paid you with a check you want to show QuickBooks the sales receipt is paid so you’ll choose deposit two undeposited funds but it might be that you’re going to collect all the monies you receive today and make them all one big deposit that’s when you choose undeposited funds if you knew this was the only thing that was going to be in this deposit you could choose checking and skip the next step of making a deposit I’m going to go ahead and leave it in undeposited funds and when you’re finished down the bottom you’ll want to either just save this or you can choose save and send which will email it or from the down arrow you can say saving close we’re going to close this and now that transaction has been completed you’re going to notice when we look at this that was Tom Allen 124 Scottsdale does not owe us any money however if we go over to the all sales and look here we should see Tom Allen’s sales receipt right here for a hundred and eighty dollars and 82 cents and it says it’s paid here’s where we could go and actually print this I want to view or edit this you’ll see here that this is your sales receipt where you can make that change if you need to I’m going to go ahead and get out of this I’m going to hit the X and cancel it and that’s how you would go in and actually create a sales receipt that money is now sitting in undeposited funds I want to take you over to your chart of accounts which happens to be over here under accounting and show you that account so that you can see the money there let’s see our chart of accounts and here’s undeposited funds now you can see it has 2 243 dollars in there but if I view the register over here you’ll see the transaction that we did and also which is right here any others that were already sitting in undeposited funds so hold that and when we talk about making deposits in section five you’ll see how this all comes into play but one way to keep a check on yourself is if you know everything’s been deposited then there shouldn’t be any money in undeposited funds okay let’s go ahead now and go over into section three and talk about invoicing customers we’ve talked a little bit about sales receipts and now let’s talk a little bit about sending out invoices to your customers remember the difference in a sales receipt and an invoice is that on a sales receipt the customer is standing right there you’re going to put in the line item they purchased 
you’re going to put in they made a payment the invoice is where you’re going to send an invoice to your customer and wait for payment at a later date sometimes you email these sometimes you mail them really doesn’t matter you’re going to receive the payment at a later date let me show you how to create invoices for your customers it’s very similar to the sales receipts but I want to show you where to go to get started creating invoices there are a couple different ways to get started creating an invoice you’re going to head over to the navigation Pane and point to sales you can either use the invoice option here or the customers if you choose the invoices option here you’re going to see a list of all of your invoices that you currently have and if you want to create a new one you can choose new invoice from right here if you started with the customers you would just come back to sales and go to customers this way then what would happen is if you had a customer you want to create an invoice for you could check them off head all the way to the right under the action column and then you can create an invoice this way either way would work whatever works for you ‘ll notice this way because I had a customer selected it pulled in all that customer’s information now if I wanted to change that I just click the down arrow and choose the new customer from the list remember I told you that if you’re using sub customers always always always pick the sub customer you want to go with the lowest level so that when you look at reports you don’t see other on your reports I’m going to start with Freeman Sporting Goods I’ll choose 55 twin Lane and now you’ll see it’s changed the customer email and the billing address if you didn’t have an email set up in your customer setup then you could physically type an email here you can also choose to BCC or CC someone just by choosing this option and putting their email addresses in here I’m going to hit cancel you can also send this later if you have this checked that means that you can create this invoice and then save it and then create another one and check the same box if you’ve done that you can email both of them at the same time that’s called sending a batch or emailing a batch if you happen to see that something’s changed with the billing address go ahead and change it here it’ll prompt you when you go to save it and ask if you want to save this permanently in their record because we had terms of net 30 set up in the customer setup you’ll notice the invoice date is 212 but the due date is 313 which is 30 days from the invoice date if I change this to Net10 you’ll see the due date is 10 days just make sure the due date is the date that you want the invoice to actually be due to you the crew number is a field they custom set up for this particular exercise you can go ahead and plug in some number just to keep it consistent there and then you can pick a product or service from the list to invoice your customer I’m going to choose installation and let’s say that I’m going to charge them a quantity of one at two hundred dollars remember a service is non-taxable so you will not see a green check mark there if you happen to see one just uncheck it and if you have a second line you’re just going to type on the second line whatever information you want to invoice the customer for here’s where you have your message that will appear in your invoice automatically you can type in there and change that to anything you like also when you’re sending out a statement to your 
customer then you can have a message on that statement appear as well whatever you typed in here you’ll notice that you can also put in an attachment let me scroll down just a little bit so you can see that if you happen to have some sort of file that was already saved in your computer you could attach it here an example might be if I’m installing landscape design I might have hired a subcontractor to do that and maybe the subcontractor has already sent me a bill and I want to attach that to this invoice over on the right if you’re going to give your customer a discount you can give them a percentage discount or a value discount I’ll choose value and give them 25 dollars and you’ll see it will deduct it from my 200 so now that the balance due is 175. couple things at the bottom you’re going to be able to do you can print or preview this right here you can also make it recurring and what recurring means is if this is something that happens on a regular basis then you can set QuickBooks up to automatically just put this in whenever you’ve told it to let’s say once a month it inserts this invoice automatically and then you can customize this a little bit if you want to do things like they added the crew number you’d be able to add Fields like that at the bottom you have an option that says save you have your save and close and if you wanted to create a new one you could click the arrow and choose save and new I’m going to choose save and close though and you’ll see now that our invoice has been completed if I wanted to go back and look at this I could actually come up here to invoices and then I can look down this list and find the one I’m looking for and that’s how you’re going to create an invoice for a customer let’s go ahead and move on into section four and I’ll show you how to record the payment once a customer actually pays you now that you’ve created your first invoice the customer is going to mail you a payment at some point now it doesn’t matter how the customer paid you you’re going to record their payment the same exact way we’re going to go in and tell QuickBooks how much the customer paid the date they paid all the pertinent information and when we’re done we’ll see that the invoice will show as being paid if they’ve paid the full amount if they haven’t it’ll show the balance or if it’s an overpayment it will show that as well let me show you how to record a customer payment for me before we receive our first payment I’d like to take you over to the reports section QuickBooks just head over to your navigation bar click on reports and the reports from the sub menu and this is all the reports that are in QuickBooks we are going to take some time in a later module and look through the different reports but right now I’d like you to head down to a section that says who owes you these are your accounts receivable reports if you head over to the second column you’ll see the second one is called the open invoices report and you can just click to run that one now these are all of the invoices that you’ve sent out that have not yet been paid even if the customer owes you a penny if you remember in the previous section we actually created an invoice for Freeman sporting goods and we did one for 25 Twin Lake and here it is right here 175 dollars I want you to notice that in any report if you want to go to that particular transaction you can see this is a link and you can just click anywhere and actually open up that invoice I wanted to show you this first because once we receive our 
payment this invoice will actually disappear if it’s paid in full or you will see this invoice and the balance is owed right over here not the 175 dollars that it originally was invoiced for now that you see that let’s head back over to our customers we’ll go to sales and we’ll go down to customers now let’s go down find our customer we’re going to receive the payment for and this is going to be Freeman Sporting Goods 55 Twin Lake you’ll notice over in the action column that you can receive a payment this is the receive payment window and the first thing you’ll notice is it pulled in my customer and my job I don’t need to change that unless I happen to want to pick a different customer and job I can look for an invoice by invoice number if I want to I can click there just type in the invoice number and hit find and it will search for it for me the next thing is the payment date I’m going to say this was received on February the 28th and here’s the payment method now here’s where you can pick the way that the customer actually paid you did they pay you with cash did they pay you with MasterCard Visa PayPal if you happen to take other payment methods you’d like to add like venmo or Square or even Bitcoin just come up here to add new and all you have to do is type in that new payment and then just hit save and from now on that payment method will actually be on the list there now I’m just going to say in this case it was a check though and I’ll put in the reference number that would be the check number and then just notice the money is going to go to an account called undeposited funds now hope that for a moment we’ll come back I want to finish the rest of this and explain this part to you now over here it assumed that my customer paid me the entire balance they owe for all of their invoices and we know that’s not always the case here I’m going to put in the amount that the customer did pay me let’s say the customer paid me 179 dollars now if you notice down at the bottom these are all of the invoices that are still open even if the customer owes me a penny you’ll notice what QuickBooks does is assumes the customer is paying all of the first one the rest of the money goes to the next one and then the balance of the money goes down to the next one all the way down now that’s not always how the customer has asked you to apply their payments let’s say in this case they’re not paying the first one they’re paying 175 dollars on this last one and they’re going to pay the four dollars on this one and that’s why it was 179 dollars always make sure that you have the correct invoices checked off and the correct amount over here that the customer is paying towards each of the invoices a couple of other things to notice when you look down at the bottom there’s going to be 179 dollars worth of money that’s applied and there’s no credit memos right now but if you had one you had issued for this customer then you could apply that credit memo to one of these invoices that would be open here if you want to clear the payment you could that would let you start all over filling this form out and then notice at the bottom there’s a place for a memo over on the left and there’s also a place for any attachments if you wanted to add something here and that’s all you need to do to receive payments it’s pretty easy process but I do want to go back up to this right here where it says deposit two and talk to you about what your options are currently if you receive payments the money’s going into this account called 
undeposited funds I’m going to go ahead and save this real quick and then I want to show you where undeposited funds is and then we’re going to come back in a second so I can show you where the other options are now if I close the receive payment window and head back to the chart of accounts I’m going to go down here to accounting chart of accounts you’ll notice that if I go down the list there’s this account called undeposited funds currently there’s two thousand two hundred and forty one dollars and fifty two cents in that account this is where money sits that you’ve received but have not yet deposited into the bank a good way to keep a check on yourself if you know that everything’s been deposited this should be zero let’s look at this for a second and see if we can figure out what money happens to be sitting in here notice you can click to view the register over on the right and what you’re going to notice is that right now it looks like there are three payments sitting in undeposited funds one of them is being the one that we just received none of these three are in the checkbook yet because they have not yet been deposited okay so let’s head back over to our payment that we just looked at I’m going to go to sales and go back to customers and let’s go down and find our customer Freeman Sporting Goods 55 Twin Lake and I’m just going to click on that for a moment and here you will see the payment that we just received I’m going to go ahead and click on that just to open it back up here were your other choices I could go ahead and put the money right in the checking account and this will skip the next step that we’re going to do but let me tell you why you may or may not want to do this if the 179 dollars is the only thing that’s going to be in that deposit then you can click checking hit save and close at the bottom and you are done with the whole process
but if you think you might receive another payment possibly from another customer and this one and the new payment are going to be together in the same deposit that's when you want to pick undeposited funds and this will make more sense once we go through and make the deposit over in the next section but I'm going to go ahead and click save and close at the bottom here and let's see if that invoice shows that it's been paid when I go down the list and look at the invoice for 175 dollars it does show that it's been paid in full if there was one penny left it would not say paid right here that's how you receive a payment for a customer when you've sent out an invoice the next step in the process would be to actually take that money and make a deposit why don't we head over into section five and I'll show you how to make deposits now that we've made a sale for our business we've actually invoiced a customer in this case we got paid and now we want to take that money and put it in the bank and that's where the make deposits option comes in it's always going to be the last step in this process no matter how you receive the payment whether it was a Visa card cash a check you're going to have to make deposits and you want to make sure that your deposits in QuickBooks match what actually happened at the bank let's go ahead and flip over to QuickBooks and talk about the make deposits option the easiest way to record that deposit is to click the new button right here and over on the right here under other you'll see bank deposit this one down here is going to be your actual deposit slip a couple things you'll want to double check on is make sure you have the correct bank account chosen here it's very easy to have the bank account that you last used show up in that field and then you can't find your deposit notice that the balance in the checking account is one thousand two hundred and one dollars and this is the date of the deposit let's say I'm going to make this deposit on March the second now down at the bottom these are the three sets of monies that we saw that were sitting in undeposited funds what you're going to do is check off all of the ones that are going in this deposit if all three are going to be in this deposit you check them all off if maybe the first two were going to be in that deposit and then maybe this last one was in a separate deposit do them separately because you want these to match what actually happened at the bank let's say in this case though all three are going to be deposited a couple of things when you're looking at this list here you can go and change the payment type if you didn't do it when you were actually receiving the payment you've also got a place for a memo if you'd like to fill that in and then you can see there's the reference number column and then the amount column over on the right my deposit will be $2,241.52 now right down here it says cash back goes to if you happen to have a business bank account you're not going to be able to get cash back but as a sole proprietor you could if you're going to keep some cash then you would say cash back goes to this account and you would pick whichever account this went to you would also be able to have a memo and if you were going to keep 20 bucks you could type that in and it would deduct it from this total right up here there's also a place to add funds to this deposit if I click this little arrow it's going to open up this part here and I can add some additional monies now this could be something like
maybe you got a rebate from Staples you could type that in if that was the situation it would say received from Staples the account would be office expenses or office supplies pick whichever account you actually use when you purchased the items for that rebate and put it back to the same account you've got a place for description the method and the amount of money it could be you're also going to put some personal money into the business if that's the case then the account you want to choose is that owner equity account if you remember us talking about that in an earlier module but remember that everything that goes into a deposit is not always income to the business so make sure that this goes back to the correct account if you're adding additional funds notice if you need more than the two lines you could add additional lines and have as many lines as you'd like here you can also add a memo to this deposit if you'd like or add an attachment down at the bottom and that's all you need to do as far as making a deposit now just a couple other options you could print this deposit out down at the bottom or make it recurring it could be that you have a customer set up on automatic draft where they actually pay you a thousand dollars a month let's say and this would be that deposit lots of different scenarios there I'm going to go ahead and hit save and close now at the bottom and at this point the money is actually going to be in my checking account remember it's $2,241.52. now that whole process of invoicing a customer receiving a payment and making that deposit has been taken care of now let's go look in the checking account and see if we can find it it just so happens that on the overview right over here where we are the checking account is here that is one way to link to it to look at the balance another way would be going down to accounting to our chart of accounts and just opening it that way any of this would work I'm going to go ahead and view the register when I look in the register you'll notice that there's my deposit right there notice it says split because it's split amongst multiple line items in this case we had three different transactions that went on the deposit itself I'm going to go ahead and cancel that and that's the process of actually making a deposit the next thing I want to do is take you over into section 6 and show you how to set up credit memos for customers there are times when you will want to issue a credit memo to an invoice let's say you have a customer that just isn't happy with your services and they refuse to pay one of your invoices you can actually leave it on the books for a while if you'd like but eventually you might want to credit that off let me go ahead and show you how to create a credit memo the first thing that you want to do is look up the original invoice and see what it is that you charged them for to begin with I'm going to look at Red Rock Diner and you'll see there's an invoice for seventy dollars here I'll just go ahead and open it up and you're going to notice that we charged them for Pest Control it looks like probably two hours at 35 dollars an hour for a total of seventy dollars now let's say the customer just wasn't happy with us and we're going to just credit that invoice when we create the credit memo we need to use the exact same product or service that we charged for to begin with the way you're going to create the credit memo is come up here and click on new underneath customers you're going to see credit memo plug in the customer's name in
this case it is Red Rock Diner you'll see it populates their email their billing address and it's going to have the current date for the credit memo date just make sure you put the date that you would like we're going to be looking at tags over in section 9. let's just hold that for a moment and let's go down to product or service here's where we're going to put in Pest Control remember you want to credit the same product or service that you invoiced for to begin with we're going to choose the quantity of 2 at 35 and that's going to give us a total of 70. all we have to do now is go ahead and go down to save and close now that the credit memo has been created I want to show you two things that happen on your customer's account notice here's the credit memo and it says it's closed and then there's a payment that wasn't there before now this payment is where you're going to go to actually apply the 70 dollars to the correct invoice because see how the invoice is still open over here I'm going to click on the payment now if it sees an exact match it will go ahead and check it off but if it doesn't come down and choose the correct invoice and then notice the credit memos at the bottom so you want to make sure those two are checked so they apply to each other so you'll notice down here the amount to apply is seventy dollars and that's all we have to do we're going to save and close at the bottom and because this transaction is linked to others it will ask us if we're sure we want to modify this and the answer is yes and now when we go back and look we'll see that the credit memo is closed the payment is closed and the invoice is actually paid so remember this is a two-step process you have to create the credit memo then go back to the payment and actually apply that credit memo to the correct invoice now that you know how to create credit memos I want to show you over in section 7 how to actually give your customers a refund there are times when you want to issue a refund to a customer and that would be if a customer has purchased something and paid in full and you want to actually give them their money back that's the difference in a refund and a credit a credit usually sits in their account until you just credit the money off of your books whereas refunds you actually give customers the money back let me go ahead and show you how to create a refund we're going to create a refund receipt but before we can do that we need to go ahead and look up our customer and see what it is that we're actually going to refund them for this is Duke's Basketball Camp if you notice they have an invoice here for 460.40 that is paid in full I'm going to click on invoice to actually open this up and you'll see that the second line is some garden rocks that they purchased from us they purchased six of them at twelve dollars and let's say that they're going to return three of them at 12 because they didn't need all six of these we're going to go ahead and close this now and now we're going to create our refund receipt we're going to go over to the new option on the navigation pane we're going to come down to refund receipt here we're going to pick our customer's name in this case it's going to be Duke's Basketball Camp you can see it pulls in their email and their billing address and we just need to make sure we have the correct refund receipt date down here where it says payment method this is how the customer actually paid you they wrote us a check let's say and we're going to refund them from our checking account
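The same refund can be recorded from a script with the API's RefundReceipt entity. Here is a minimal sketch for the three garden rocks at 12 dollars each refunded out of checking; the customer, item, and account Ids are placeholders, and sales tax is left to QuickBooks' automated sales tax, so the computed total may differ slightly from what you see on the screen.

```python
# Minimal sketch: record the refund as a RefundReceipt -- 3 garden rocks at
# $12 each, paid back out of the checking account. All Ids are placeholders.
import requests

REALM_ID = "1234567890"
ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"
BASE = f"https://sandbox-quickbooks.api.intuit.com/v3/company/{REALM_ID}"
HEADERS = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

refund = {
    "CustomerRef": {"value": "12"},           # Duke's Basketball Camp (placeholder Id)
    "PaymentMethodRef": {"value": "2"},       # e.g. Check (placeholder Id)
    "DepositToAccountRef": {"value": "35"},   # the checking account the money comes out of
    "Line": [{
        "Amount": 36.00,
        "DetailType": "SalesItemLineDetail",
        "SalesItemLineDetail": {
            "ItemRef": {"value": "38"},       # the garden rocks product (placeholder Id)
            "Qty": 3,
            "UnitPrice": 12.00,
        },
    }],
}
resp = requests.post(f"{BASE}/refundreceipt", json=refund, headers=HEADERS)
resp.raise_for_status()
```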
if you’re going to actually print them a check then it’s going to let you print this later or if you want to go ahead and print it from here you could leave that check number now let’s go ahead and put in the correct product or service and we decided that this was Garden rocks and so we’re going to go ahead and choose that and remember they’re returning 3 at 12 and that makes 36 dollars if the item was subject to sales tax when the purchase was originally made then QuickBooks will automatically choose the tax option to give them their tax back as well you’ll notice this equates to 36 dollars the total amount refunding will be 38.88 I’m going to go ahead and hit save and new at the bottom and now that refund receipt is done I’m going to click OK here when it says it was successful and now let’s go back and look at their account if I’m looking at Duke’s basketball camp you’ll see here’s the refund right here and you’ll notice that it says it is paid now if you want to actually go ahead and print a check then all you have to do is over here it says print check and you’ll notice that it automatically has Duke’s basketball camp and a check waiting to be printed for 38.88 just double check that you have the correct checking account at the top and make sure you have the correct check number that your check is going to be now if you were giving them cash you would have chosen cash from the option back over in the refund receipt and then that would have just been done that’s really all you have to do at this point you can actually go down and you can preview and print this and then that will be the end of it there’s the preview you would hit print and it would print out and that’s how you create a refund for a customer it’s called a refund receipt let’s go ahead and take a peek now at creating statements for your customers one of the things you have the ability to do in QuickBooks is send statements to your customers a statement is basically a gentle reminder to your customers that they owe you some money typically statements are sent at the end of each month and they show the activity that happened during that month you don’t have to send out statements but it’s a nice little feature to keep your customers abreast of what’s going on with their account let’s go ahead and flip over to QuickBooks and we’ll see how statements are created when you’re ready to create statements for your customers just go to your navigation Pane and choose the new option over here where you see other if you look down you’ll see statement the first thing you have to do is pick a statement type you can choose a balance forward any open items for the last year basically and then you can also choose a transaction statement I’ll choose balance forward for the statement date you generally want to pick the end of the month that means that in this case the start date would be January 1 and the end date would be January 31. 
for the customer balance status you can choose all open or overdue I'm going to choose all and then apply and these are all of my customers who met the criteria they call this the recipients list if you're looking down this list and you see one that you don't want to send a statement to just come over here and uncheck the box now when you're ready you can print or preview these down here in the middle I'm going to click on print or preview so that we can see what a statement looks like now I'm going to have one here for each customer you'll see as I go down the list there you go and for each one you're going to be able to see all of the activity for January 1 through January 31. the first thing you'll always see is the balance forward from the previous month notice up here it tells you the total due and there's a place where the customer can actually send you a check and if they want to type in the amount enclosed or write it in right there they can do that and then down at the bottom of the statement it shows you how much of this is in the currently due category how much is in each of these categories and then again the total over on the right and that's what a statement looks like you can pick any date range you like when you're creating statements if you just want to send one to one customer you can do that but that's an overview of what statements look like and how they work I'm going to go ahead and close this and you really don't need to save or save and send this once you've printed these out you can go ahead and just X out at the top and then it's ready and you can print them the next time well let's go ahead now and look over in section 9 and talk a little bit about how a new feature of QuickBooks called Tags is going to work there is a brand new feature in QuickBooks online that they're rolling out right now called Tags and what tags will allow you to do is create certain words that will appear on a drop down when you're in different transactions in QuickBooks and you can choose those and later you can use those to search for things or to run reports based on those tags if you're familiar with Gmail we've got something similar in there where we can create a list and tag different emails and then search for anything that would have that particular tag we're looking for this is still in beta right now if you happen to not see it when you open up your QuickBooks Online just know it's coming they're just rolling out in stages right now and you may not have all the options related to tags yet but just know it will be coming somewhere down the line let me give you an example of how you might use this feature currently we have a feature in QuickBooks called classes and we've kind of been using that in the way that tags will work but let's say that you have an attorney's office and you have four different attorneys in that office you might want to run reports on the company as a whole but you might also want to run reports on each attorney if you had this list of classes set up you could just pick from the drop down list in each transaction you're in which attorney this should be tagged to and we're going to use tags in the exact same way I don't know if they're going to get rid of classes down the road somewhere but tags are going to be a really nice feature so let me flip over and show you what's in there now and like I said if you don't see these options or you see something new down the road it's because they're rolling it out it's still in beta the way you're going to access your
tags option is through the gear icon on the top right hand side of your screen and underneath the list right here you’ll see tags now currently we don’t have any tags set up but once we do set up our first tag you’re going to see the top of the screen change and you’ll see a section that says money in and money out let me give you an example of what we’re going to do with our tags this is a company called Craig’s design and Landscape Services they actually do two different things they do design work and they do Pest Control and as part of their design work they have three different areas they focus on they focus on fountains landscaping and sprinklers if I wanted to put my tags in a group then I could do that as well and you will see the groups listed right here along with the tags but let’s start with just a tag I’m going to come over here and under new I’m going to choose tag and let’s say that I use fountains as my first tag if I wanted to put it in a group I could but I don’t have any groups set up yet so right now I’m just going to hit save at the bottom and now you’ll see my first tag called fountains and notice it’s ungrouped is not part of a group now here’s what I would say in a second ago about the money in and money out once I start applying these tags to different transactions I can come here and see the money in or out based on those tags I have now let me go ahead and create a group to show you how this works I’m going to create a tag group and I want to create a group called design and I’m going to create another one in a minute called Pest Control now you’ll notice that from this screen here if I wanted to add a tag I could do it right from here without having to go back out and hit new tag I’m going to add two more because I have fountains and I want to add Landscaping and we’re also going to add sprinklers and I’ll just hit add tag there now let me go ahead and click done at the bottom and now what you’re going to see is design is a group and notice the down arrow in order to see the two tags in there I click the arrow and then I see those two now fountains are still ungrouped if I wanted to add it to this Design Group I can come over here where it says edit tag and then I can select the group I’d like it to belong to in this case design and when I save it now you’ll see design has three tags underneath now let me create one more tag group I’m going to create one called Pest Control and let’s say under pest control that I’m going to create two tags I’m going to create one for residential and one for commercial customers I’ll add that one and come back and ADD commercial and then add that and now I’m going to say done at the bottom and now you can see at the bottom I’ve got two different tags I’ve got one that’s pest control and one that’s design now the other thing is notice the color blue I can leave it like this but if you want each tag group can have a different color scheme going on if I come over here to edit the group you’ll notice that here I can change the color maybe this one can be this yellow color and I can save that and notice it also made all the tags below it that same color and that way they just stand out a little bit when I’m looking at different reports of things now let me show you where you’re going to use these tags or tag groups this will be in any transaction let’s say that I go over and I create a new invoice you’ll notice that in here you have an option that says tags you can choose from the drop down list and you can choose more than one if you 
need to this might have fountains and it might also have some pest control and there's your two tags that we're going to be using now I will be able to look up reports based on the product or service but I can also look at the reports based on fountains or commercial now let me just go ahead and set something up here so you can see how this works I'll pick Tom Allen we're going to go down and pick for a product or service we're going to go ahead and say this is gardening and we're going to say one at fifty dollars and then let's say for the next one here we pick fountains we'll do a rock fountain and that one was 275. now let's say also we're going to do some Pest Control around this rock fountain so that we don't have bugs in there and now we've got a couple of different products and services here now when I go ahead and click save and close here we're going to go back to our tags and see if we have any money that's in or out of these and now you'll see that there's 382 dollars money in because remember QuickBooks considers the invoice to be part of your income at the time you create it when you actually write a check or use your credit card that sort of thing that will be on the money out side so if you had to actually buy some materials related to one of these and you tagged those transactions you would see the money out show up over here and that's basically how tags are going to work it's going to be a great new feature that's really going to help you drill down a little bit deeper than you can now to see where your money is coming in or money is going out
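To make that concrete, here is a small plain-Python sketch of how those tag totals roll up. It is illustrative only and not how QuickBooks actually computes anything: the 382 dollar invoice and its fountains tag come from the example above, but the video doesn't say which pest control tag was picked (so "residential" here is a stand-in), and the tagged materials check on the money out side is entirely hypothetical.

```python
# Sketch of the money in / money out idea on the Tags screen: tags are applied at the
# transaction level, and each tag accumulates the amounts of the transactions it is on.

from collections import defaultdict

transactions = [
    # The Tom Allen invoice from the example above, tagged with two tags.
    {"type": "invoice", "amount": 382.00, "direction": "in",
     "tags": ["fountains", "residential"]},
    # Hypothetical: a check for materials bought for the fountain job, so it shows
    # up on the money out side for the same tag.
    {"type": "check", "amount": 120.00, "direction": "out", "tags": ["fountains"]},
]

totals = defaultdict(lambda: {"in": 0.0, "out": 0.0})
for txn in transactions:
    for tag in txn["tags"]:
        totals[tag][txn["direction"]] += txn["amount"]

print(dict(totals))
# {'fountains': {'in': 382.0, 'out': 120.0}, 'residential': {'in': 382.0, 'out': 0.0}}
```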
let's go ahead and do one more thing in this module we're going to look at some of the reports that have to do with your customers and sales QuickBooks has a ton of different reports related to your customers and your sales and I want to take you through and show you some of these different ones that you will want to run on a regular basis to see how your company is doing in these different areas let me go ahead and flip over to QuickBooks and I will show you a couple of these different reports I'm going to head over to the navigation pane and I'm going to click on reports there are several different categories of reports here but I want to focus right now on the one that says who owes you and then below it are your sales and customers underneath who owes you probably one of the most common reports is the open invoices right here which you've seen before and this is just a list of all of your customers who still owe you money even if they owe you a penny you'll notice that it shows you all the information you need about the transaction like the dates you can see the amount and if you're on that line and you just point to any of these pieces of information you'll see that you can click and actually go to that transaction if you actually change that transaction and then get out of it when you come back here as long as you save the transaction the report will be updated another one you'll want to look at I'm going to go back to reports is going to be under who owes you customer balance detail this report will show you all the transactions that occurred for each customer and each job or sub customer when you first come in it might look like you're just looking at the invoices anything open what you need to do is go up to the top right and choose customize come down to where you see filter and you'll notice that where it says A/R paid it says unpaid right now go ahead and choose all from the list and then run the report at the bottom and now you'll see each transaction that happened for each customer sub customer and or job here I'm going to go ahead and go back to reports we'll look at a few more here back down to who owes you there are some other ones that you might want to look at for example you might want to run a collections report this one is going to show you all of the information about the customer this time you'll notice there's a phone number here as well so if you needed to make some phone calls and call some of these people you've got that information right here on this report I'm going back to reports some other ones just to notice you do have the ability to run an accounts receivable aging detail and an aging summary anytime you see a summary and a detail the summary will show you the line item and one total whereas the aging detail will show every single option that made up that category or that line item that you would normally see you can see an invoice list here if you like you can see a terms list there's a statement list just all kinds of things you can look at underneath who owes you now down under sales and customers here's where you can run things like a customer contact list this is just going to give you each customer their phone number their email that sort of thing you can also go in and look at estimates by customer you might want to see if you have any income by customer you might want to see all your payment methods your products and services lists your sales you can look at those by customer you can look at them by product and you can also look at them by time or activities so just know that there's a lot of different reports under those two options right there now most of your reports can be customized if you happen to run for example a sales by customer detail you'll notice that you can come up here and change the report period I can look at all dates and then I can hit run the report and I'll see all of the information for each customer I can group these by customer I can group them by product I've got all different kind of ways I can group this report and also something else to notice is that all of your reports are automatically run on an accrual basis not a cash basis and you can change it per report if you need to accrual basically means that as soon as you invoice a customer it's going to show as income to your business in QuickBooks whether they've paid it or not if you had any expenses let's say you entered a bill it would show it as an expense whether you paid that bill or not we're going to see how this really works when we look at a profit and loss a little bit later but let's go ahead and just kind of wrap this up I just want you to see the reports that were available for customers and for your sales and most of them will be under these two headings let's go ahead now and move over to module 6 and talk a little bit about products and services okay we're just starting module 6 now and in this module I want to talk to you about how products and services work in QuickBooks a product or service is something that you either sell your customer or sometimes you purchase those products and services as well and you want to set those up and make sure you set them up correctly so that you have accurate reports as far as inventory or as far as some of your profit and loss those types of things let's go ahead and flip over to QuickBooks and start talking a little bit about how products and services work I'm going to give you a quick overview and then I'll take you into section two and show you how
to add some of those products and services to get to a list of your products and services go to the gear icon on the top right of your screen then underneath the list you’ll see products and services this is a list of all the products and services that you have set up remember that sometimes you buy these and sometimes you sell these and there are different types of products and services you can create you’ll notice currently mine are sorted by type if I wanted to sort by any of these other columns I would click on the name and just sort by that column you’ll see there are Services you provide like Landscaping trimming you’ll see that there are different Pest Control those are all services you provide when you get past the service items you’re going to see that there are actual inventory items as well inventory means that you actually sell physical products and you count how many you have for example when I look at Rock Fountain you’ll notice that it looks like I have two on hand and I can buy more and add to my inventory or I can sell these and that will take it out of my inventory there are other types of items that you can have as well you can have non-inventory items those are actually items that you don’t want to check how many you have in the back but they’re physical items that you either buy or sell you’ll notice when you’re looking at this list that you can see the name of the item if you’re using SKU numbers and you’ve got those set up you’d be able to see that here the item type a description and this is the description that automatically appears when you pull that item onto an invoice or some sort of form in QuickBooks you’ve got the sales price if there’s one set up sometimes it’s different for every customer so they just don’t set up a price you just do it when you’re actually invoicing the customer or if you happen to buy this whenever you’re purchasing it if this item is subject to sales tax you would see that here if it’s inventory we saw that you can see the quantity on hand and the reorder point that basically means that when you get down to a certain number that QuickBooks will pop up and tell you you need to order some more and you can see that they did not set that up in this exercise the last column that you see is the action column here’s where you can edit to one of these items if you need to make it inactive adjust the quantity that you might have on hand those types of things but this is where your list of products and services is going to live what I want to do now is show you how to set up some of these products and services so let’s head on over to section two so you can see how to do that now that you’ve had a quick overview of what the products and services screen looks like I want to take you in and show you how to create your own products and services some of you may only have six or seven in your business other businesses might have thousands it just really depends on what your business does let’s go ahead and flip over and talk about how to add those new products and services the way you’re going to add a new product or service is head up to the new option in the products and services window the first thing you need to tell QuickBooks is what type of product or service is this that you’re adding is it going to be inventory inventory means that you want to keep some on hand and then have QuickBooks remind you when you get down to a low number so you can order some more that’s called true inventory you can check how many you sold and how many you purchased 
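Here is a tiny sketch, in plain Python rather than anything QuickBooks runs, of what that kind of true inventory tracking amounts to: invoicing takes items out, purchasing puts items back in, and you get a nudge once you hit the reorder point. The two Rock Fountains on hand come from the example above, but the reorder point of 1 is made up for illustration.

```python
# Illustrative only: quantity on hand goes down when you sell, up when you purchase,
# and a reorder warning fires once you reach the reorder point.

class InventoryItem:
    def __init__(self, name, qty_on_hand, reorder_point):
        self.name = name
        self.qty_on_hand = qty_on_hand
        self.reorder_point = reorder_point

    def sell(self, qty):
        """Invoicing a customer takes items out of inventory."""
        self.qty_on_hand -= qty
        if self.qty_on_hand <= self.reorder_point:
            print(f"Reorder {self.name}: only {self.qty_on_hand} left")

    def purchase(self, qty):
        """Receiving a purchase puts items back into inventory."""
        self.qty_on_hand += qty

item = InventoryItem("Rock Fountain", qty_on_hand=2, reorder_point=1)
item.sell(1)      # prints: Reorder Rock Fountain: only 1 left
item.purchase(2)  # back up to 3 on hand
```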
sometimes you don’t want to keep any in the back room and that’s what we call non-inventory it might be a physical product that you buy or sell but you just run reports when you need to to see how many you bought or sold you don’t really need to keep any in the back room a service is a service you provide and then you have the option to take any of those three and put them into what’s called a bundle the example they use here is a gift basket of fruit cheese and wine you might actually add something else to that gift basket like a spoon or a cup or something like that and you can actually set those up as one of these three types and then you can create a bundle which includes those three items let’s go ahead for now and say that we’re going to set up a service the first thing we’re going to do is go ahead and give our service a name and I’m just going to call this maintenance you have the ability if you have SKU numbers for your products and services to put that number in here and also you can add a picture of that particular product or service from over here where this little pencil is this would have to be a picture that’s already in your computer that you can go and grab and pull in you can also put your products and services in different categories there’s a few already created for the exercise but if you wanted to add a new category you could do that by hitting the add new Option and creating a category like you see here they’ve got design landscaping and pest control as categories the next thing is the description when you actually put this on an invoice which means you’re selling this to your customer what is the description that appears let’s say that it says quarterly maintenance and then the next thing is going to be the sales price if you have a flat rate you charge for this then you can type that in and it will pre-populate for you but if it’s different every single time then you’ll want to just leave this blank nothing happens here it’s only when you buy or sell the product or service that numbers play into your reports but let’s just say that we have a flat rate of 250 a quarter we charge for this and then the next thing which is the most important thing on this screen is the income account that you want this money to go back to when you put this on an invoice in this case they’ve got it automatically going back to Services income which is where I would probably leave it but if you wanted to put this in any other account feel free it’s just it needs to be an income account now if you don’t pick an income account QuickBooks will not say anything to you about it but you’ll look at reports and they’ll be really wrong and you can’t figure out why look at the word the word is income meaning it needs to go to an income account if this was a particular product or service that you were going to charge sales tax on then here’s where you tell QuickBooks this is a taxable product or service or non-taxable product or service typically Services you provide are non-taxable and physical items you sell are taxable now if this happens to be a particular product or service that you buy from a vendor that you like you can check the box and put in the vendor information but I’m going to go ahead and uncheck that and click save and close at the bottom and now you’ll notice that I have a new service in my list here it’s called maintenance and you can see all of the information all the way across now if I needed to edit that information I can click on edit here that will take me back to this screen 
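To sum up the fields that matter most on that screen, here is the maintenance service written out as a plain record in Python. This is only a sketch of the idea, not how QuickBooks stores items, and the helper name is made up; the one behavior worth noting is that QuickBooks itself will not warn you about a missing income account (as mentioned above), so this sketch adds its own check to drive that point home.

```python
# Sketch of a service item record. The income account is the field that quietly
# breaks your reports if you skip it, so this helper refuses to build the record
# without one -- QuickBooks won't stop you, your reports will just be wrong.

def new_service_item(name, description, sales_price, income_account, taxable=False):
    if not income_account:
        raise ValueError(f"'{name}' has no income account -- reports will be wrong")
    return {
        "type": "service",
        "name": name,
        "description": description,    # appears automatically on invoices
        "sales_price": sales_price,    # leave as None if it varies per customer
        "income_account": income_account,
        "taxable": taxable,            # services are typically non-taxable
    }

maintenance = new_service_item(
    "Maintenance", "Quarterly maintenance", 250.00, "Services income"
)
```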
From there I can change whatever I need to and save and close again and then it will be updated you do have a couple of other actions you can take under this drop down arrow here you can make this service inactive if you need to you can also run a report on this service or you can duplicate it and that's a quick way to go ahead and set up your new products and services now that you know how to set up a service let's go ahead and look at setting up an inventory product that way you can see how to tell QuickBooks how many you currently have on hand and then you can see how inventory is added to or deducted from that number now that you know how to add new products and services in QuickBooks let's talk specifically about adding inventory products true inventory means that you want to keep a count on how many of these products you have in your office you want QuickBooks to let you know when you get low so you can order some more you'll want to actually know how many you have on hand when you first set up your new inventory product and once you've done that then as you invoice customers that is how your products will get out of inventory and as you purchase them that's how your products will get back into inventory let's go ahead and flip over to QuickBooks and I will show you how to add an inventory product you're going to add an inventory product the same way we added the new products and services over in section two go to the top of your products and services list and choose the new option this time we're going to choose inventory the first thing you're going to do is give your inventory a name I'm going to call this one sprinkler clamps and then we're going to give it a SKU I'll call this one 55 and then we'll choose a category and let's say in this case that we're going to put it under Landscaping now here's where you put in the initial quantity on hand this means you're going to do a count before you set up your product and if you have 10 in the back room you're going to put that in here and that gives it a starting number you also want to have a date to start this with and let's just say in this case that I want to go back to the beginning of February reorder point what this means is what number do you want to get down to before QuickBooks pops up and tells you that you need to order some more let's just say in this case when we get down to three the next thing you're going to see is inventory asset account now do not change this this is the account that the value of the inventory will actually go into in your chart of accounts remember that inventory is an asset to your business you are worth more because you have it right now but your goal is to sell it and get it out the door this is the asset account the inventory will sit in and then we're going to put in a description now I would put in the same thing we'll put in new sprinkler clamps and then if you have a set rate that you charge for this you'll want to type this in if you don't have one because it's different every single time then you can just leave it blank let's say that we sell it for 2.75 you don't want to change this either because this is the income account that this will go into when you make a sale when I put sprinkler clamps on an invoice and I sell this it will go into the sales of product income account if this is a taxable product you want to make sure that this says taxable and now we have the purchasing information this was the selling information up here this is when you purchase it down here the first thing it asks for is a
description when you order this from whatever company you order them from what is their description and sometimes it’ll be the same other times it might have a part number at the end there’s just all kinds of different things that this could say what is the cost and this means on average what do you buy it for it does not mean that the last time you purchased this it was 1.75 let’s just say on average though it is a dollar seventy five and it will go to an expense account called cost of goods sold if you have a preferred vendor then you can pick them from the list here it could be that you like to get these from Hicks Hardware and that’s all you need to tell it I’m going to go ahead and click save and close at the bottom and now I should see my sprinkler clamps right here you can see the SKU number we typed in the sale price the cost there’s 10 of these currently and when we get down to three it’s going to pop up and ask us if we want to order some more don’t forget you have some options over here under your action column if you wanted to go and adjust the quantity maybe you discovered there’s really only nine in the back room you’ll be able to do that you also have the ability to adjust the starting value and that’s really all there is to adding inventory products let’s go ahead now and move over to section four and talk a little bit about purchase orders if your company buys a lot of products you might want to create a purchase order system for your business when you do this it’s a way of actually tracking everything you’ve ordered and that way you can see what’s come in if there’s anything back ordered that sort of thing and this is also going to be a way to start the process of receiving your items into inventory let me go ahead and show you how to create a purchase order before we get started there’s a couple things that you need to know first of all if you’d like to use the purchase order feature in QuickBooks you have to be enrolled in the QuickBooks Online plus Edition that’s the addition that actually handles purchase orders the other thing is you’re going to have to actually turn on the purchase order feature in the account settings let me show you where to go for that you go up to the gear icon you’re going over to account and settings make sure you’re clicked on expenses and here’s where you see purchase orders if this is not on just come over here to this pencil and then make sure you check the box here to use purchase orders as long as that’s good then you should be fine I’m going to go ahead and close with the X and let’s go and look real quick at our products and services because I want to show you how we’re going to order some more and put it into our inventory I’m going to click on the gear icon under the list the second column I’m going to click products and services if you remember we talked about some of these being inventory and one of these that I want to talk about right here is going to be this rock fountain now let’s say we have two of these but we’re getting ready to do a new job and we need to order two more to have a total of four this is what we’re going to order create a new purchase order I’m going to go to the navigation bar and click on new I’m going to go down in the second column and click on purchase order and the first thing I need to do when creating a purchase order is to pick my vendor I’m going to go down the list here and we’re going to pick Hicks Hardware if you had Hicks Hardware’s email it would be pulled in right here you can see there’s the 
mailing address and then let’s talk about the ship too for a second if you want to have Hicks Hardware ship these directly to your customer you can choose your customer from here if not then it’s just going to come to your office you don’t need to choose anything there here’s the date of your purchase order and in this case they’re using the crew number field so we’re going to plug something in there you can also set a ship via which would say USPS FedEx you can also set a sales rep if you had those as well now looking down the list you want to use the item details not the category details here and remember we’re getting ready to order some more rock fountains now let’s take the other ones out of the list we’re just going to go ahead and click the little trash can over on the right and I want to get two of these so that I have a total of four when I’m done again if it’s related to a particular customer you’ll want to plug that information in right here if you were ordering other things as well you can go ahead and put all of these in here you have a place to put a message to your vendor a memo and at the bottom some attachments once this is done you’re going to go ahead and send this over to your vendor I’m going to go ahead and say save and close and that’s how that works if you had the vendor’s email address you could have emailed this directly to them other than that maybe you call them on the phone and ordered it but you do have your PO in here now so that whenever you go to receive these items you have something to receive it against that’s how you actually create a purchase order the next step in the process is that your products actually come in and you’re going to go in and receive those products into your inventory let’s go ahead and head over to section five so I can show you how that works now that you’ve created a purchase order you can actually receive the items into your inventory The Logical process is that once you order the items from your vendor they’re going to come in the next week 10 days probably and you’re going to want to receive them into your inventory let’s go ahead and flip over to QuickBooks so I can show you how that process works let’s head back to our products and services for a moment I want to show you that if we go down and look at Rock Fountain we still just have two and that’s because all we’ve done at this point is order two more once we get through going through this receiving products into inventory feature then you’re going to notice that this number will go up to four all you have to do is head over to the navigation bar and click on new and we’re going to create a new bill the first thing I’d ask you is who is your vendor and this is where I’m going to pick Hicks Hardware and you’ll notice as soon as I do that that this little window pops up over on the right and this is letting me know that I have an open purchase order if you want to actually add the items on this purchase order to this bill just click add and if you look down here it’s added the rock fountains it’s added two of them and it’s got a rate amount and everything we talked about here because this is an actual bill from the vendor we want to make sure that the rate and the amount and all of that is correct if there happen to be a sale and that’s why we had ordered two of these we would change the rate and of course the amount would change in that case going back up to the top you’ll see that it pulled in our mailing address for Hicks Hardware we’re going to want to choose the terms 
that are on that bill we're going to say net 30. here's the bill date meaning the date that it was actually printed and the due date meaning the date it was actually due we'll also want to plug in our bill number over here and that's really all we need to do there's a place for a memo down on the bottom left we can add attachments if we want and when we're finished we're going to save and close now let's go see how many we have in inventory now and if we're looking at Rock Fountain we have four of those and that's one of the ways that things get into inventory through a purchase order now we'll be looking at some other ways things get into inventory it could be you've written a check it could be that you've actually gone in to use your debit card but this is a way of creating an order and then receiving that order all right hey there welcome to QuickBooks desktop 2022 my name is Cindy McEuken and I'm going to be your instructor and help walk you through this course I wanted to take a few moments and just give you a quick introduction and also tell you a little bit about what to expect as you go through this series of videos I've actually been a QuickBooks instructor for over 20 years I've worked with all the different versions of QuickBooks I've actually taught large classes I work with individuals and anything in between what we're going to do in this course is we're going to start at the very beginning I'm not going to assume you know anything I want to make sure that you have a really good foundation and you set up your QuickBooks file the correct way we're going to start by going through the screen itself getting you familiar with what you're looking at and then we'll jump right in and start creating what we call the chart of accounts we'll spend some time in there because that's going to be the most important part of QuickBooks because all of your money will flow through one of these accounts as we're going to see we'll look at the accounts payable section of QuickBooks we're going to look at the accounts receivable section we'll be looking at sales tax payroll and pretty much anything you would need to get your business set up the correct way in QuickBooks you might want to get out a pen and paper so that you can take notes feel free to go through these videos as many times as you need to and if you have any questions feel free to just shoot us an email and we'll be more than glad to get back with you we want to make sure you're well set up and on your way to getting your QuickBooks desktop file set up the correct way so you can have success in the future well let's go ahead now and get started I've got a few things in this first module that I just want to talk to you about so we'll talk next about the difference between the desktop and the online version and then really in module two we'll start jumping into how to get this company file set up and going the correct way well let's go ahead now and flip over to the next video and we will just talk about the desktop versus the online version of QuickBooks Intuit makes both a desktop version of QuickBooks and an online version of QuickBooks and I wanted to take a moment and just talk to you about the differences between the two hey this is Cindy again and we're actually in module one working in the second video the reason I wanted to actually show you the online version is because at some point you may want to take your data from your desktop version and upload it to the online version or vice versa maybe you're currently using the online
version and you prefer to go to the desktop version if that’s the case you’ll need to ask Intuit to actually grab your data file for you and then they’ll send it to you and you can upload it into your desktop version either way I just want you to be familiar with both so that you make an informed decision when you’re working with QuickBooks now a couple things to know as of the 2022 version of QuickBooks desktop it is now subscription based just like the online version one of the things that desktop users really liked was the fact that they didn’t have to go out and purchase the desktop version every single year now you have to if you do not update your subscription every year it will actually stop working now the online version has always been subscription based most people will actually pay for it by the month some people will pay for it by the year the advantage of working with the online version is that you have the ability as long as you have internet access to log in anywhere in the world that you happen to be and access your data now you can’t really do that with the desktop version unless you have some sort of software that will allow you into your actual computer let’s go ahead and take a peek at what each one of these looks like briefly so that you can see some of the differences visually here I went ahead and put these side by side so that we could get a good visual this side is the desktop version and this is the online version and the first thing you’ll notice is that they don’t really look alike it’s a little bit harder to work with the online version because it doesn’t really have the home screen set up the way the desktop version does on this home screen you’ve got different sections and you have a flow chart that really tells you what to do next where you don’t really have that over on the online version one thing that a lot of people just assume is that the online version allows you to download all your transactions from the bank and that is true but you can also do that in the desktop version as well a lot of times people will choose the online because they assume that’s the only one that will download their transactions and that’s not really the case the other thing is you’ll notice that you have a menu over here on the left just like you have here but again they’re set up differently so you really just have to learn your way around when you’re working with the online version of QuickBooks if you’re currently using the desktop and you’d like to try out the online they do have a free 30-day trial that you can go and try it out if you’d like if you go to the website go to intuit.com and you can look for the online version and they’ll have all the different subscriptions that they sell and you can look for the one that would fit your budget and that would have all the options that you want in that particular version if you don’t like it you can always come back to your desktop version so I just wanted to give you a quick feel for that and let you know that you have that option available if you want let’s go ahead now and flip over to the third video in this module and I’ll talk to you a little bit about some of the differences in the different versions of the desktop that into it has available for you hey there welcome to module two this is the module where we’re going to talk about getting started with QuickBooks hi this is Cindy I’m your instructor and we’re working in this first video now in module two where we’re going to talk about how to set up your company file each 
file you create in QuickBooks is called a company you can have as many company files as you like and neither one talks to the other so if you have a need to keep them separate this is the perfect way to do it sometimes you might want to keep your business separate from your personal it could be if you're a bookkeeper you have multiple customers you can have each one of those set up with their own company file let me go ahead and take you over to QuickBooks right now and I'll show you how to get started setting up that company file when you first load QuickBooks you're going to see a screen that looks like this in this area here if you've previously opened any company files you would see them listed so that you could double click and go to that particular file here's where you would create a new company file you can also open an existing one that may not be on the list or if you've actually created a backup and you want to restore that backup you could do that here and there's also some sample files there's always a product-based and a service-based one here that you can work with what we're going to do is we're going to hop on over to the next video and talk a little bit about using what's called the easy step interview we'll create a new company and that will launch us right into that easy step interview before we can start using QuickBooks we have to go through and actually set up our company file and we're going to do that by using something called the easy step interview hi this is Cindy I'm your instructor and we're actually working in module two now the getting started module we're on the second video using the easy step interview part one the easy step interview is going to ask you different questions and based on how you answer the questions it's going to set up your company file for you let's go ahead and flip over to QuickBooks and we'll get started using the easy step interview to get started creating your new company file you're just going to click right here and this will launch you into the easy step interview you're going to get a screen like this that asks you who you're creating your company file for for yourself or for someone else you'll want to go ahead and choose myself let me just mention down here where the other options are you do have the ability if you happen to be using Quicken now you can convert that data into QuickBooks or if you've got another accounting software you'd like to use you can convert it from here notice there's also an advanced setup and that's typically where I suggest you start because even though it's going to take you a little bit longer to get through it this is the easy step interview and it will ask you most of the questions you will need to get the company file set up correctly so take a little bit of extra time and fill this out this first screen asks for a little bit of information about the company itself I'm going to go ahead and say this is ABC Plumbing this is just a fictitious company and notice as you're going through here the only thing that you have to put in is the company name now I do suggest that if you're going to send out correspondence to customers or vendors then you go ahead and fill out the rest of this information that way you don't have to come back and do it later if you don't have a different legal name you don't have to put that in there there's also a field for the tax ID now feel free to type this in if you like but just so you'll know you don't really need this unless you're going to be printing some 1099s or
doing payroll or something where it needs that tax ID number the next thing you'll want to do is put in the street address or if you have a PO box and you'd like to use that for mailing you can put that in as well I'm going to go ahead and make up a city and state here you'll notice the country populates with US so if you happen to be in a different country you can go ahead and choose that from the list the next thing you'll want to put in is the phone number now the reason that you do want to put the phone number here is because remember if you're actually going to send correspondence out to customers or vendors you probably want that phone number on those invoices or anything you're sending out there's a place for a fax number as well and an email address and a website I'm going to click on next now this page asks you to select your industry you can see as you're looking through the list of different industries there's pretty much an industry for everything but if you can't figure out which one works best for you at the bottom of this list is a general product and a general service based business I'll just go ahead for now and pick the general product based business and click next this next screen asks you how is your company organized you can see there are different choices here there is the sole proprietor option you might be an LLC an S corp but honestly if you have an accountant that does your taxes I'd probably pick other or none from the list and here's why what's going to happen is when you start setting up different accounts in the chart of accounts it's going to have an extra field that will ask you which line on your tax forms you'd like to put this to you're not going to know because you're not an accountant the reason that you want to use those is if you were doing your own taxes and you were using a software like TurboTax for example it would have to know where to pull each line onto that tax form but since you're not doing your own taxes pick other or none and it won't even ask you that question I'm going to click next and now it asks me to select the first month of my fiscal year now this will default to January so unless you have a different fiscal year I would go ahead and leave that on January and click next and now it's asking us about setting up our administrator password now let's take a few minutes and talk about this as soon as you open QuickBooks it will launch you directly into the last company file that you had open if you would like QuickBooks to ask for a username and password before it launches you into that company file then you'll want to go ahead and set up that password right here for the administrator it is highly suggested that you do this you do have the ability to also set up different passwords for different people who will be using QuickBooks and that way you can limit their access to certain areas and we'll talk about that a little bit later but right now let's go ahead and set up a password a really good password will have 10 to 12 characters it's going to have capital letters it's going to have numbers it's going to have special characters but make sure it's something that you'll remember because if you lose your password it's very hard to find it I'm going to go ahead and click on next and now it will say creating your company file you'll want to go ahead and save this somewhere so that you know where to find it I'm going to go ahead and just stick this on my desktop real quick you'll see it's now creating my new company file for me
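As an aside, here is a tiny sketch of the password guidance above (roughly 10 to 12 or more characters with capital letters, numbers, and special characters). The exact checks are this sketch's own and purely illustrative; QuickBooks does not run anything like this for you.

```python
# Quick illustration of the "good password" advice: length, capitals, numbers,
# and special characters. Not a QuickBooks feature -- just the stated criteria.

import string

def looks_strong(password: str) -> bool:
    return (
        len(password) >= 10
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(looks_strong("plumbing2022"))    # False -- no capitals or special characters
print(looks_strong("Plumbing#2022!"))  # True
```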
you can actually see right back here that it has my company name and you can see over here the icon bar now we could actually go ahead and leave at this point but let’s go ahead and finish the easy step interview I’m going to click next and this section we’ll talk a little bit about what I have that I sell in my business so you can see it ask me do I sell Services only products only or both there’s no wrong answer here if you currently sell services but you’re thinking that later down the road you might want to sell products go ahead and turn them both on you’re actually just turning icons on or off on the screen when you choose these different options I’ll choose both and click next this question asks if I’d like to create estimates in QuickBooks construction is a prime example of an estimate if I want to have my kitchen remodeled I’m probably going to ask for a quote or an estimate for the job before I actually hire someone to do the work I’ll just go ahead and say Yes here this screen asks me about using statements in QuickBooks if you have customers that you invoice you may want to send statements at the end of the month those statements will actually be a summary of everything that happened that month with that customer it will show each invoice any payments they’ve made any credits and it’s a gentle reminder to your customer that they owe you money I’m going to go ahead and turn those on and now it asks me about progress invoicing now this question really goes with that estimate question because if you estimate jobs then you have the ability to turn those estimates into an invoice so that you can receive money from your customer you may not always want to invoice everything that’s on that estimate maybe you want to pull 50 percent maybe you want to pull certain items you have the ability to do that if you use progress invoicing I usually suggest if you are estimating jobs that you do want progress invoicing I’m going to click next and now it asks about managing the bills that I owe you know a lot of people actually just receive a bill and they throw that bill on their desk in a basket and then when it’s time to pay bills they’ll look through the bills and decide which ones to pay that’s certainly okay however if you want to really make good use of QuickBooks go ahead and enter all of the bills whether you receive them in the mail or if they’re just ones that are electronically withdrawn just enter all of those in QuickBooks as bills and that way you can track all of the bills that you owe you’ll be able to run reports to see who you owe if any of it’s over 90 days that sort of thing if not you won’t be able to run any reports on accounts payable I’m going to go ahead and say Yes here and click next and now it asks me if I’d like to track inventory now before I answer this question this video is getting a little bit long so why don’t we go ahead and look over at part two and we will continue talking about how to use the easy step interview hey there welcome back it’s Cindy again we started talking about how to use the easy step interview over in part one now let’s go ahead and finish that video this is part two of using the easy step interview now we’re going to talk a little bit about tracking inventory this screen asks if we’d like to track inventory in QuickBooks true inventory means that you sell physical
products and you need to know at any time how many you might have in the store room you might want to run reports to see how many of a particular product you've sold so you can order some more if you track inventory you will want to say yes to this question in QuickBooks let me also mention here that it could be that you don't really track inventory you might sell physical products but you don't need to track how many you have in the store room if that's the situation you still want to say yes here because you probably use purchase orders anytime you use the purchase order system in QuickBooks you will want to have this option set to yes so that the purchase order options show up on your home screen I'm going to choose yes there and click next this screen is asking us about tracking time in QuickBooks now there are a couple different ways you can use this feature if you actually work on different projects for customers and you want to track the time that you spent working on that project then you'll want to turn this option on you can also track the amount of time that your employees or your subcontractors spend working on a particular project or job if you're wanting to track time just go ahead and say yes here and then click next this question asks do you have employees now I want you to make sure that you don't confuse employees with 1099 contractors because they're totally different so this question can be very misleading if you tell it yes here then you're telling QuickBooks that you do want to track payroll in QuickBooks we'll talk about payroll in a later module because payroll is a subscription you'll want to sign up for it's not free the other thing we'll talk about a little bit later is your 1099 contractors if you want to send 1099s there's a certain way you have to set them up and they're not set up in the employee section of QuickBooks if you just want to turn on the option for payroll you would say yes or you can leave it on no I'm going to leave it on no and click next now we're getting ready to get into this last little section here this is just saying that we're going to talk a little bit about using accounts in QuickBooks QuickBooks has to have a date to start tracking your finances you can either choose the beginning of your fiscal year or you can choose today's date or a date of your choice I suggest that if you're just purchasing QuickBooks for the first time and let's just say it's towards the end of the year you might want to go ahead and start with maybe the beginning of this month and just put in enough information to finish this year and this could be practice for you and then next year you can have a full year's worth of information now if you really want to go back and put a full year in that's certainly your prerogative to do and even if you put in a date whether you choose it or you choose the beginning of the fiscal year you can still enter something prior to that date it just needs a date to start with I'm just going to say the beginning of this fiscal year and click next now we're almost at the end here it's asking us to review our income and expense accounts now we're going to talk more about this in a later module this is what we call your chart of accounts anything you do in QuickBooks runs through one of these now if you happen to be looking through here and you say you know I don't use a particular one that's here or maybe there's one I do want to use let's say I do want to use business licenses and permits you can go ahead and check those and turn
them on here or you can wait until you get into the actual chart of accounts itself and then you’ll be able to add or delete ones you do or don’t want to use so I’m just going to go ahead and click next at this point and it’s going to say congratulations and you can now go to the button that says go to setup right here at the bottom if you get a screen that looks like this it’s asking you if you want to go ahead now and add the people you do business with those would be your customers or your vendors you can add your products and services you sell or your bank accounts it doesn’t hurt to add them here but we’re going to go through these in a later module so if you want to go ahead and click the X in the top right of that window to close it then that will be fine for now now you’re on what we call the home screen in QuickBooks well that’s going to wrap up part two of using the easy step interview let’s head on over now to video four and talk very quickly about the place where you go if you want to actually change the information about the company like the name the address the phone number that’s in a section called my company and we’re going to look at a quick overview of that welcome back to QuickBooks desktop 2022 my name is Cindy and we are actually walking through module 2 right now this is the fourth video and we’re going to talk a little bit in this video about a little option called my company when you actually set up your company file and you went through the easy step interview the very first screen asked you to go ahead and set up the information about your company you set up the name the legal name you set up the address the website the phone number things like that well where would you go if you needed to edit that information you would actually go into this option called my company and that’s where I want to take you right now let’s head on over to QuickBooks and I’ll show you where this option is and we’ll go ahead and make a few changes to that company information you set up when you went through the easy step interview one of the questions I get asked often is how do you go back to the easy step interview if someone’s asking me that question it means that they want to go back and make a change to something they originally set up most of those changes are going to be made under the preferences in QuickBooks now we’re going to look at those preferences in module 3 it’ll be the first and second video but if you want to make a change to the company address the company phone number any of that information you set up on the very first screen when you went through the easy step interview you have to do it this way if you go to your menu and click on company you’ll see an option that says my company here you’ll see the address information that you set up when you went through the easy step interview you’ll notice both of these will be the same unless you happen to have changed some information on the legal name if you want to edit any of this all you have to do is click on this little pencil where it says edit and now you can make any changes you want let’s say that you’d like to add your phone number right underneath your city state and zip all you have to do is type it in and anytime you actually work in QuickBooks and you want to actually pull in the address block now we’ll pull all of this instead of pulling these as two separate blocks we’ll talk more about that in a later module when we talk about customizing your invoices we’ll work with the layout designer and you’ll see 
what this comes into play I could also go down and add this information if I’d like notice if I want to change the legal information I’ll click over here and edit that information I can change the company identification that’s going to be the federal ID number if you wanted to go ahead and put that in there’s some information on reporting if you want to change your fiscal year and then also some payroll tax information here if you just want to put in a contact for the person who’s signing the payroll tax forms things like that I’m just going to go ahead and click OK here and it will always tell you that you didn’t update your legal address if you didn’t do that and you will have a chance to go back and do that but I’m just going to go ahead at this point and say no I don’t want to go back to the legal address because I’ve already decided that’s correct I’ll click no and now you’ll see that it’s actually added my phone number right here but notice it’s not over here and that’s because I told it I didn’t want to update the legal name and address now I want to point out a couple of other things on this screen that will be very helpful to you look over on the right hand side because sometimes you will have to have this information maybe you’re placing a phone call for some support here is the version of QuickBooks that you have there’s your license number and product number you can also see this has been activated and if you want to view the owner you could do that there’s also some apps that work with QuickBooks some of these are subscription-based services and you can see some of these down here if you wanted to use these you could actually click these options right here and go ahead and sign up with any of these you’d like one I want to mention is this payroll we’re going to talk about payroll in a later module but payroll is not free it is subscription based and these are other things that Intuit can actually sell you if you would like to have a merchant services account with into it you could do that by choosing this option you can order checks from them if you’d like by choosing this option and something to keep in mind is you don’t need to order checks from QuickBooks from Intuit you can order them from wherever you’d like but I just want to give you a quick overview of this my company option here I’m going to go ahead and close that and that will wrap up this video here let’s go ahead now and move over into the next video which is number five and that is where we’re going to talk about how to identify all the components of your home screen here in the QuickBooks environment hey there welcome back this is Cindy again we are working now in module two and we’re talking about getting started with QuickBooks this is video five identifying the components of the QuickBooks environment what I basically want to do is just go over the home screen with you so that you will know what types of options are there and where to go when you get started actually using QuickBooks let’s go ahead and flip over now and we’ll start talking about how to identify those different components of the QuickBooks environment this is what we call your home screen in QuickBooks when you open your company file it will automatically show you the home screen first if you happen to be somewhere else in QuickBooks and you don’t see this screen the easiest way to get back to it is you’ll notice on the left you have a home option right here that you can just click on and it will take you back to this screen now before we 
start working with this home screen I wanted to show you two quick things that you will want to do to make your home screen work a little bit easier for you you might have noticed there is an icon bar here on the left it's certainly okay to keep it there it's a matter of personal preference but I'll just show you another option that will free up some room on your screen for you if you go to your menu and you click on view you'll see there's an option that says top icon bar if you click on that notice now your icon bar is moved to the top up here the next thing you'll want to do is you'll want to go ahead and go back to view and click on the first option that says open window list and that will appear on the left hand side of your screen and just to tell you what this is every time you click one of these icons you're going to open that window so if I clicked here for example I would open this window and this is where I would go to enter any bills I want to track in QuickBooks now you'll notice I have two options over here if I don't want to close this bills window I can click on home and go back to the home screen or back to enter bills and I'll be back on the bill screen now if I close this bill screen just by going to the top right and clicking the X not the one at the very top that'll close QuickBooks but the one right under it will close that window and now you can see it's gone over here because that window is closed let's take a few moments and see how this home screen is actually set up there are actually five sections here this first section at the top where it says vendors this is your accounts payable section anything having to do with the people or businesses that you buy from those are called vendors you're going to be able to enter the bills you receive from your vendors you're going to be able to track inventory things like that in this section this next section here where it says customers this is your accounts receivable anything having to do with your customers these would be people that buy from you and you can see that sometimes you invoice customers you might want to receive payments from them send them a statement so this section here is all about your accounts receivable this third section at the bottom where it says employees this is your payroll at the top right you'll see a section that says company these icons don't really have anything to do directly with the customers or vendors but they have to do with the file itself and I want you to get really familiar with this one right here the chart of accounts this is your most important part of QuickBooks because everything will flow through there and we're going to spend more time talking about that over in module three notice there's also items and services those would be physical products that you might sell or services you provide this is where you could set those up the last section I want to point out here is called banking think about things you would do at the bank you would make a deposit you could actually open your check register from here maybe print checks from here these are considered banking functions and you'll find those in this section when you're looking at your home screen you'll notice there's a flowchart and you want to always follow the flowchart from the beginning to the end for example with customers you may or may not estimate jobs in your business but let's say that you do the next thing you would do is create an invoice then you would receive the money from the customer at some point and then you
would actually record that money and put it in the bank so follow the flowchart from the beginning to the end for every one of these and you'll have no problem with whatever you're doing now I do want to point out a couple things on the very right hand side of your screen first of all where it says account balances once you start setting up your accounts your checking accounts savings accounts credit cards all of those accounts in the chart of accounts you will see them here and you'll also see the balance and a quick way to go to one of those accounts is to click on it from here and open it up this section here really has to do more with things that Intuit can sell you for example they have merchant services accounts available if you accept credit cards if you want to order checks and supplies from them you can if you don't want to see this section here just hit this little arrow and it will kind of hide that section for you and then notice this backup status one of the things that happens with the QuickBooks desktop version is you will need to back it up on a regular basis and one of the ways to do that is to choose this backup now option another option you have is for a fee Intuit will back it up for you and you can sign up with that right here online by going to this link here but again if you don't want to see this just hit the little arrow and that will hide it so you don't have to keep looking at it that's a quick overview of the home screen itself you'll want to get very very familiar with it let's go ahead now and look at the last video in this section I want to talk to you before we leave about how to convert your QuickBooks desktop data to the online version if you ever decide that you want to actually move all of your information to the cloud so you can access it from anywhere in the world hey welcome back it's Cindy again we are wrapping up module two the getting started module we're actually on video number six now I want to show you how to convert your QuickBooks desktop data to the online version you might decide at some point that you want to try the online version because that way you could have access to your data wherever you happen to be as long as you have internet access you can actually convert that data now you can't go the other way you can't have the online version and then download it to QuickBooks desktop Intuit has to do that for you but it's pretty easy to go the other way so let's flip over to QuickBooks and I'll show you how to convert that desktop data to the online version the first thing I suggest you do is back up your company file if you want to jump ahead go down and look at module 17 it's video number 10 where I show you how to back up your company file once you've done that go ahead and click on file from your menu come down to utilities and then you'll see an option that says copy company file for QuickBooks Online QuickBooks does have to close all the windows before it exports your company file just go ahead and choose OK here and then you can save your file anywhere you like I'm going to save mine to this QuickBooks desktop data file folder I set up notice it's a .OE file go ahead and click on Save and now it's actually exporting your data and you can see that didn't take very long it says you've successfully exported your company file all you have to do at this point is click OK the next thing you need to do is go online and get a subscription for your QuickBooks Online there will be an option in there where you can actually upload your data file
that you just exported and that's all you have to do to convert your QuickBooks desktop data to the online version well that's going to wrap up module 2 getting started now we're going to go into module 3 and talk a little bit about customizing the QuickBooks environment we've made it all the way down to module 3 now and we're going to talk in this module about different ways to customize the QuickBooks environment I wanted to start off in this first video which is the preferences and talk to you a little bit about the options that you can change that make working with QuickBooks a little bit easier a lot of these options we're going to talk about in this preferences section are going to be options that you told it in the easy step interview you did or did not want to turn on or off there are two parts to this so make sure you watch both parts so that you get a full idea of all the preferences that are available for you to work with let's go ahead and start now with preferences part one when you first set up your company file you went through the easy step interview in that easy step interview it asked you a series of questions and based on how you answered those questions it turned icons on or off on your home screen for example if you told it that you do not estimate jobs you wouldn't have this estimates icon here you can turn most of these on or off in the preferences along with some other things you're going to see that will make QuickBooks a lot easier for you to use let's head on over to the preferences we're going to click on edit from the menu and come down to preferences you'll notice there are several options on the left that you can click on each of these will have a tab that says my preferences or company preferences when you click on them let's start at the top with accounting and we'll work our way down you'll notice under accounting there are no options under my preferences I'll go ahead and click on company preferences and there are a few here that you'll want to know about let's take a peek at this first one use account numbers this has to do with using general ledger numbers in your chart of accounts I'm going to cancel this for a moment and just go over to the chart of accounts right here just to show you what it currently looks like this is what we call your chart of accounts it's the most important part of QuickBooks we're going to spend more time talking about this in videos four five and six in this module but for now what I want you to notice is that there are different types of accounts and if you have a type that's the same like these fixed assets there are two of these then you'll notice the names of these two are in alphabetical order once you turn on the general ledger numbers they're going to be in numerical order I'm going to head back to edit and back to preferences under company preferences I'm going to check the box next to use account numbers and click OK and now you'll see next to each of these you have general ledger numbers you can edit these numbers but for now you'll see the generic ones that QuickBooks decided to use notice they're in numerical order based on the type I'm going to head back to edit and head back to preferences same place back under accounting I'm going back to the company preferences here's another one that you may want to use not every type of business will use this class feature but if you need it it's a really great option here are some examples of how this would work let's say that your business has three locations and you
would like to track your reports based on location you could have a class list set up that would list each of the locations and each time you create a new transaction you can choose the location that that transaction goes to and that's a great way to run a report on the whole company or if you want to use it to run a report just on those different locations you can do that as well if you start entering transactions that are 90 days in the past as far as the date is concerned or in the future you will see that it will pop up and warn you and if you don't want it to do that maybe you're entering a lot of things that are over 90 days old then you can just come in and uncheck these I did also want to mention this date through which the books are closed in real life accounting you close the books at the end of every month and you close the books at the end of the year if you had closed the books through the end of the month and you see a mistake in a prior closed period you would make an offsetting entry in the current period to adjust for that QuickBooks will not automatically close your books it's not going to warn you it's not going to say anything if you want to close the books you have to come here you would set a date and a password for example this might say December the 31st of 2021 and then of course you would have a password here what would happen is if someone went to make a change in that closed period it would pop up and say you've closed the books through such and such date do you still want to make that change and if you knew the password you could do that let's jump down on the left to the general option and I'm going to click on the my preferences tab this very first one I always suggest that you turn on it says pressing enter to move between the fields if you do not have this on what will happen is you will be working in a form let's say you're in an invoice for example and you're thinking that by hitting the enter key you're going to move to the next field what will actually happen is you will save and close and your screen will disappear and you'll wonder what happened if you turn this on you can use the enter key or the tab key to move between your fields there are several other options you might want to just take a peek at right up here I do want to mention this one here automatically recall last transaction for this name this is a good one because it will save you some work and keep your accounting consistent let's say that you had previously filled out a check for your electric company what would happen is the next time you went to write a check and you put in that vendor name it will automatically fill in all the information from the last time you used that vendor name and all you would really need to do is change the amount of money and save it which will save you a lot of time here's another one that you probably want to turn on use today's date as default if you don't turn this on then whatever the last date you entered on one of your transactions was it's going to use that same date when you go to the next transaction let's look now at items and inventory there are no options under my preferences I'll click on the company preferences tab if you remember one of the questions in the easy step interview asked if you actually track inventory if you had said no none of these options would be checked and they wouldn't show up on your home screen here by checking all of these you're going to see options for purchase orders and you'll also be able to enter invoices and you'll be able to
track inventory as well the next one down says jobs and estimates if you happen to work with different jobs in your business construction being a prime example then you may have different terminology for these you can go in and change that terminology if you want and here's the question about estimates do you create estimates yes or no and over here is the progress invoicing question if you create estimates you probably do want progress invoicing the next one down I want to mention is payments and you'll notice sometimes when you make a change that it will pop up and ask you if you want to save your changes I'm going to say yes here and sometimes it will have to close all the windows to make this change I'll just click OK and now we're on the payments option one of the things QuickBooks will do is when you receive a payment from a customer and you want to enter that payment you're going to see in that payment window a list of invoices you'll see the oldest on down and QuickBooks will automatically apply the payments to the oldest ones you may not want it to do that you may want to manually apply the payments to those invoices the other thing it will do is it will automatically put that payment in an account in your chart of accounts called undeposited funds if you don't want to do that then you can uncheck that here and we'll talk about that a little bit later
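If it helps to picture that automatic oldest-first application as a few lines of code, here is a minimal illustrative sketch in plain Python with made-up invoice dates and amounts; it is only a model of the behavior described above, not how QuickBooks itself works internally.

```python
from datetime import date

# Hypothetical open invoices for one customer: (invoice date, open balance).
open_invoices = [
    (date(2022, 3, 10), 400.00),
    (date(2022, 1, 5), 250.00),
    (date(2022, 2, 20), 300.00),
]

def apply_payment_oldest_first(invoices, payment):
    """Spread one payment across open invoices starting with the oldest."""
    applied = []
    remaining = payment
    for inv_date, balance in sorted(invoices):   # oldest invoice first
        portion = min(balance, remaining)
        if portion > 0:
            applied.append((inv_date, portion))
            remaining -= portion
    return applied, remaining

applied, leftover = apply_payment_oldest_first(open_invoices, 600.00)
for inv_date, amount in applied:
    print(f"{inv_date}: applied {amount:.2f}")
print(f"unapplied remainder: {leftover:.2f}")
```

Unchecking the automatic option in the preferences simply means you do this allocation by hand for each payment instead of letting QuickBooks pick the oldest invoices for you.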
you'll see here that your customers can pay you online you can actually send them an invoice you can send it through email and it will have a button where the customer can click to pay you automatically by credit card or bank transfer and you can actually set that up with Intuit also down here where it says payment reminders if you wanted to send reminders to your customers that they have payments that are due you can do that and it will prompt you at a certain time of day and also if you want it to prompt you daily or weekly you can set that up as well right here now let's go over to payroll and employees if you're going to be using payroll through Intuit you would have to actually set that up with Intuit it is not free it is subscription based but if you're going to use that here are a couple of preferences if you're going to use the full payroll option you'll click here and that will turn on some options on your home screen so that you can go ahead and set up some of your payroll if not you can leave it on no payroll you also have some options over here for pay stubs for workers comp and sick and vacation time down here at the bottom you can display your employee list by first name or last name and you'll see there's some other options here that you may want to just look at if you decide you're going to run your payroll through QuickBooks the next option I want to look at over on the left are your reminders notice I made a change in the payroll section so it asked me if I want to save those changes I will say yes here and then you'll see some options pop up about reminders this is a list of different items that QuickBooks can remind you of reminders do not show up automatically where they show up is when you first open the company file if you've told QuickBooks to remind you you will see a window that pops up that will have those reminders in that window once you close it again you don't see that until the next time you open QuickBooks you can see here that there are checks to print there's overdue invoices options to be reminded about inventory reorder money to deposit you can see the options there you can have QuickBooks show you in that window a summary of all the checks to print for example you might want an individual list of each of the checks to print or you can say don't remind me at all if you have told it to remind you of some of these options then you can go next to each one of these and tell it how many days before or after that you want it to remind you of that option so you just go through here and set all of your choices for reminders go ahead and stop the video right here I want to go ahead and look at part two and we'll continue talking about the remainder of these preferences hey there welcome back to QuickBooks desktop 2022 my name is Cindy and we are working through module 3 where we've been looking at different ways to customize the QuickBooks environment we just finished looking at the very first video which is part one of the preferences I want to go ahead and continue that now and just take up where we left off after part one if you got out of the preferences window you can go back to it by clicking on edit from the menu and coming down to preferences let's go ahead and start where we left off after part one we're going to actually click on the left where it says reports and graphs there's one thing in particular that I really want to point out here and that's under the company preferences tab when you run reports in QuickBooks they're automatically run on an accrual basis you can choose to run them on a cash basis now we'll talk more about reports in module 11 but just to tell you what this means when a report is run on an accrual basis that means when you're looking at your income that number would include any invoices you created that have not yet been paid also as far as expenses go if you've entered any bills that you haven't yet paid they would show up in this report as well as expenses versus if you use the cash option it's only going to show invoices you've actually received money for that you've been paid for and it's also only going to show expenses that you actually spent the money for already and that's going to be the difference you'll see when you're on a report you can also change it as a one-time thing there but just know that your reports are automatically run on an accrual basis you can see there are other options here as well that you might want to look at as far as reports are concerned
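If the accrual versus cash difference is easier to see with numbers, here is a small illustrative sketch in plain Python with made-up figures; it is not a QuickBooks report, just the idea: accrual-basis income counts every invoice whether or not it has been paid, while cash-basis income counts only the invoices that have actually been paid.

```python
# Hypothetical invoices for one period: (amount, has the customer paid it yet?).
invoices = [
    (1200.00, True),
    (800.00, False),   # created but not yet paid
    (500.00, True),
]

accrual_income = sum(amount for amount, _paid in invoices)      # every invoice counts
cash_income = sum(amount for amount, paid in invoices if paid)  # paid invoices only

print(f"accrual basis income: {accrual_income:.2f}")  # 2500.00
print(f"cash basis income:    {cash_income:.2f}")     # 1700.00
```

The same split applies on the expense side, which is why the two bases can paint very different pictures of the same period.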
let's go down to sales and customers on the left and talk about a few things here I'm going to go to the my preferences tab one thing I want to point out is there is an option to prompt for time/cost to add there's an option in QuickBooks that we're going to talk about in a later module that will let you take any expenses you've accumulated that might go towards a job you're working on and you can actually pull those expenses into an invoice and that way you can get reimbursed from your customer now if you want QuickBooks to prompt you for those you can choose this option here to do that under the company preferences tab there are a few things that you might want to look at for example if you create sales forms which we'll talk about in a later module you might have a particular shipping method that you like to use automatically you can choose options like that let's go look at sales tax if your company collects sales tax from your customers you'll want to set up those sales tax items now you can do that here but we're going to talk later when we look at the items list at how to set them up there because you'll have several different ones you'll want to set up and you need to make sure you do it the correct way but here you can tell QuickBooks that you do charge sales tax or you don't and then you can also choose your most common sales tax if you've already set it up if you haven't set it up yet then you're going to need to wait until you get it set up before you can choose your most common sales tax here this little section here what will happen is if you had items that are taxable and some that are non-taxable you're going to see that you can have this code right here that says tax or you can create a new one and the same thing for non-taxable I usually just leave those but some people prefer to have different wording for those and you can set that up also items that are taxable will have a T all the way to the right of that line item when we get into the invoicing section you'll see more about how that works let's go down on the left and look real quick at send forms you have the ability in QuickBooks to send a form and an example of that would be an invoice you create an invoice and instead of printing it out and mailing it you can actually send it right from QuickBooks if you do that there's going to be a default template that goes with that you can see they've got one already set up here called basic invoice I'll just click edit to show you what that looks like you can see here it would have the customer's name it's going to have the invoice number thank you for your business and if you want to edit this you can do that right here so that it pulls the new template I'm going to go ahead and cancel that you can also add a new one if you want in this particular window you'll see it's not just for invoices there are other forms as well there's credit memos there are pay stubs and if you want to edit any of those or create a new one you can do it right from here I'm going to scroll down a little bit on the left here and we see a couple more I want to look at spelling if you've made a change to your send forms preferences you'll want to go ahead and say yes here I didn't make any changes but I'll go ahead and say yes anyway now under spelling here on the left I'm going back to the my preferences tab you'll see that it will always check your spelling before it prints or sends any forms it's a very good idea to spell check any forms that you send out so that you look professional but if you don't want that you can uncheck that there are also some options down here for 1099s you'll want to click on the company preferences tab here's where it just asks you do you file 1099 forms yes or no and then one last thing I'll mention on the left under time and expenses you have an option in QuickBooks to track your time now there are a couple of different ways that can be used it could be if you're in a business where you invoice your customers based on the time you spend working on their projects then you can turn on a timer in QuickBooks and have it track that time for you and then pull it into an invoice it could be that you're just trying to track the time that your subcontractors worked on a job or your employees so you could turn that option on here if it's not on and also look at the first day of your work week it's currently set to Monday but you can choose Sunday through Saturday those are really the most important options that I think you're going to work with here in QuickBooks you might want to take time and look through the other ones just go through it anytime and go ahead and make these changes and once you've made those go ahead and click OK and now you've finished the
preferences now that we've looked at all of the preferences in QuickBooks let's go ahead and move over to video number three and talk about how to work with different users in QuickBooks hey there welcome back it is Cindy again we are working in QuickBooks desktop version 2022 this is video three of module three I want to talk to you a little bit in this video about how to work with the users option in QuickBooks when you open QuickBooks if you'd like to have that user type in a username and password you would need to set up those users in order to be able to do that this is not something you have to do in QuickBooks but it is very highly recommended setting up users will allow you to set up different permissions for different people who use QuickBooks you can actually set it up so they can access certain areas and maybe not access others let's go ahead and flip over to QuickBooks and I'll show you how to set up those users you do not have to set up users in QuickBooks but it's very highly suggested when you set up users that means that when you double click to open your company file each time it will ask whoever's using it to put in their username and their password in order to allow them access to your company file you do have to be the administrator in order to actually work with the users make sure that administrator is logged in what you'll want to do is go to the menu and click on company and then you'll see an option that says set up users and passwords and in the sub menu choose set up users if you did have a password set for the administrator maybe you set it up when you set up your company file through the easy step wizard it would ask you to go ahead and log in and then you just click OK and now you'll see the user list here you can see the admin is logged in you can add up to five users in QuickBooks each one will have a different login as the administrator you'll want to be the one to set up these users so that you can write down their username and password and if that user leaves the company go ahead and delete them over here or if you want to edit the user that was there previously you could do that but make sure you have the current user set up let's say that we've hired a new employee and that employee is going to come in a couple days a week and just pay the bills and we just want to give them access to those areas I'm going to click on add user and we'll put in the user's name let's say the name is Carol and we'll set up a password for Carol you'll want to make sure you type it in twice and then you'll want to click next the first thing QuickBooks asks is what do you want this user to have access to notice you can give Carol access to all areas of QuickBooks or selected areas which we're going to choose and I also want to point out this last option for the external accountant if you have an accountant that you want to have access to your QuickBooks file you can give them their own username and password they won't have access to what we call sensitive customer data credit card numbers things like that I'm going to choose selected areas and click next the first area it asks about is sales and accounts receivable now this has to do with invoicing our customers I'm going to leave that on no access and click next this next question asks about purchases and accounts payable this is what I hired Carol to do I'm going to give her full access notice I could give her selected access where she can only do these three items that you see here but I'll go ahead and give her full
access to accounts payable the next option is checking and credit cards I don't want her to go in and make deposits and enter credit card charges things like that so I'll just say no access this screen asks about inventory I'm not going to give her access to inventory not going to give her access to time tracking or payroll this option asks about sensitive accounting activities those are things like do you want Carol to be able to make journal entries in QuickBooks do you want her to have access to online banking functions again I'm going to say no and sensitive financial reporting those are the reports that go with that previous screen now it asks me do I want Carol to have the option to change or delete transactions I usually leave this where it defaults the first option asks do I want Carol to have the ability to change or delete transactions in the areas she has access to of course what if she enters a bill twice and needs to delete one of them this option says do you want Carol to be able to change or delete transactions recorded before the closing date if you remember in the preferences there was an option to close the books if you use that function you may not want Carol to go in and make changes in those closed periods this last option just gives me a summary of all of the options I chose I'm going to hit finish and now you're going to see that our new user Carol has appeared on this list if I ever want to edit her I can come over to edit or delete her I'm going to close this window I want to show you how this option actually works each time you are finished using QuickBooks you want to go ahead and log out you want to click on file and then go down to where it says close company and log off if you do not do this you're still logged in and the next person that comes in will be logged in as you sometimes QuickBooks will ask you if you want to back up your data we'll talk about that a little bit later I'll just say no here and that's actually going to take me back to this screen that you're going to be very familiar with where you're actually able to open an existing file our new user is coming in to work the new user is going to double click on the company file ABC Plumbing in this case and it's going to bring up a window where Carol has to put in her name and her password notice it has the last username that was logged in I'm going to put in Carol and I'm going to go down and put in her password now Carol has access to this company file you might notice that the icon bar is back on the left hand side again any preferences that are changed are per user now I want to show you how QuickBooks will allow Carol to access areas she should be in and not allow her into other areas we gave Carol permission to work with bills so she will definitely be able to enter bills like this okay let me close that I did not give her access to work with accounts receivable that's this area down here I'm going to click on create invoices and notice it gives me a warning and says that you need sales and accounts receivable permission to perform this action I'm going to go ahead and click OK and that's how the users work in QuickBooks let me go ahead and log off here I'm going to close company log off and I'll log back in as the admin again and go ahead and put in the admin's password I had set up see how it says Carol I'm going to put in admin and I'll put in that password again and go ahead and click OK and now I will be back in QuickBooks and I'll be able to access everything because I am the admin that's how you're going to set up users in QuickBooks remember you have to be logged in as the administrator in order to make any changes to the users
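As a rough mental model of what those per-area permissions are doing, here is a tiny illustrative sketch in plain Python; the area names and the check are invented for illustration and are not how QuickBooks stores permissions internally.

```python
# Hypothetical permission sets per user; the admin can reach every area.
permissions = {
    "admin": {"all"},
    "Carol": {"purchases_and_accounts_payable"},  # the only area we granted Carol above
}

def can_access(user, area):
    """Return True if the user is allowed to open the given area."""
    allowed = permissions.get(user, set())
    return "all" in allowed or area in allowed

print(can_access("Carol", "purchases_and_accounts_payable"))  # True  - entering bills
print(can_access("Carol", "sales_and_accounts_receivable"))   # False - creating invoices
print(can_access("admin", "sales_and_accounts_receivable"))   # True
```

That False case is exactly the warning Carol saw when she clicked create invoices.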
let's go ahead now and wrap up this video we're still in module three I want to go down now to video number four we're going to talk about working with the chart of accounts in the next few videos video number four is called what is the chart of accounts hey there welcome back to QuickBooks desktop 2022 my name is Cindy and we are walking through module 3 customizing the QuickBooks environment we're all the way down now to video number four where I want to talk to you about the chart of accounts the chart of accounts is the most important part of QuickBooks it's very important that it's set up correctly if it's not then you're going to run reports and the data will not be accurate and you'll wonder why let's go ahead and flip over to QuickBooks and we'll start talking about how that chart of accounts works the easiest way to access the chart of accounts is from the home screen you'll see an icon right here that says chart of accounts this is a listing of all the different accounts that are in this company file this list was created based on how you answered the questions when you set up the company file you might look at a different company file that has some of the same accounts but it might have some different accounts and both are correct initially you'll want to go through this list and add any accounts that you know you're going to want to use you will want to delete ones you know you'll never use and if you need to edit the name of some of these you can do that as well it's very important when you set up this list that you've got the types set up correctly over in this column and we're going to go through each one of the types before we do that I want to point out that currently the list is set up by type and the accounts are set up alphabetically so there are two fixed asset accounts and you can see we've got accumulated depreciation and furniture and equipment there you'll notice also that the general ledger numbers are not on automatically and this is a preference you do not have to use them but if you want to here's how you turn them on if you go up to edit on the menu come down to preferences make sure on the left you're clicked on accounting and choose the company preferences tab in here is a check box that says use account numbers you'll want to make sure that's checked and then just click OK and now you'll see you have general ledger numbers over on the left hand side these can be edited if you want to use them but want a different number you could just edit these just right click and choose this edit option and that will let you go in and edit any of the information that you have set up as far as this account is concerned I want to go through the different types with you now just to make sure you've got those set up correctly the first thing you'll notice is we don't have any bank accounts here they're usually set up as the first type here if I wanted to actually enter a debit card transaction I couldn't do it because I don't have a bank account set up yet let's go ahead and add a bank account you can just right click anywhere in this list to access this new option the first thing QuickBooks will ask you is what type of account is this this is a bank account and notice it gives you a couple of examples of what a bank account would be bank accounts are any checking accounts that the company has any savings accounts any money market accounts
if you have PayPal you would want to set that up as a bank account if you have a lot of cash expenditures for your business you would want to set one up to have a place to put those those are examples of bank accounts I'm going to click continue at the bottom and the first thing you'll notice is that if I happen to have chosen the wrong account type on the previous screen I can go here and actually edit that now the next thing you'll notice is a place to put the general ledger number I'm just going to add one here and then you put in the name of the account now you can name your accounts anything you like I'm going to call this one checking often I'll see them referred to as operating account or payroll account you might have your bank name in there as long as you know that it's a checking account then you're good this is not a sub account of another one we'll talk about that in a little while you don't really need a description unless you want to put something in there someone would have to be on this exact screen to see the description the bank account number and the routing number and I wouldn't put the bank account number or the routing number in here QuickBooks does not need it what you would need to do though is enter the opening balance as of the start date of your company file what was the balance in this checking account I'm going to say that I had fifteen hundred dollars and I'm going to put the statement ending date as January the 1st of 2022 in this case there's also an option here that says remind me to order checks when I reach a certain check number it will just pop up there and let you know it's time to order checks and ask you if you'd like to order them from Intuit I'm going to hit save and close and now you're going to see that we have a bank account set up called checking if you get this message they're asking you if you want to set up your bank feeds we'll talk about that in a later module for now I'm just going to say no and it will close that window you'll notice now that you have a bank account and you have fifteen hundred dollars in that account every transaction has a debit and a credit the good thing for you is you don't have to know the flip side of the transaction because QuickBooks did that for you notice there's an account here called opening balance equity and it now has fifteen hundred dollars that's the flip side of your transaction
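That flip side is plain double-entry bookkeeping: every entry keeps total debits equal to total credits. Here is a minimal illustrative sketch in plain Python of the posting QuickBooks just made for the fifteen hundred dollar opening balance; it is only a model of the idea, with credits written as negative numbers for convenience.

```python
# Opening the checking account with a $1,500 balance records two equal and
# opposite postings: a debit to Checking and a credit to Opening Balance Equity.
# Credits are written as negative numbers purely for this illustration.
entry = [
    ("Checking", +1500.00),               # debit: the asset goes up
    ("Opening Balance Equity", -1500.00), # credit: the flip side QuickBooks adds for you
]

assert sum(amount for _account, amount in entry) == 0  # debits always equal credits
for account, amount in entry:
    print(f"{account:>22}: {amount:>10.2f}")
```

Every opening balance you enter in this module gets the same treatment, which is why the opening balance equity account keeps changing as you add accounts.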
let's go ahead and set up another one I'm going to right click and choose new I'm going to make this a bank account again this will be a savings account this time I've got bank account here I'm going to say savings and I'll go ahead and open that with a five thousand dollar balance as of the first of 2022 you can also click this calendar if you want to pick from this list I'm going to click OK and then save and close and now you'll notice that I have five thousand dollars back here in a savings account now notice something that happened here you'll notice that my checking account had a general ledger number but my savings account did not I want to edit that I'll just right click on savings and go ahead and edit that account notice it takes me right back to the screen and I'll go ahead and put in 11000 and go ahead and save and close and now you'll see that I've got my checking and my savings and they're both bank accounts with the balance in each notice that the opening balance equity account now has sixty five hundred dollars if you had a loan where you owe the money then this would be a negative number possibly and don't freak out when you see that because it's an accurate picture of your books the next type of account I want to mention are your asset accounts right here an asset is something that your company owns that makes it more valuable you might have furniture desk lamps you might have inventory in the back those assets can fall into one of two categories you can have fixed assets like you see here for furniture and fixtures or you can have what QuickBooks calls other current assets other current assets are assets that you have right now that make the company more valuable but your goal is to sell those and get them out the door they're more liquid that's an other current asset this is the one place where your accountant is going to be very helpful because you're not really going to know what numbers to plug into your asset accounts when you set up your asset accounts you want to have maybe seven to ten you don't want to have a ton of asset accounts and they're going to be more generic you're going to have asset accounts like furniture and fixtures that you see here you're going to have one that might be called vehicles you might have one that's called equipment those are just big buckets that you're going to be able to plug those numbers that relate to assets into and this is where your accountant will become very helpful because he or she will help you figure out which accounts to set up and which numbers to plug into those accounts the next one I want to point out here is one that says accounts payable accounts payable is an account where all of the bills you've entered will show up if they're not paid they'll show up in the accounts payable balance you see right over here once you've entered a bill and you've paid that bill it will not be part of your balance anymore it's not considered accounts payable you'll also have one called accounts receivable that will show up down here once you create your first invoice invoices that have not been paid will show up in the balance of the accounts receivable account once they've been paid they will not show up in that balance anymore and we'll see some of those as we move into one of our sample files in a later module the next thing I want to point out are your liabilities a liability is something that the business owes like a loan for example each loan or liability should be set up separately in accounting you have short-term liabilities and long-term liabilities a short-term liability is something you're going to pay off in 12 or 13 months and a long-term liability is something that you're going to pay off long term maybe five years down the road let me show you how to set up a loan I'm going to right click and choose new you'll notice that
QuickBooks has the loan option here and this is a short-term liability if you're looking for the long-term liability option they've got it listed here under other account types I'll just choose long-term liability and click continue the first thing I want to do is make sure I have the name of my account set up and again you can name these anything you want I'm just going to say car loan it's not a sub-account of another if I want a description I could add one maybe the make and model of the car if I want to add the account number I could but QuickBooks doesn't need it I do need an opening balance however as of the start date of my company file how much did I owe on that loan let's just say it was twelve thousand dollars and that would be as of the first of this year and I'm going to click OK and then save and close now you'll see that I've got my long-term liability my car loan and I owe twelve thousand dollars on that loan now you'll notice again I don't have a general ledger number so I'm just going to edit that I'm going to right click and choose edit and put in a general ledger number now I can click save and close and you'll see there it is and again I owe twelve thousand dollars on that loan now something super important to know when you make a payment on the vehicle always always always the principal amount goes back to this account you should not set up a car loan as an expense in this list it's not an expense to the business it is a liability you'll notice also that the opening balance equity now has a negative number like I mentioned earlier that's because you owe more than the actual equity in the business right now that's an actual accurate picture of what your books look like you can't change that you want to go through and set up each loan separately so that you can track the balance you owe at any given time
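To see why opening balance equity flips negative here, a quick bit of illustrative arithmetic using the figures from this example, written as plain Python; it is just the running total, not anything pulled from QuickBooks.

```python
# Opening balances entered so far: assets are positive, the loan is money owed.
opening_balances = {
    "Checking": +1500.00,   # bank account (asset)
    "Savings":  +5000.00,   # bank account (asset)
    "Car Loan": -12000.00,  # long-term liability
}

opening_balance_equity = sum(opening_balances.values())
print(f"Opening Balance Equity: {opening_balance_equity:.2f}")  # -5500.00
```

The two bank accounts pushed opening balance equity up to sixty five hundred dollars, and the twelve thousand dollar loan pulls it below zero, which is the accurate picture the video is describing, so a negative number here is nothing to fix.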
let's go over to video number five working with the chart of accounts part two hey there welcome back it's Cindy again we're still working in QuickBooks desktop 2022 we're going to go ahead and finish talking about the chart of accounts this is video number five in module three where we're customizing the QuickBooks environment we're back in the chart of accounts here when we left off in part one we had just set up a long-term liability we had set up a car loan remember a long-term liability is something you owe and you're going to pay on it long term the other type of liability is a short-term or other current as QuickBooks calls them and you'll see an example of one right here sales tax payable when you collect sales tax you are actually going to have to forward that money to the appropriate entity typically that's done once a month it could be more but it's on a short-term basis and that's why those are set up as other current liabilities if you do collect sales tax you won't have to set this account up because QuickBooks will set it up for you automatically and the balance that you see here will be the balance that has accumulated from all the invoices you've created and you haven't forwarded those sales tax payments yet let's talk about equity accounts you'll see there are a few here already set up here's your opening balance equity we've mentioned a few times but a really popular one that you'll want to think about is owner equity think about the word equity meaning equal if you own the company and you decide to take one hundred dollars out that would be an owner draw or owner equity as it's called if you decide to put that same one hundred dollars into the business that's called an owner contribution if you're not incorporated the way you would pay yourself as a small business owner is by taking draws once you incorporate yourself you want to think about setting up actual payroll where you're deducting taxes but I want to go ahead and set one up for you just to show you how you would set up an equity account and also this will show you how to set up sub-accounts we're going to set up one for the owner draws and one for the owner contributions the first thing you want to do is right click and choose new and you'll want to set up an equity account for the main account which we're going to call owner all you have to do is put in the account name you really don't need any of this other information right now there's probably not going to be an opening balance instead of me saving and closing and then having to create another one I'm going to choose save and new you can see this took me to a blank screen it already has equity filled in up here now what I'm going to do is create an account for owner draws I'm just going to call this one draws but notice it will be a sub-account of in this case owner and I'm going to click save and new and create one more I'm going to create the contributions account remember this is when you put money into the business as a small business owner and this will be a sub account again of owner now I'm going to save and close and you can see all three of these are right here and you'll notice how contributions and draws look like they're indented underneath owner that's because they're sub-accounts QuickBooks will let you have as many levels as you need typically if you need more than three you're going to get yourself confused so stay with two or three and you'll be fine you'll sometimes see other terminology for these sometimes you will see the contributions referred to as shareholder equity there are many different terms that you might see for that so just set that up however you'd like to see it on reports
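If it helps to picture this list as data, here is a small illustrative sketch in plain Python showing each account as a type a general ledger number and an optional parent, which is all a sub-account really is; apart from the 11000 savings number from the walkthrough above, the account numbers here are made up for the example.

```python
# Each account: (general ledger number, name, account type, parent or None).
# A sub-account is just an account that names another account as its parent.
chart_of_accounts = [
    (10000, "Checking",      "Bank",   None),
    (11000, "Savings",       "Bank",   None),
    (30000, "Owner",         "Equity", None),
    (30100, "Draws",         "Equity", "Owner"),   # sub-account of Owner
    (30200, "Contributions", "Equity", "Owner"),   # sub-account of Owner
]

# Print the list the way the chart of accounts window shows it:
# in numerical order, with sub-accounts indented under their parent.
for number, name, account_type, parent in sorted(chart_of_accounts):
    indent = "    " if parent else ""
    print(f"{number}  {indent}{name}  ({account_type})")
```

Keeping the tree to two or three levels, as suggested above, keeps a listing like this easy to read on reports.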
the next type I want to talk to you about are your income accounts when you actually make a sale in your business that's considered income you can have just one income account if you like or you can have multiple the next one I want to talk about is a cost of goods sold type anytime that you have to spend money to buy product or buy a service to make a product or service for your business that's considered a cost of goods sold some examples would be if you buy a lot of materials for your job you might have one called job materials set up if you use subcontractors you might have one set up for subcontractors let's set up a couple of these so you can see how this works I'm going to right click and choose new you're going to have to go to where it says other account types and drop the list down and from here I'm going to choose cost of goods sold then I'm going to hit continue and I'm just going to call this one job materials and this would be any time that you have to buy any type of material at all to create that product or service for your business I'm going to hit save and close and you can see now it says job materials if I wanted to go back and add a general ledger number I could do that I could just right click and edit and I'm going to save and close and now you can see it has a general ledger number let's talk a little bit about setting up one for subcontractors if you have a lot of subcontractors you deal with go ahead and set up the main account called subcontractors and then the sub-accounts can be the different types if you want to break it down that way I'm going to go ahead and right click here I'm going to go back to the other account types and choose cost of goods sold and hit continue I'll call this one subcontractors and then I'll go ahead and click save and close and you can see it's on the list here now let me go ahead and edit that one and give it a general ledger number and then we'll add some sub-accounts below that now let's go ahead and add a couple of sub-accounts I'm going to right click and choose new I'm going to choose cost of goods sold I want to call this first one HVAC and we're going to say it's a sub-account of and in this case it's going to be subcontractors now I'm going to hit save and close and you can see how it shows up underneath subcontractors I don't have a general ledger number again so I'm going to right click and add one here and then I'm going to save and close and you can see that it's indented underneath subcontractors indicating that it is a sub-account the next type you're going to see are your expenses down here at the bottom and you can see there are quite a few expenses here I'm going to go ahead and stop the video and let's go ahead and move over to part three and we will continue setting up your chart of accounts hey it's Cindy again welcome back to QuickBooks desktop 2022 we've been working in module 3 where we're talking about customizing the QuickBooks environment this is going to be part three of working with the chart of accounts you want to make sure you watch part one and two and then you can go on and watch part three to make sure you have a full understanding of how to set up this entire chart of accounts let's go ahead and flip over to QuickBooks and we'll keep going with the chart of accounts we're back in the chart of accounts now and I want to take a look at your expense accounts here you're going to add quite a few expense accounts to this list you'll probably also add quite a few sub-accounts let's just look at a few of these just so
you get an idea of what types of accounts are here and which ones you might want to add the first one you'll see here is advertising and promotion this is typically where you're going to add any advertising that you do whether it's social media advertising whether you do print maybe you do some TV advertising and if you want to have sub accounts below to break that out you could do that I want to point out automobile expense right now you could dump your gas in here you might actually add any repairs and maintenance to your vehicles but if you wanted to run a report on gas you wouldn't be able to do that because you have it all in one account let me just add a couple of sub-accounts here just again to refresh your memory on how to do this I'm going to right click on automobile expense and I'm going to add a new account when you add a sub account it has to be the same type as the main account so this will be an expense I'll hit continue and I'll just call this one gas you want to make sure that it is a sub account of in this case automobile expense if you want to add your general ledger number make sure you do that here and then go ahead and save and close and now you'll see gas is a sub-account under automobile expense next you'll see bank service charges typically this is where you're going to put your $14 a month fee to have the account you're going to put NSF fees maybe you wire some money and have a fee for that you might
also put your PayPal charges in here looking down this list you've got your business licenses and permits charitable contributions computer and internet expense this is where you're going to put your monthly cable bill for the internet or if you have to have the computer fixed you might put that expense here let's say that you don't use continuing education all you have to do is right click on that account and just delete it if you've used this account before it will not let you delete it and it will give you a message telling you that if you see this one that says are you sure you want to delete that means you've never used it and you can click OK and go on here's insurance expense some people put their automobile insurance under this one some people put it back under automobile expense it's not wrong either way it's just how would you like to see it when you run a report looking down this list you'll see interest expense there are two that you'll want to add underneath this one you'll want to add loan interest and credit card finance charges let me just add those two so that you can see these here I'll just right click and choose new this will be an expense I'll hit continue and the first one I'll add is finance charges and this is going to be finance charges related to credit cards typically I'll go ahead and make that a sub-account of interest expense and I'm going to add the other one here so I'll click save and new and this one here will say loan interest any loan interest will go to this account now I'm going to hit save and close and you'll see I've added those there and you can go back and add a general ledger number if you want I want to talk to you next about payroll expenses quite often I will see that businesses actually outsource their payroll usually when that happens you're going to want to add three accounts as sub-accounts underneath payroll expenses you're going to add one for the admin fee you're going to add one for the total of the net checks and another for the total of the net taxes you'll see postage and delivery and then professional fees typically under professional fees you'll see one for the accountant and you'll see another sub-account for attorney's fees looking down this list I want to point out that there is utilities here and telephone expense right above it here if you wanted to move telephone expense under utilities all you have to do is right click on telephone expense edit that account and make it a sub account of utilities you'll also see one at the bottom that says ask my accountant if you ever have a situation where you don't know where to put one of the lines on the transaction you're entering then you can go ahead and add it to this ask my accountant account and later you can go back and move it this gives you a really good idea of how to set up this chart of accounts remember it's super important that it's set up correctly from the beginning because if you don't have the correct type then what will happen is you'll run reports and they will not be accurate let's go ahead now and move over to video number seven and we're going to talk real quick about some of the different sample files that are in QuickBooks if you happen to be working in one of your company files you'll want to go ahead and close that company just go up to file on the menu and then come down to the close company/log off option when you're on this window you'll notice there's an option here to open a sample file the down arrow will give you a list of the sample files that are loaded into your
version of QuickBooks you can see there’s a product based business which is a construction company and there’s a service based business which is a landscaping company you probably want to use the product based business because here you can see everything the way it should be set up the proper way you’ll be able to go in and actually see how things were done and anytime you’re using the sample file it will just give you this message and you can see that it’s letting me know that once I get in the date will be set to December 15th of 2023. all you do is Click OK and then you’ll be inside the sample file you’ll be able to look up anything you’d like if you’re having an issue setting up an account in the chart of accounts for example just go over to the chart of accounts and look and see how it was set up here and set Yours up the exact same way well I just want you to know where those two files are they’re going to be very helpful for you when you have questions about different things as far as setting up your company file well let’s go ahead and move over to video number eight and I want to quickly just talk to you about how to use the company file search option in QuickBooks hey there welcome back we are wrapping up module three we’re here in video number eight we’re going to talk a little bit about how to use the company file search option in QuickBooks for desktop 2022. there are many ways to search in QuickBooks but if you just need to search quickly for something there’s a search box you can just type things into and it will pull up any data that meets your search criteria let’s flip over to QuickBooks and I’ll show you where that file search option happens to be we’re here in one of the sample files this is the sample product based business the first thing you’ll notice is your icon bar is still on the left hand side I wanted to point this out because we’re going to talk about a search box that should appear right up here that way if you’re looking for an amount of money or a name you can just type it in but since there’s not one let me show you how to actually get it to show up if you remember we can actually put the icon bar at the top I’ll click on view from the menu and then choose top icon bar once you do that you’ll see over here now we have a search box let’s say I want to search for a payment for two thousand dollars I can just quickly put in 2000 here and then once I go ahead and hit the enter key you’ll see it pulled up this search history here and you’ll see that it’s got exactly and it’s looked up everything for two thousand dollars so if I happen to be here and I see that I’m looking for this particular payment down here let’s say this one and I want to open it up I can just click open here and go right to that particular transaction in this case it’s a payment while we have the search box open let me just show you a couple of things you’ll see over on the left here that we’re looking through transactions there were 13 that met my criteria I could actually click that Arrow that’s to the left of transactions and it will show me that three of those happen to be bills five of those happen to be payments you can see that list I could be looking just through customers or just through vendors you can see when I click on these that I can put in some criteria down in this section right here where it says amount I had typed in 2000 so it was looking for an exact match but I could be looking for anything greater than 2000 or less than or a range which allows me to put in two 
All of these options make searching really easy in QuickBooks. Remember that it starts with this search box here; you can put text in here too if you'd like — it doesn't have to be just numbers — and it will search for any text you've typed in. There are other ways to search in QuickBooks; we'll talk about a lot of those down in module 17, but for now I just wanted you to see that quick way to search for something in QuickBooks. Well, that's going to wrap up module 3, where we talked about customizing the QuickBooks environment. Let's go ahead and move over to module 4, and we'll get started with the first video, where we're going to talk about working with customers and jobs. Hey there, welcome back — it is Cindy. We're working in QuickBooks Desktop 2022, and we've made it all the way down now to module 4. This is the module where we're going to talk about working with customers and jobs — you can actually call this accounts receivable. We're going to talk about setting up customers in this module, invoicing, estimating, receiving payments; anything having to do with accounts receivable, we're going to cover in this module. This is the first video, where I want to start helping you set up your customers and your jobs. There are two parts, so make sure you watch part one and part two so that you have your customers and jobs set up correctly. Let's flip over to QuickBooks and we'll get started working with those customers and jobs. A customer is a person or a business that buys from you, and a customer might have different jobs that you're working on for them as well. I want to take you into the Customer Center and show you where the list of customers and jobs is and how to work with that list. Before we do that, remember that this section here is your accounts receivable section, and this is where you're going to do most everything related to customers. The easiest way to access the Customer Center is to click right here, but before I do that, let me just point out that you can also access the Customer Center from your icon bar, or you can go up to Customers on the menu and choose the first option, the Customer Center — and I'll go ahead and do that. This is a listing of all of the customers and jobs that you have already set up in QuickBooks. Before we get started looking at the list, let me tell you a little bit about how this screen is actually set up. You'll notice here on the left you're seeing every one of your customers, last name comma first name, and they're in alphabetical order. The ones that look indented — those are jobs for that customer. This is the balance the customer owes you, and this column over here would allow you to create an attachment and actually attach a file to that customer; that way you don't have to get all the way out of QuickBooks if you're looking for something specific that you've already attached. If you're clicked on a customer, you'll see that there's information right over in this section — you can see all of their information as far as their address, their phone number, and their email — and over on the right you're going to see some reports for this customer or job. This section here allows you to look at different transactions for that customer; maybe you want to look at contacts, to-dos, notes, or send emails. We're going to go through all of this, but I want you to get familiar with the screen itself. Let's go ahead and add a customer, so that you can see all of the information needed when you're creating customers and jobs.
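Before we do, if it helps to picture how that list is organized — customers sorted last name first, jobs indented underneath, and an open balance per customer — here is a tiny sketch with invented names and balances (illustration only, not QuickBooks data).

```python
# Illustration only: the Customer Center list as customers with indented jobs and
# an open balance per customer. Names and balances are invented sample data.
customers = {
    "Abercrombie, Christy": {"balance": 3111.28, "jobs": ["Kitchen", "Family Room"]},
    "Allen, Tom":           {"balance": 0.00,    "jobs": ["Sunroom"]},
}

for name in sorted(customers):                    # last name, first name, alphabetical
    info = customers[name]
    print(f"{name}  (balance: {info['balance']:.2f})")
    for job in info["jobs"]:
        print(f"    {job}")                        # jobs show up indented under the customer
```

Now let's add that customer.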
The easiest way to do this: you'll see an option right here that says New Customer & Job, and when you click the down arrow you'll see three choices, the first one being to add a new customer. The first thing you need to tell QuickBooks is the name of the customer. If you remember, the customer list was alphabetical, last name comma first name. It's not going to alphabetize them automatically — you'll need to actually type in their last name, then a comma, then the first name; then it will sort by the first letter of the last name. The next field you see says Opening Balance. What this means is: how much money did the customer owe you as of the start date of your company file? You might want to leave that blank. If you're just starting your company file new, what you might want to do instead is actually enter the invoices that aren't yet paid for that customer. If you just have a balance here, your accounting will be correct, but you won't have any way of going back and looking at one of those invoices that's still open. Notice that I'm on the Address Info tab, and the first thing it asks me is the name of the company. If Tom Allen works for a company, you can type it in; if not, you can leave this blank. I'm going to say he works for ABC Plumbing. Notice there is a place for the full name — you can say Mr., Tom, and of course you've got the middle initial and last name. A common question is: if I have the name here, why do I need to put it in here too? This field right here puts the customer's name in that list we saw right over here — remember, it's going to put it in that list. However, if I wanted to do a mail merge, it would pull from these fields right here, so it's important to have these filled in. You can see that you can fill in the job title, and all of these fields down here — you can actually edit each field to represent any of the options you see on the drop-down. Now, here's something really important: you'll see that when I typed in ABC Plumbing and Tom Allen, it populated this right here — but this is the actual address block that's going on the customer's invoice, so you may want to click in there and set it up the way you'd like it to be. I'm going to say Attention: Tom Allen, and then I'm going to put in a P.O. Box here — I'll put in the street address, and we'll just use a P.O. Box in this case — then I can hit the Enter key and put in the city, state, and ZIP, and we'll say this person is in Charlotte, North Carolina. If you wanted to have a separate shipping address — maybe you like things for this customer shipped to the job site, as an example — you could type that in over here, or you can just copy this address to this one; but if you don't ship anything, you really don't need to fill in this section over here. Now let's go up to our tabs on the left and click on Payment Settings. The first thing you'll see on the Payment Settings tab is a field for the account number. If you give your customers account numbers, you can fill this in; if you don't, you can leave it blank. The next one you'll see is for payment terms. You might give different payment terms to different customers — maybe if it's a new customer you give them terms of Due on receipt, and maybe if it's someone you know well, you might give them Net 30.
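Those terms are just a day offset from the invoice date. Here's a tiny sketch of that arithmetic with an assumed terms table (plain Python, illustration only — not how QuickBooks stores terms internally).

```python
# How common payment terms translate into a due date (illustration only).
from datetime import date, timedelta

TERMS_DAYS = {"Due on receipt": 0, "Net 15": 15, "Net 30": 30, "Net 60": 60}

def due_date(invoice_date: date, terms: str) -> date:
    """Return the invoice date shifted by the number of days the terms imply."""
    return invoice_date + timedelta(days=TERMS_DAYS[terms])

print(due_date(date(2023, 12, 15), "Net 30"))   # -> 2024-01-14
```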
We'll just add Net 30 in this case. The next field asks you for their preferred delivery method — do they prefer to have things emailed to them, mailed, or neither? — and whether there is a preferred payment method. Now, this doesn't mean that they're going to pay you with their Visa card every time; it's just giving you information, letting you know that they usually prefer to pay with their credit card. You'll notice there is a place down here to store your customer's credit card information. I probably wouldn't do that, because if someone gets into your computer, you are liable for that as a business. If you've got someone who wants you to keep their credit card number on file, just write it down somewhere and keep it safe so that no one can get access to it. You'll see over on the right there's a field for a credit limit. What would happen is, if you set a credit limit for your customer and your invoices exceed that credit limit, it will pop up and ask whether you'd still like to sell them something, and then you can just bypass that. There's also a field for price levels. If you decide that all of your commercial customers should get a 10% discount, as an example, you can set that up. I'm just going to click on Add New for a moment and show you what the screen looks like. The first thing you would do is put in a price level name — Commercial, for example — and then you can tell QuickBooks that every time you invoice a commercial customer, their price will either increase or decrease by whatever percentage you type in. We've got Commercial already set up back here, so I'll cancel that, but that's how it would work, and it would automatically give them that discount. You can also set up online payments for your customers. If you want to have a button they can click once you email the invoice to them, that button would allow them to actually pay you with either a bank transfer or a credit card, and you could set that up with Intuit. Let's go over to the Sales Tax Settings tab. If you charge sales tax to your customers, you can specify whether this customer is non-taxable or taxable — a non-profit organization, for example, would be non-taxable, where a regular customer would be charged sales tax. You also tell QuickBooks what the most common tax item is. You would have to have these set up already, or go through and set them up here; these are actually set up in the Items list, and we'll discuss that in a later module. There's also a field for the resale number. All that means is: let's say I sell physical products in my store, and one of my customers resells physical products as well. They might have a resale number already set up that they can give me, so that when they purchase those products from me they don't have to pay sales tax, and this is a good place to keep that information. The next tab says Additional Info. This is where you can categorize your customers — you can see in this example they have commercial customers and residential customers. If you work with sales reps in your organization, you can set them up here as well. And then these fields you see on the right — these are created by you. All you have to do is click on Define Fields, and if you have a new field you want to add, just type the name of it here and then check underneath whether you want that field available for customers, vendors, and/or employees, so you don't have to set it up three different times.
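Before the last tab, a quick aside: two of the Payment Settings behaviors we just covered — a percentage price level and the credit limit check — are easy to picture as plain arithmetic. Here's a hedged sketch with invented numbers (illustration only, not QuickBooks' own logic).

```python
# Illustration of a percentage price level and a credit-limit warning check.

def apply_price_level(price: float, percent_change: float) -> float:
    """A 'Commercial' price level of -10% turns a $100.00 item into $90.00."""
    return round(price * (1 + percent_change / 100), 2)

def exceeds_credit_limit(open_balance: float, new_invoice: float, credit_limit: float) -> bool:
    """True means you'd get the pop-up asking whether you still want to sell to them."""
    return open_balance + new_invoice > credit_limit

print(apply_price_level(100.00, -10))                  # 90.0
print(exceeds_credit_limit(4500.00, 1200.00, 5000.0))  # True -> warning
```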
Now, the last tab says Job Info. You obviously wouldn't use this one when setting up a customer, but if this were a job, you could put in a description of the job, a type of job if you want, a status for the job, and a start date, an end date, and a projected end date — those are options you would use if you were creating a new job. I'm going to click OK, and let's see if our new customer is in the list — and you can see there's Tom Allen right there. Now that you have a new customer, Tom Allen, let's go ahead and add a job for this customer. As long as you're clicked on your customer, you can come up here where it says New Customer & Job and choose the Add Job option. I want to call this job Kitchen Remodel. I really don't need to fill in any of this other information — it pulls in the information from that customer — so unless something's different, all you have to do is click OK at the bottom, and now you'll see you have a job for your customer. If you want to edit any of the information for your customer or your job, just make sure you're clicked on that customer or job, and you can come right over here where you see this little pencil — that means edit — and it will take you right back into the screen so you can make any changes you want. When you're finished, click OK, and you'll see that your customer record is now updated. Let's go ahead and move over into Customers and Jobs part two, and we'll finish talking about setting up these customers. Hey there, welcome back — it's Cindy again. We're working in QuickBooks Desktop 2022; we just got through with Customers and Jobs part one, and I want to continue talking to you a little bit about setting up your customers and jobs. This is part two. Let's flip over to QuickBooks and we'll keep going. I am back in the Customer Center. Just in case you forgot how to get here, I'll go back to the home screen for a moment: from the home screen you can access the Customer Center from here, you can access it from the icon bar under the icon that says Customers, or you can go to Customers on your menu, where the very first option says Customer Center. Whichever customer you're clicked on on the left, you'll see their information in this area of the screen. I'm clicked on Christy Abercrombie, and you can see that I've put in information like her billing address, her phone number, and her email address. You'll also see here a map and directions to Christy's Bill To address, and just to show you how this works, if I click Directions it's going to open Google Maps, and the address will be 5647 Cypress Hill Road. Let me go ahead and do that so you can see how it works — and now I know how to get to Christy's office. Let me go ahead and close this. A couple of things on the right that I want to point out: first of all, you'll see there's a box right here underneath Note, and when I point there you're going to see three notes, and you can see the dates for each one. Those notes actually come from this tab here that says Notes — that's these three notes. If I wanted to pin one of those notes here, I could do that: I could just click the pin option here or here, and that would stick it here permanently, so that I can see that note whenever I come into Christy's account. Below this you'll see a couple of reports for Christy: I can run a QuickReport, I can run an Open Balance report — meaning every invoice I've created for Christy that she has not paid me yet — I can show estimates for Christy, and there's also a Customer Snapshot. The other thing I want to mention up here is this little paper clip. If you
have a file you’d like to attach to Christie’s account all you have to do is make sure that file has a name I’m going to click the paper clip all you would have to do is search your computer if you have the file already saved if it’s on your phone you can actually connect that there’s also a document Center here these would be any documents you’ve already scanned that might be in QuickBooks you can also just take a file and just drop it into this area here maybe it’s on your desktop currently and that will attach it as well if you have a file attached you will see a little paper clip show up right over here and that way you can click there anytime you want to see that file let’s look at some of these tabs down at the bottom I’m going to click on transactions this is one of the easiest ways to look for a transaction related to a customer instead of running a report which you could certainly do or doing a search if you can just come here and look down the list sometimes it’s easier just to pick out the transaction you’re looking for you’ll see these transactions are currently sorted by date you can see the little down arrow there but I could sort them by type or number if I wanted to any of these column headings if you want to open one of these transactions let’s just say the very first one just double click on it in this case it’s a payment and you can see that it brought up that payment for Christie if you needed to make a change to this transaction you could do it right here save and close at the bottom and when you come back you will see that change reflected on this line notice I’m currently looking at all the transactions but I might want to filter this list maybe I just want to see invoices for some reason you can also over here instead of showing all the transactions you can filter it by this date range you can come down and say last month or if you want to see anything last fiscal year this fiscal year you can see your choices here notice at the bottom of this tab you see an option that says manage transactions and there’s a down arrow to the right of it here’s where you can go in and create an estimate an invoice a sales receipt you can create any of these transactions you want right from here chances are you’re not going to be in this screen though but just know that you could if you wanted to this edit selected transaction is the same thing as double clicking and making that change and saving it you’ve also got some reports down here at the bottom if you click the down arrow I can view this entire screen here as a report if I want to let’s take a look at the contacts tab a contact is a person that you want to keep a record of that maybe works for that company it could be that when you call this company you always speak to a certain person maybe Steve you can see Steve is set up as a contact here if I come to the bottom and click the down arrow you can see I can add a new contact here I can edit the one that’s selected or can delete the selected contact I’m just going to click edit because I want to show you the information that you can fill in when you’re editing or adding a contact you would have to put in their name you can see they’ve got Christie’s first and last name notice they have her last name in the same field as the first name I’m going to go ahead and actually cut that out and paste it down here the next thing is I can add her work phone Fax mobile and again these fields can represent whatever you’d like them to just by clicking the down arrow and seeing your choices 
Christy is a primary contact, but notice I could set up an additional contact or a secondary one. I'll go ahead and save and close, and now you'll see there's Christy Abercrombie — she's still our primary contact. Let's look at the to-do's. A to-do is something you have to do related to this customer. If you had any to-do's already set up, you would see them listed here. To create a new to-do, come down to Manage To Do's at the bottom, click the down arrow, and you'll see Create New. A to-do can be a phone call, a fax, an email, a meeting, an appointment, or a task — you decide. I'll go ahead and say it's a call, and set the priority: is it a high-priority task or a low-priority task? It's going to assume it's with this customer, but notice you can change that if you want to — it can actually be with a lead, possibly, or a vendor or an employee. You can set the due date, and if you want to set a time on that date, you just click this check box and set the time. At the bottom is where you type in the details. You can also change the status: when you first set this up it will be an active to-do, but you can actually say this to-do has been done, or make it inactive. I'm going to click OK, and now you'll see that this to-do has been set up. Once you've completed this task, or this to-do, notice that over in this Done column you can double-click, and that will open up the to-do; you can come down here, say Done, and click OK, and now you'll see there's a check mark indicating that you've completed that to-do. You can also run reports on your to-do's: notice you can launch a to-do report, which looks just like this, and you'll see all the to-do's you have set up, not just for this customer. I'm going to go ahead and close that. The last thing I want to mention is the tab that says Sent Email. Once you send an email through QuickBooks, you're going to see that email listed here, and that way, if you want to go back and see a record of all the emails sent to this client, you have that list. Now, there are a couple of other things here I want to mention real quick before we wrap up working with customers and jobs. You'll notice at the top here the New Transactions option — we've seen this screen already; we saw it under Transactions when we went to Manage Transactions, where we could create these new items. This is the same list. Chances are you won't be on this screen when you want to create these, and all of these happen to be on your home screen as well, but if you happen to be here, you can do it from here. If you wanted to print, you could print your customer list, you might print just job information, or a customer and job transaction list. You can also export this customer list over to Excel; that way you have a list that's separate from QuickBooks if you need it for some reason, and you can export the transactions too. You can also import from Excel: if you have a list of customers already set up, you can either import them or paste them in from Excel. Just to show you this screen, I'm going to click on Paste from Excel. When you look at this screen, what you see here is a listing of all of your customers and each field of information. What you can do, if you have a lot of customers you want to add, is go to the bottom and just start typing the new customer information in these fields, and when you're finished you can save changes and they'll be in QuickBooks just as if you set them up from within QuickBooks.
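The Paste from Excel screen is essentially mapping spreadsheet columns to customer fields. As a rough illustration of that idea only — this reads a small CSV with Python's standard library, it does not touch QuickBooks, and the column names and sample rows are assumptions for the example:

```python
# Rough illustration: turning spreadsheet rows into customer records before
# pasting/importing them. Column names and data are invented for the example.
import csv, io

sample = """Name,Company,Phone,Email
"Allen, Tom",ABC Plumbing,704-555-0100,tom@example.com
"Abercrombie, Christy",,704-555-0199,christy@example.com
"""

customers = list(csv.DictReader(io.StringIO(sample)))
for c in customers:
    print(c["Name"], "|", c["Company"] or "(no company)", "|", c["Email"])
```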
You also have the ability, if you want to do a mail merge with Microsoft Word, to do that from here, and we're going to go through all of this in a later module. And then you also have what's called an Income Tracker that you can look at — we're going to cover this in a later module as well — but I just want to give you an idea of the types of things you're going to see in the Customer Center. Well, that's going to wrap up customers and jobs. Let's go ahead now and go over to the next video, which is number three, and talk about customer groups. Hey there, welcome back — it is Cindy again. We are working in QuickBooks Desktop, version 2022. We're down in video number three now, and I want to talk to you about a newer feature that QuickBooks has called customer groups. This is a newer tool designed to help small business owners with one of the hardest problems they have, and that's actually getting your customers to pay you. You can categorize your customers, and based on that you can create groups of customers, so that you only have to send one email instead of a separate email to each customer. Let's flip over to QuickBooks now; I'll show you how to set up customer groups, and then I'll also show you how you can create an email and send it to that group. To access the groups option, you'll need to head up to the menu and click on Lists, and you'll see the very bottom option says Manage Groups. If you had any groups already set up, you would see them listed here. I don't, so I'm going to click on Create Customer Group and set up my first group. The first thing I want to do is give my group a name — I'm going to call this one my Commercial Customers — and if you wanted to add a description, you could certainly do that, but I'm going to click on Next and set up some criteria. The first thing I'm going to do is drop down this list where it says Field. What I'd like to do is take all of my commercial customers and be able to send them emails separate from my residential customers. I have customer types already set up, so I'll choose Customer Type. Then I'll select the operator: I want any customer type that equals Commercial, and these are the two customer types that I had set up over in QuickBooks. Once you've filled in those three options, go ahead and click on Add, and you'll see it on this list of selected fields. I'm going to click Next, and now it gives me a list of all of the customers that meet that criteria. If you see one that you don't want to be in this list, you can uncheck it. If you do uncheck one, QuickBooks will give you a message letting you know that if you want to add this customer back to the list, you'll have to do that manually. I'm going to say No in this case, and I'm going to hit Finish at the bottom. Now my group called Commercial Customers has been created successfully. I'm going to click OK, and now you'll see it in the list there. If I want to create another group, here's the button to create a new customer group. Now that I have a customer group here, you'll notice under Actions there's a down arrow. This is where you would go if you want to edit the group — maybe you want to change the name of the group — you can delete the entire group, or you can actually send the group an email. I'll click on Email so you can see how this works. Now, this can be any type of email; it doesn't have to be an email regarding their payment. You'll notice that it pulled up a list of all the customers that met my criteria. If I don't want to send an email this one time to one of these customers, I can just uncheck them — so just go through this list and check or uncheck the ones you do or don't want to send to.
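Stepping back for a second: under the hood, a customer group is really just a saved rule — a field, an operator, and a value — applied against your customer list. Here's a small sketch of that idea with invented customer data (the concept only, not the actual QuickBooks feature).

```python
# A customer group as a saved field/operator/value rule (illustration only).
customers = [
    {"name": "Allen, Tom",           "customer_type": "Commercial"},
    {"name": "Abercrombie, Christy", "customer_type": "Residential"},
    {"name": "Baker, Chris",         "customer_type": "Commercial"},
]

def build_group(customer_list, field, operator, value):
    """Return the customers matching one saved rule; only 'equals' is sketched here."""
    if operator == "equals":
        return [c for c in customer_list if c.get(field) == value]
    raise ValueError("only 'equals' is sketched in this example")

commercial_group = build_group(customers, "customer_type", "equals", "Commercial")
print([c["name"] for c in commercial_group])   # ['Allen, Tom', 'Baker, Chris']
```

Back in QuickBooks, the email window we just opened works off that same filtered list.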
You'll notice that for the one you're clicked on, you will see that customer's email address — if I click on Chris Baker, you'll see this changes to Chris Baker's. It is going to send an email to each customer. If you wanted to add another email address, you can click here and type that in. Here you can attach a file, if you'd like to send a file with this, and then you can put in a subject — I'll just put in "payment" here — and then you can set up the body of your email. I just put in: hope you're doing well, I see that your account is overdue, do you think you can make a payment by Friday? You can see the rest of that message. You want to check your spelling just to make sure that you look professional — you wouldn't want to have any misspelled words here — and then when you're ready to send this, you'll just hit Send. Notice also that it tells you down here that, in this case, QuickBooks will take 16 minutes to send these emails and won't be available. Just make note of that, because if you go ahead and send and then you're trying to use QuickBooks, it might not allow you to do certain things, because it's still trying to get those emails out for you. And that's how your customer groups are going to work. I'm going to go ahead and close this one. Let's head over to video number four of this module. One of the new features that QuickBooks 2022 has is the ability to add multiple customer contacts to your emails, and I want to show you how to do that. Hey there, welcome back — it's Cindy. We're in QuickBooks Desktop 2022. I've got a really short video this time, because I just want to show you a new feature that's been added to the Desktop 2022 version, and that is the ability to add multiple customer contacts to your emails, or to any forms you're going to send out. Let's flip over to QuickBooks and I'll show you how to do that real quick. QuickBooks allows you to email a form to your customer or vendor. In this case I'm on an invoice and I'd like to email it to my customer, so I'm going to click on Email and then choose Invoice. You see that it tells me I have information that's missing or invalid, and that's because I don't have an email address set up in Tom Allen's customer setup window. I'm going to go ahead and set it up here. If I wanted to add an additional email to this, I would put in a semicolon at the end and then put in that second email address. Because I copied this in, you can see it put in the semicolon for me, but if I were typing it, I would have to type that semicolon — and you can put in as many email addresses as you'd like. That's how you would add more than one email address to a form that you're sending out to a customer or a vendor. I told you this would be a really short video. Let's wrap this one up now and head over to video number five, where we start talking about estimates. Welcome back to QuickBooks Desktop 2022 — this is Cindy, and we are working through module four, where we're talking about setting up customers and jobs. We're going to start talking a little bit about estimating jobs in the next couple of videos. If you do not create estimates as part of your business, then skip on down to video number nine and look through invoicing customers for products and services. We're going to go ahead now, though, and start talking about creating estimates. This is part one — make sure you watch both parts so that you get a full understanding of how estimates work. Let's flip over to QuickBooks
and we’ll start talking about estimating jobs QuickBooks offers the ability to create estimates or quotes So that you can turn those quotes into invoices to get paid from your customer you can actually do what they call progress invoicing meaning that you invoice your customer for certain items that are on that estimate until everything is pulled over on your home screen you’re going to see the estimate icon right here if by chance you don’t see it it’s probably because you told it when you set up the company file that you do not create estimates Let me refresh your memory on how to turn that back on if you go up to edit on the menu come down to preferences make sure on their left you’re clicked on jobs and estimates and then the company preferences tab here’s where you turn on the option to create estimates remember if you’re creating estimates you probably do want progress invoicing as well make sure this one says yes and then go ahead and click ok now you should see your estimate icon appear if you do not use estimates and you want to leave this icon here it doesn’t hurt a thing you would just start with the create invoices if you invoice in your business something else just to know is estimates and purchase orders are on the same line here and that’s because they’re both considered non-posting if you create estimates for a potential customer and you never hear back from them it doesn’t really affect your books you’d have to actually run special estimate reports to even see that estimate let’s go ahead and start the process I’ll show you how this works let’s go ahead and click on estimates the first thing you want to do is choose your customer and your job if you’re working with the job feature always always always click the job underneath your customer that you’re wanting to work with if you just choose the customer name you’re going to run reports and see other and you won’t know which job those transactions go to make sure you click on the customer and the job the next thing you’ll want to do is choose the class if you’re using the class feature if you’re going to use the class feature use it consistently because it doesn’t do you any good to use it sometimes and not others reports won’t be accurate I’m going to choose remodel in this case there are also different templates you could use when it comes to estimates invoices any of your forms if you look over in module 10 that’s where you’re going to see how to actually customize these different templates we’ll just use the one that it pulls up automatically in this case the next thing you want to look at is the date it’s going to pull in the current date but you can change that date if you need to and also notice the estimate number any transaction that’s numbered in QuickBooks whether it’s a check or an invoice or an estimate it’s going to start with number one you probably want to start off by changing that number and then it will number sequentially after that for you unless you change it again you’ll see here it pulled in the name and address of your customer and if you happen to be shipping items somewhere then you’ll want to choose a ship to address over here if you don’t have one you can add it if you don’t need this field at all just leave that blank let’s go look down in the body of the estimate if you notice there’s a column that says item and if you click right underneath it a down arrow appears this will give you a list of all the different items you have set up in QuickBooks items are things that you sell to your 
customers; sometimes you purchase items as well. Items are set up by different types: you'll see that some of these items are services you provide, and some of these are actual parts — physical parts — and there's inventory and non-inventory. You're going to have different categories that items can fall into; we're going to be looking at these in a later module, but for now we're just going to use a couple from the list to show you how this works. I want to start with Framing. You'll notice that once I click Framing, it pulls in a description, and you can type over that or add to it — this will word-wrap, so you've got plenty of space to type in that description if you need a lengthy one. I'm going to go over to the Quantity field and put in 10, and you'll notice that it calculated my 10 times the cost of 55. This 55 was set up when the item was set up; you can type over that if you want to change it this one time only. So it calculated the quantity times the cost to give me an amount in this column. Now, we skipped over the unit of measure — you'll see it's grayed out right here. If you have something you sell by the foot, the yard, or by the case, possibly, you could set that up as a unit of measure and choose one case or one yard; we don't have that set up in this example, and that's why it's grayed out. Let's take a minute and talk about the Markup column. You have the ability to mark up an item by a dollar amount or a percentage. I'm going to say 30 percent in this case — you have to put the percent sign in, or it won't treat it as a percentage — and then, if you tab through, you'll notice that it calculated that for you to give you a total. The last column says Tax. That means that, for sales tax purposes, framing is labor and this is a non-taxable item. Let's put in one more — we'll do a physical part this time. Let me scroll down and find some wood doors; we're going to choose the exterior wood door, and I want to add two of these. We'll just use the cost that it brings in. I wanted to point out the markup in this case: the reason this markup pulled in automatically, and it's a negative number, is because when you're setting up an item, you can tell QuickBooks on average how much you pay for it and on average how much you sell it for — if those two are set up, it will pull in a markup for you. We obviously don't want that one, so we'll just delete it and put in a dollar amount in this case — I'll say a thousand dollars — and notice it did the calculation for me. This is a taxable item, because it's a physical product that we're selling our customer. You could keep going down the list and add as many items as you want; this line here is not the very bottom — it will keep going as long as you keep adding items. Let's take a look at a few things at the bottom of the screen. On the left side you'll see there's an area for a customer message. There are some that are pre-set-up, but if you want to add your own, you would click Add New and add one. Right below that is a field for a memo — just something quick you might want to note that the customer won't see — and this also tells me the customer is taxable as far as sales tax is concerned. On the right-hand side you can see it gives me the subtotal and the total of the markup; if there were sales tax, it would have the sales tax rate pulled in and you'd see that amount; and then there's a total at the very bottom. That gives you a general idea of how to set up the body of the estimate.
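All of those columns reduce to simple arithmetic: quantity times cost, plus either a percentage or a flat-dollar markup, with sales tax applied only to the taxable lines. Here's a worked sketch using the framing and door lines from above — the door's unit cost and the 7.75% tax rate are assumptions for illustration, not values from the sample file.

```python
# Worked sketch of the estimate math: quantity x cost, a percentage or flat markup,
# and sales tax applied only to taxable lines. Door cost and tax rate are assumed.

lines = [
    # (item, qty, cost, markup, markup_is_percent, taxable)
    ("Framing labor",      10,  55.00,   30.0,   True,  False),
    ("Exterior wood door",  2, 120.00, 1000.00,  False, True),
]

TAX_RATE = 0.0775  # assumed rate for the example

subtotal = markup_total = taxable_base = 0.0
for item, qty, cost, markup, is_pct, taxable in lines:
    base = qty * cost
    mkup = base * markup / 100 if is_pct else markup
    subtotal += base
    markup_total += mkup
    if taxable:
        taxable_base += base + mkup

sales_tax = round(taxable_base * TAX_RATE, 2)
total = round(subtotal + markup_total + sales_tax, 2)
print(subtotal, markup_total, sales_tax, total)
# Framing: 10 x 55 = 550, plus a 30% markup = 715, and it's non-taxable.
```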
What I want to do now is head on over to part two, where we're going to go up and look at some of these options that you see on your icon bars up here and a few of these different tabs; we'll be looking through some of the reports and things like that. So let's head over to part two and I'll see you shortly. Hey there, welcome back to QuickBooks Desktop 2022 — this is Cindy. We just completed Estimates part one, where we were able to set up an estimate for a potential customer. Let's go ahead now and finish talking about some of the other options available for estimates; this is part two. Let's flip over to QuickBooks and we'll keep going. In part one we created an estimate for a customer, and I'd like to go back to that estimate. Since I'm on the home screen, the easiest way to do that is to go back to the Customer Center right here and look at the transactions for my customer. We were using Tom Allen, and you can see that here is Tom Allen's estimate — all I have to do is double-click and go right to it. Now that we have the estimate created, what I'd like to do is go over some of the options with you, so we'll go to the top of the screen and go through these four tabs, starting with the Main tab. Here you'll see the first thing you can do is use the Find feature that QuickBooks has to find an estimate. You could use the arrows that go left or right to go to the next or previous estimate; keep in mind that every transaction in QuickBooks is kept in date order, so if you had previously entered an estimate and maybe backdated it a few days, it may not be the previous one when you hit this arrow, and you'd have to keep clicking. If you can't find it that way, you can click on this Find option right here, which allows you to put in some criteria: you can put in a customer and job name, a date range — beginning and end date here — an estimate number, and an amount, and you can fill in all or any of that information and have it find the estimate for you; that will generally find it. The other option I want to mention is this Advanced option right here: you can go in, put in any criteria under this Filter column that you see, and search that way. We are going to look at the whole search option a little bit later, so let's close out of this — I just wanted you to know that Advanced option was there. The next option you see here says New, and this will allow you to create a new blank estimate; this is the exact same thing as going to the bottom of your screen and clicking on Save & New, which you see down here. The next option over says Save: if you're working on this and you want to save what you've done so far, you can hit that Save option, and notice there's a down arrow there that will also let you save this as a PDF file if you'd like. Here's where you would delete this estimate if you'd like. You can also create a copy — that's handy if you need to create another one exactly like this and maybe just make one or two changes; it saves you a lot of work. We're also going to be looking at memorizing transactions in a later module, but that would allow you to basically tell QuickBooks that, every month as an example, I would like to see this same estimate in QuickBooks. You can also mark a transaction as inactive. What happens is, if you have a transaction that's inactive, it will still be in QuickBooks, but it's not counted in any of your numbers when you run reports — if you ran an estimate report, for example, it wouldn't be counted in those totals. The next option you see is the option to print this. You've got a couple of things you can do: you can preview this, which
we're going to do in a second; you can also just print the estimate right here; you can print an envelope, which will do a mail merge with Microsoft Word; and you can also save this as a PDF. Let's start with the Preview option. I'm going to click anywhere in the middle, and that will zoom in. I wanted you to take a peek at what your estimates currently look like, because you will want to customize this — you can see it's very plain. It has your company name and address at the top; you may want to add a logo, your telephone number, the website, or the email address, and you can do that by using one of the templates that's available to customize in QuickBooks — we're going to do that in a later module. You'll see here's the date of the estimate and the estimate number, and here's where it automatically puts the job name — notice they call it Project; that would be something else you may want to customize. The big thing I wanted you to notice here is that, on an estimate, the customer does not see the name of your item, and they also don't see the Markup column. Now, you could turn those on if you wanted the customer to see them, but generally you don't want them to, so QuickBooks doesn't turn them on automatically. You'll notice at the bottom you can see the subtotal, the sales tax, and the total of the estimate as well, and again, those are things you can customize. I'm going to close out of that here at the top. The next thing I want to point out is the Email option. You have the option to email this to your customer, and if you had several set up that you wanted to email, you would make sure this little check box that says Email Later is checked, and then you can email the batch. Notice you can also attach a file to this — it could be a Word document or something you've scanned in. Notice you can also create an invoice from here; chances are you will not be in this screen when you want to create an invoice, but you can do it from here. And just to mention the Start Project option: there are several different add-on packages you can buy for QuickBooks — Intuit does make a project-management type of software, and this is where you can go and get a 30-day free trial if you'd like; it's called Mavenlink, and that way you can manage the project, the expenses, anything related to it. Let's look at the Formatting tab. I mentioned a few moments ago that you could customize the template for this estimate — this is where you would do some of that, and like I said, we're going to look at it in a later module. Here's where you can run your spell check. You can also insert a line — whichever line you happen to be clicked on, if you insert a line, notice you insert one above the one you're clicked on. If you're clicked on a line, you can also delete that entire line, or you can copy the line you're clicked on. And here are some more customization options we'll look at in a later module. Now let's talk about the Send/Ship tab. I mentioned earlier that QuickBooks can do mail merges with Microsoft Word — you can merge envelopes and letters — and this is where you would work with those options; we will work with those in a later module as well. Then let's look at the Reports tab for a moment. There are a couple of generic reports already set up for you that are related to estimates: you can run Estimates by Job if you'd like, you can see Estimates vs. Actuals, and you can also see an Item Price List. I also want to point out the Transaction History — you have none right now, but if you'd already created
an invoice based on this estimate, or maybe received a payment, that would create a history you could go look at, and sometimes that's very helpful to narrow down where to find certain transactions. Typically the Main tab is where you're going to be working, so I'll just click back on that — and that's how estimates work in QuickBooks. I'm going to go ahead and save and close at the bottom, and if you've made any changes to your transaction, it will ask if you'd like to save them; just go ahead and say yes. Now that you know how to create estimates in QuickBooks, let's head over to the next video, where we're going to start talking about how to create invoices based on those estimates you've already created. Hey there, it's Cindy again — welcome back. We are working in module four; we're talking about customers and jobs. This is actually video number seven, where I want to introduce you to how to invoice from estimates. If you do not create estimates in your business, you can skip 4-7 and 4-8 and go right down to 4-9 and start with invoicing customers for products and services. But let's talk about how to take your estimates and turn them into invoices — let's flip over to QuickBooks and we'll get started. Now that you've created an estimate for your customer, you'll want to pull some of those items onto an invoice; that way you can send the invoice out and get paid for some of that hard work you've been doing. Before we do that — we're going to be talking about progress invoicing as we go through and create this invoice, and I want to make sure again that you know where that option is, in case it's not turned on. If you go back to Edit on your menu and come down to Preferences, you want to make sure that on the left you're clicked on Jobs & Estimates, then the Company Preferences tab, and make sure you've chosen Yes for the option that says "do you do progress invoicing." This is what's going to allow you to pull items from that estimate onto an invoice. I'm going to click OK, because that looks okay, and let's get started. I'm going to choose Create Invoices. The first thing I want to point out is that if you have this gray bar, you can click this arrow to show the history. It may be on already, and you may want to hide the history by clicking on this arrow — that will give you more room to work with on your screen. The history will just give you the recent transactions, any notes, customer payments — things like that you may want to see as you're creating this invoice. We're going to hide it for now. Notice that the first thing QuickBooks wants to know is: who is my customer and my job that I'm actually creating an invoice for? I'm going to choose Tom Allen's sunroom. Now you have a list of available estimates — these are estimates that you've created for Tom Allen's sunroom where you haven't pulled everything onto an invoice yet. If this window doesn't pop up, the first thing I would do is check your estimate to see if you have the exact same customer and job: if I created the estimate originally with just Tom Allen and not the sunroom, but here I chose the sunroom, there's not an exact match, so this window won't pop up. I'm going to click on the estimate I'd like to pull from and then click OK, and now you see the progress invoicing window — this is what I wanted to make sure was turned on over in the preferences. Here I have three choices. I can pull everything on the estimate onto this invoice — that would be the first one. The next
thing I could do is create an invoice for a percentage: if I wanted to create an invoice for 30 percent, I would just type in 30 — it will let you put the percent sign in, that's okay, but it does know it's a percentage. And then the third option at the bottom is where you would create an invoice for selected items. I'm going to show you how that one works, so I'll click on OK, and here you'll see each item that you had pulled onto your estimate — you can also see the quantity and all the information about each one. If you want to pull over three of the hours of framing, you would just type that in the quantity area, and then here, let's say I pull over one of the wood doors. You could also go in and put in percentages if you prefer to do it that way. I'm going to click OK, and now you can see that it pulled in those quantities I just told it I wanted to pull over. If you wanted to add something to this, just click on the next line down and type it in — you can add as many items as you want. Maybe there's a freight charge that you want to get reimbursed for; they actually already have an item set up called Freight Reimbursement here. I'm just going to say there's a quantity of one of these, and I'll make up an amount — 193.26 — and you can see it tracks all of that. You can see at the bottom here that the sales tax item I'm using is San Tomas, the tax it's charging is 84.51, and you can see the total. If there were payments applied, you would see that here — you wouldn't see any payments applied until you've saved this, recorded a payment, and then opened it back up; that's when you will see any payments that have been applied — and of course the balance due is right down at the bottom. Over on the left-hand side you can see there's a place for a customer message. They have a few of these already set up, but if you wanted to add a new one, you would choose Add New and type it in; I'll just choose this one here that says "please sign and date this proposal." There's a place for a memo below that, and no one will see this memo except you — it's not going to print out on your invoice. There's also a place for the customer tax code; this just means this customer is subject to sales tax. A couple of things at the top I want to go over real quick: if you're using the class feature, make sure you choose the correct class — you want to use it consistently if you're going to use it, so that your reports are accurate. There's also a place for the template you want to use for this invoice; we're going to cover templates in a later module. Here's the date of your invoice — I'm going to change this to December 27th. The invoice number is populated automatically; you can change it if you'd like. And double-check that your billing information is correct for your customer. This customer has terms of Net 30.
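One more note on the progress invoicing math before we finish up the header fields. Progress invoicing is really just bookkeeping on how much of each estimate line has been billed so far, whether you bill a percentage or whatever remains. Here's a hedged sketch of that idea — the line amounts are invented, and this mirrors the concept rather than QuickBooks' internals.

```python
# Progress invoicing sketch: bill a percentage of what's left on each estimate line,
# then bill the remainder later. Amounts are invented for illustration.
estimate = [
    {"item": "Framing",            "amount": 715.00,  "billed": 0.0},
    {"item": "Exterior wood door", "amount": 1240.00, "billed": 0.0},
]

def invoice_percent(est, pct):
    """Bill pct% of each line's remaining amount and return the invoice total."""
    total = 0.0
    for line in est:
        remaining = line["amount"] - line["billed"]
        charge = round(remaining * pct / 100, 2)
        line["billed"] += charge
        total += charge
    return round(total, 2)

def invoice_remaining(est):
    """Final invoice: whatever has not been billed yet."""
    return invoice_percent(est, 100)

print(invoice_percent(estimate, 30))   # first progress invoice at 30%
print(invoice_remaining(estimate))     # second invoice picks up the rest
```

Back on the invoice itself: the terms field here works the same way it did on the customer record.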
I can change that if I like you can see that if I choose net 15 as an example that this due date reflects 15 days from this date right here when you’re finished just go ahead and save and close if you’re finished or if you want to create another one you can choose save a new I’m going to save and close for now and you’ll notice that I changed the terms over here and this is asking me do I want QuickBooks to change those terms permanently in the customer’s record I’ll go ahead and say yes for now and that invoice has now been completed let’s go ahead and move over now to part two that’s going to be video number 4-8 and we’ll continue and talk about some of the options you have when you’re creating that invoice hey there it’s Cindy again welcome back we just wrapped up invoicing from estimates part one we actually went through and created an invoice based on an estimate what I want to do now is go ahead and take you into part two we’re going to complete another invoice based on that estimate and then we’ll go over some of the options you have available when you’re working with that invoice you’ve created let’s go ahead and flip over to QuickBooks and we will continue with part two I want to head back to the customer center I still have it open on the left just to show you that we do have our estimate and then we have one invoice we created what I want to do now is go ahead and create an invoice for whatever was left on that estimate because we’ve completed that job and we’ll just follow the same process I’m going to go back to the home screen and choose create invoices the first thing I want to do is pick my customer and my job I’m going to pick Tom Allen’s sunroom it does tell me I have some available estimates as long as you have even one penny left on that estimate this window will pop up you won’t get this window for this customer and job once you’ve pulled everything from that estimate but I’ll go ahead and choose the estimate I’d like to pull from and click ok and now I get my progress invoicing window again one thing that’s a little bit different from the first time we saw this is notice the first option now says create an invoice for the remaining amounts of the estimate which is what I’m going to do but I could choose another percentage or if I want to pick selected items like we did with the first one I could do that as well I’m going to click OK here and you can see that it pulled in whatever was left on that estimate that had not yet been invoiced I’m just going to double check some of the options on the screen I want to change the date in this case I’ll say that it’s January the 9th notice it does give me the next invoice number my bill to information is correct you want to make sure you check the terms and the due date if you want to add anything to this remember just click on the next line down and you can add anything you’d like to this invoice you do have at the bottom your sales tax your payments applied and your balance due we talked about all of that once you’re finished you can go ahead and save and close or save a new if you want to create a new one before I save and close I want to go through some of the options that you have up under your tabs at the top of your window these are options that pertain to this invoice you’re probably going to stay on the main tab most of the time and some of these options you’ll already be familiar with because you saw them when we actually created the estimate but let’s go back through these the first thing you’ll notice is you have the 
option to Find: if you're looking for an invoice and you just can't find it using these arrows that go left or right, go ahead and click on the Find option, and that way you can put in some search criteria and have QuickBooks search for you. Your next option is your New button — this allows you to create a new blank invoice; remember, this is the exact same thing as coming down to the bottom of your screen and choosing Save & New. You do have the option to save this invoice: if it's taking a while and you have a lot of line items, you might want to save it at various points, and notice that you can also save it as a PDF just by clicking that down arrow. The next thing you'll see is the Delete option — this is how you're going to delete this invoice. Notice when I click the down arrow I also have an option to just void it: if I delete it, it's going to be gone, but if I void it, it will stay in QuickBooks and it will just say VOID across it with a zero balance. If I wanted to create a copy of this, I could. I could also memorize it — we'll talk about memorizing in a later module — and I can also mark it as pending. Remember, if you mark something as pending, it's going to be in QuickBooks, but QuickBooks will not count it in your numbers. It could be that you set this up a little bit early and maybe you're not quite ready to send it out, but you don't want to delete it — you could leave it in here, and that way it's not part of your accounts receivable. You do have the option to print this. I want to show you a preview of what your invoices look like — I'll just click anywhere to zoom in a little bit. You can see that it has your company name and your address; you will want to customize this a little bit so you have some more information here, possibly the telephone number and the email address. You'll notice over on the right it says Invoice, there's a place for the date and the invoice number, and then all of the information that's on that invoice. Remember, you can customize this like I mentioned, and we'll do that in a later module. I'm going to hit the Close button at the top, and that'll take me back. Now, underneath Print there are a few other options: here's where you can actually print that invoice out, or print a batch — and what a batch means is, if you notice, there's a check box here that says Print Later; if you've got several invoices created, the ones that have the check mark are the ones that will be printed when you choose the batch option. If you're going to be shipping items, here's where you can create a packing slip, a shipping label, or an envelope — these will do mail merges with Microsoft Word — and then notice you can also save this as a PDF file. You have the ability to email your forms in QuickBooks: if you want to email this invoice to your customer, you could just choose Invoice here and that would let you email it, and if you want to email the batch, all the ones that have the check mark that says Email Later would be included in that batch. Here you can attach a file — you might have a file that you've scanned in, or a file you can access on your computer — and actually attach it to this, so that you don't have to go out of QuickBooks and search for those files and open them up; they're right here. You do have an option to add your time and your cost — we'll actually talk about this over in the next video, which is video number nine, invoicing customers for products and services, so we'll hold that — and you can also apply credits; we'll hold that one as well and talk about those in the next video.
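Since the difference between void and delete matters for your records, here's a tiny sketch of the distinction — voiding keeps the transaction on file at a zero amount, while deleting removes it entirely (plain Python over an invented record, illustration only).

```python
# Void keeps the record at a zero amount; delete removes it from the list entirely.
invoices = [{"number": 1101, "amount": 2051.10, "status": "Open"}]

def void_invoice(inv_list, number):
    for inv in inv_list:
        if inv["number"] == number:
            inv["amount"] = 0.0
            inv["status"] = "VOID"     # still visible in history and reports

def delete_invoice(inv_list, number):
    inv_list[:] = [inv for inv in inv_list if inv["number"] != number]  # gone for good

void_invoice(invoices, 1101)
print(invoices)   # [{'number': 1101, 'amount': 0.0, 'status': 'VOID'}]
```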
And just to tell you what Progress means: if you wanted to see a timeline of "I've estimated this job, I created an invoice, I received a payment," that would show you the progress.
You can also see the progress as to how much you've actually pulled from that estimate. Here's a way to receive a payment against this invoice — chances are you're not going to be on this screen when you want to receive a payment, but you can do it here. You can also create a batch; what that basically does is take one invoice and allow you to send it to multiple customers — maybe if you have three customers that are each going to pay a third, then you'll be able to send this to all three. And there's also a place for a refund or credit, which we'll talk about a little later in this module as well. I just want you to be familiar with those options, because that's where you're going to find most of the things you'll use on a daily basis. We'll look at some of these other tabs when we get into the next video as well; I just want to make sure here that you know how to pull all of the information from your estimates onto invoices. Once you're finished, go ahead and save and close — and that's how you're going to create your invoices based on estimates. Let's go ahead now and move over to video 9 in this module and talk about invoicing customers for products and services. Hey there, it's Cindy again — welcome back to QuickBooks Desktop 2022. In this video, which is video number nine of module four, I want to talk to you a little bit about invoicing your customers for products and services. We've been talking about invoicing in the previous videos, but that was taking estimates you've created and turning them into invoices; not every business uses the estimate feature, and if that's the case, you would just start here with the invoicing and go forward. Let's move over to QuickBooks and I will show you how to create an invoice. To create an invoice for a customer, you'll find the Create Invoices icon right here on your home screen. The first thing you'll want to do is choose your customer and your job — I'll just choose the Robert Allard remodel job. Make sure you choose the class if you're using the class feature; remember to use it consistently so that your reports are accurate. You've also got an option to choose a template — I'm going to use the default one; we'll talk about templates in a later module, but you can choose different templates for each invoice. You'll want to make sure you choose the correct date that you're creating the invoice, and remember that invoices are numbered — it's going to number the next one sequentially unless you change that number; invoice numbers can include letters if you'd like, you just type those in. You want to make sure you have the correct billing address for your customer as well. Each customer can have different terms — I'm going to say Net 30 for this one, and you'll notice that if I choose Net 30, the due date defaults to 30 days from this date here; you can always change the due date if you want a specific date to be the date this is due. Here's where you're going to click down in the body of the invoice and choose an item that you're going to invoice your customer for. You'll see on this list the items that are already set up; if you wanted to create a new one, you would click on Add New, and we'll go through that in a later module. Right now we're just going to choose a couple of these: we'll choose Floor Plans, and if you wanted to add a different description, this will let you type as much as you want — it will word-wrap all the way down. Let's say that we're going to charge our customer for two sets of floor plans, and
we’re going to charge a thousand dollars a piece you’ll see that it does the calculation for you when you tab over to the amount column and the last column tells you that for sales tax purposes this item is not subject to sales tax on the next line down I’m going to choose labor and I’ll add a description I’ll add labor for getting the kitchen to prepare for the remodel we’ll say it was a quantity of 30 hours to do that and we charge fifty dollars an hour you can see it did the calculation it’s 1500 for the total amount and this is also a non-taxable item and you can keep adding as many items as you like to this list a couple things to notice at the bottom on the left you have a place for a customer message there are some pre-built ones but if you wanted to add your own you would click on add new and create a new message to add to this list that you could use in the future underneath that it says memo that is strictly for you that memo will not print on the invoice and then you’ll see that this customer is subject to sales tax if you charge sales tax in your business if you wanted to change that you would change it to non-taxable sales on the right hand side if your customer is subject to sales tax you will see the tax they’re being charged the total for that tax if they’ve made any payments to this invoice you would see that right here you wouldn’t see any payments until actually this has been saved and then you’ve applied a payment and opened this back up and that’s when you would see if there were any payments applied and then of course the balance due right here all you need to do at this point is go ahead and save and close and if you’ve made any changes if you remember I changed the class up here and I also changed the terms it will ask you if you like those changes to be reflected in the setup just go ahead and say yes and now that invoice has been created let’s go to our Customer Center over on the left here just to look if I go ahead and click on Robert Allard you will see that there are now two invoices the top one is the one that we just did it was dated December 28th if I just want to see the invoices for that job then I would click on the remodel job on the left and see just those invoices if you want to open that invoice up just double click anywhere on that line and now you will see that invoice you can make changes if you need to and then save it and those changes will be saved a couple things I want to point out at the very top you’ll notice there is a main tab and this is the tab you’re going to use the most often there are several different items here that I want to point out first if you’re looking through your invoices for a particular one and you just can’t find it use these arrows to go left or right to look at the next or previous and if you still can’t find it you can click on this find option and then put in some criteria and QuickBooks will search for you here’s your new option if you want to create a new blank invoice this will save the one you’re on this is the exact same thing as if you came down to the bottom and clicked on Save and new next you can save your invoice if you’re working on this and it’s taking a while you can go ahead and just say save invoice you can also save this as a PDF file if you’d like here’s where you would delete that invoice you can create a copy of this invoice if you need to create another one that’s very similar you can go ahead and make a copy and then just change whatever you need to change and save it we’re going to talk about 
memorizing in a later module but what this would allow you to do is if you had an invoice that needed to go out once a month let’s say you can actually memorize it and QuickBooks will automatically create that invoice for you next month and then you can send it out you can also Mark this as pending and what that basically means is that if you have an invoice that you want to put in QuickBooks but you don’t want it to count in your numbers when you run reports you can do that it will be inactive and you can always turn it back on when you’re ready to activate it again let’s look at print preview so that you can see what this invoice will look like you can see this invoice actually has the company name and address it doesn’t have the phone number fax email any of that information so you probably want to customize this template you can see it has the word invoice on the right there’s the date the invoice number and then all the information about the invoice we’re going to customize in a later module I’ll go ahead and hit close at the top you’ll also notice that you can email this now notice there is a check box over here for print later and email later if you’ve got several that you’re working on you can check those boxes and when you’re ready to email all of them you can email the batch or if you’re ready to print them all you’ll see there’s an option for batchender here as well here you can attach a file if there’s some file that would pertain to this invoice and you don’t want to have to get out of QuickBooks and go find it you can attach it here and open it up easily here’s an option that we haven’t talked about yet add time slash cost if you’re doing job costing in your business that basically means that you want to make sure every transaction you enter is tied to a job then you can run job costing reports you could run a profit and loss for example to see how much you’ve made or lost on a particular job each transaction will have a place where you can choose the customer or job that it pertains to if you want to be reimbursed from your customer for certain expenses that you might have incurred when you create those expenses whether it be a check credit card transaction whatever it happens to be if you’ve told QuickBooks that it pertains to a particular customer and job then when you come into this invoice you can add time slash cost you can see that here’s the time tab if you had any time you’d created related you could pull that in here’s expenses if you had actually used a credit card and purchased something and tied it back to this job that would be listed you could check it off click OK and then that way it would pull in those expenses you’ve got mileage and also items I’m going to go ahead and click OK there you can also apply credits if you have an existing invoice and you’ve created a credit memo you can apply those credits right here you can also receive a payment for this invoice here chances are you’re not going to be on this screen you’ll probably be on the home screen but you can do it here you can create a batch and that means that if three different customers were going to pay this one invoice you could actually send this to all three customers and then also here’s refund or credit you would use this if the customer had already paid the invoice and you were going to issue a refund or credit memo for them to use for future invoice there’s also a formatting tab here I just want you to be aware that this is here we’re going to be looking at some of the options for customizing 
your templates that I mentioned over here a few minutes ago we'll look at that a little bit later here's your spell check if you want to insert a line delete a line or copy a line you can do that there's also some options for sending and shipping if you ship items and you typically ship through one of these FedEx UPS or US Postal Service then you can set all that up right from here and the last tab has some different reports you can run related to invoicing most of the time you'll be using that main tab and that's really all there is to actually creating an invoice let's go ahead and click save and close at the bottom if you've made any changes you might want to go ahead and save those if it asks you and now that invoice has been completed let's go ahead and move over to video number 10 now and we're going to talk about receiving customer payments towards those invoices hey there welcome back to QuickBooks desktop 2022 this is Cindy we are all the way down now to video number 10 in module 4 and I want to show you in this video how to receive customer payments once you send an invoice to a customer and they pay you how do you actually receive that payment in QuickBooks let's go ahead and flip over to QuickBooks and I'll show you how to receive those customer payments when you receive a payment from a customer you want to go ahead and enter it through this receive payment window notice if you're following the flowchart it's the next thing after creating invoices it doesn't matter how the customer paid you it doesn't matter how much they paid you you're going to enter all of that information in this window if there's a balance due QuickBooks will remember that the first thing you want to do is put in who the payment was received from if you're using the job feature always click the job I'm going to say this is for Tom Allen's sunroom as soon as you choose the customer job you'll notice that any invoices that are still open for that customer job will appear at the bottom here and it will also let you know the amount that's due just in case something had already been applied to this you'll also notice in the top right this is the customer balance the next thing you want to do is put in the amount that the customer paid you I'm going to say in this case it's 3193 dollars and I'll leave off the six cents just to show you what happens when there's an over or under payment if you notice at the bottom this appeared now because it says I have an underpayment of six cents and it's asking me what would I like to do with that I can just leave it on their account which is what I'm going to do in this case hopefully they'll pay me next time or if you know you want to write it off you can go ahead and do that with this option right here the other thing to notice is that QuickBooks automatically assumed that all of the first invoice would be paid in full and the balance would go to the second invoice you want to make sure you apply payments correctly because if not six months down the road you're going to end up with a situation where your books don't match your customer's books if you need to uncheck one of these or if you need to check one you can do that and you can also type over here how much the customer has specified that you apply to each invoice
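To make that apply-to-the-oldest-invoice-first behavior concrete, here is a small Python sketch. It is not QuickBooks code, and the invoice numbers and amounts are made up; it just mimics the logic described above, where the payment fills the first open invoice, the remainder goes to the next one, and anything left unpaid shows up as an underpayment you can leave on the account or write off.

```python
def apply_payment(open_invoices, amount_received):
    """Apply a payment to open invoices oldest-first.

    open_invoices: list of dicts with 'number' and 'balance', oldest first.
    Returns (list of (invoice number, amount applied), underpayment).
    """
    remaining = round(amount_received, 2)
    applied = []
    for inv in open_invoices:
        if remaining <= 0:
            break
        portion = min(inv["balance"], remaining)   # fill this invoice as far as we can
        applied.append((inv["number"], round(portion, 2)))
        remaining = round(remaining - portion, 2)
    total_open = round(sum(inv["balance"] for inv in open_invoices), 2)
    total_applied = round(sum(amount for _, amount in applied), 2)
    underpayment = round(total_open - total_applied, 2)
    return applied, underpayment

# Hypothetical example: two open invoices and a payment that is six cents short.
invoices = [{"number": 1071, "balance": 1200.00},
            {"number": 1086, "balance": 1993.06}]
print(apply_payment(invoices, 3193.00))
# ([(1071, 1200.0), (1086, 1993.0)], 0.06) -> leave the 6 cents on account or write it off
```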
always choose the correct date of the payment and you also have a field for the reference number over here you'll notice this is a way to tell QuickBooks how the customer paid you did they give you cash did they write a check you can see that based on which option you choose this may change when I clicked on cash this said reference number and that would just be any reference number I'd like to pop in there but notice when I click on check it gives me a place to put in the customer's check number if you click on Visa you're going to see that it pops up and asks you to enter the card information if you're not using the Intuit merchant services then you don't really need to fill this in because you're not running their card through in QuickBooks it would just be a matter of you keeping that information for your records I'm going to cancel that there are some other options here notice there's a down arrow that has a few payment methods listed it could be the customer bartered this with you but if you need to add a new payment method just click on new payment method here and then put in the method they paid you with I'll add Zelle to the list and I'll just say that the payment type is cash but notice my other choices here I'll just click OK and now that will be on my list for any customers that pay me with Zelle now let's take a look at some of your options up here under your tabs you can see that we're under the main tab and a lot of these options you're already familiar with you know how to find a customer payment if you're looking for one here's a way to create a new customer payment that's the same thing as that save and new in the bottom right here's a way to delete the payment you could also come over here and print the payment I'll just show you what that looks like if you're going to print the payment it does ask you what you'd like your payment receipt template to look like you have the ability to customize this if you wanted to do that you would click on customize template down here but right now this is what it's going to look like I'm going to go ahead and say not now and preview this so that you can see here's what the payment receipt actually looks like you can see it's a little different than an invoice or an estimate it has a place for the date of that payment and also the payment method and you can see where this was a check number and they have that check number in there over on the right is the total amount of the payment and the invoices listed that it paid below I'm going to go ahead and close this the next thing I'll point out is the email option if you want to email that payment receipt you can do that and email it to your customer here's where you can attach a file you can also look up a customer and an invoice if you click on that this will allow you to search right here it says invoice number and I could search for any customer that has a particular invoice number but I could also search by one of these options as well if I wanted to unapply a payment that basically is going to uncheck all of this see how these are unchecked now I'll just go back and check them and then if there were any discounts or credits that you're going to give your customer related to this you could go ahead and enter that now you would enter the amount of the discount you're going to give the discount account and then a class if you happen to be using that the process payments option just allows you to process a credit card payment if you're using the Intuit merchant services here's where you could sign up for that if you wanted to check out the options you do have a formatting tab in the back you'll notice a couple of things here for the payment receipt we saw a few moments ago you can go through and actually use the standard one here or customize it
there’s also a tab for reports if you wanted to look at open invoices for this customer or some of these other reports that you see right here you could do that and the last tab over says payments if you wanted to check out again the credit card processing from Intuit you could do that most of the time you’ll stay under the main tab before we save this let’s go ahead and go to the bottom right and you’ll notice that it tells you the amount due how much was applied and if there were any discounts and payments applied here all you have to do when you’re finished is go ahead and save and close and if you’ve made any changes to your transaction you can go ahead and say yes if it asks you if you want to record your changes and that’s how you’re going to receive payments for customers now before we leave let me go ahead and go back to the customer center I’m going to click on customers and if you notice I’m on Tom Allen’s sun room over here and you can see there’s the estimate we created the two invoices we created and then there’s the payment right there if you need to make a change to that payment or open it up to look at it for any reason just double click on it and then you can look and see what’s going on with this payment there is one quick thing that I wanted to mention there’s a question here that says where does this payment go let me just show you real quick where the payment’s going to end up I’m going to go back to the chart of accounts I just clicked on home over on the left here and I’m going to the chart of accounts you’re going to see an account called undeposited funds you’ll see it right here that money that we just received for that payment is in this total right here let me open this up so you can see it here it is right down here at the bottom something to know about undeposited funds these are monies that you’ve received but you have not yet taken to the bank as far as QuickBooks is concerned it’s not a deposit in your checking account one of the ways to keep a check on yourself is this should always say zero if you’ve already deposited all of the money now how does that get to zero well we’re going to talk about that when we talk about recording deposits I’m going to have you go ahead and head over to video number 11. 
we’ll talk about payment links and then when we get to video 12 we’ll be going through the deposits right here so you can see how that money gets out of undeposited funds and into the actual checking account hey there welcome back to QuickBooks desktop 2022 this is Cindy we’re working through module four and we’re down to video 11 now I want to talk to you about a new feature that QuickBooks has that will allow you to send your customer a link the customer can use that link to go to a secure site and actually send you a payment you can use this whether you’ve invoiced the customer or not it could be that you need to get an advance payment for something you can just send this link via email and have the customer pay you the money will actually be deposited to your account within two to three business days let’s go ahead and flip over to QuickBooks and we’ll look at those payment links this new payment links option is going to be a really great tool to help you collect payments before you even send an invoice if you require down payments in your business this is a great way to email your customer a secure link they click the link they go to a secure site that lets them put in their debit or credit card information and then you receive the funds automatically through direct deposit into your bank account within two to three business days in order to access the payment links option you need to go through the menu and click on customers and then you’ll see payment links the first thing you’ll need to do is decide if you’d like to actually set up what they call a QuickBooks payment account or if you want to use pay as you go the QuickBooks payment account that you see on the left here is a merchant services account you would pay a monthly fee for that whereas the pay as you go is good if you don’t accept a lot of down payments but you want to use this feature whenever someone needs to pay you I’m going to use the pay as you go option and click get started QuickBooks will want a little bit of information from me before it takes me over to the screen where I set up the payment link you can see there’s business info personal info and deposits account info I’m going to start with business info and just click on start and answer a few questions the first thing QuickBooks is asking me is what is the business type and this is where you’re going to pick one of these options that you see here the next thing it asks you is do you have a particular industry or you can pick one that happens to be the closest match I’ll pick the first one accounting auditing and bookkeeping services you can see it did bring in the business name I would also want to put in the website and that’s an optional thing notice my address is already in I don’t have a state so I’ll go ahead and choose that and then you’ll notice that there’s a place to put in your business email I’ll just go ahead and put mine in the next thing you want to do is when you scroll down you’ll see that it actually takes you to the personal info here I’ll click on start and here’s where you want to put in the owner’s first and last name some of your personal information date of birth social and all this is required to set up the account I’m not going to put this in now but you would put it in and click next the last option down the bottom is the deposits account info when you click Start here what this allows you to do is go ahead and add your bank account here’s where you put in the account number and the routing number and then click save I’m going to go 
ahead and finish setting this up off camera and then I will come back and I’ll show you how to set up that link I went ahead and closed that window because I wanted to show you that once you get out of that window you won’t see that same screen again when you’re ready to actually send that link now all you have to do is go back to customers and back to payment links and what you’re going to see this time is that it’s going to connect you to your Intuit Payment Solutions account which you just created and here it will allow you to go ahead and create that payment link from this button right here I’m going to go ahead and click on that the first thing you want to do is put in the amount of money that you’re wanting to get from your customer and you can put in a product or service or some kind of description right down here so you might say that’s down payment for kitchen remodel here’s where you’re actually going to put in the name of your customer and you can put in your customers email you’ll see here at the bottom that you have the ability to choose the ways that you’d like to get paid if you want to give your customer the option to pay you with a credit card or a bank transfer you can leave both of those selected if you don’t want one or the other just slide this to the left and then that option won’t be available all you have to do now is send the payment link the customer is actually going to receive this they’re going to be able to click that link and pay you and the money will show up in your bank account within two to four business days and that’s how that new payment links feature Works in QuickBooks now that you know how the payment links work in QuickBooks let’s go ahead and head over to video number 12 and we’re going to talk about how to make deposits in QuickBooks now that you know how to receive payments in QuickBooks let’s talk about how to take that payment you’ve received and actually make a deposit so that it shows up in your checkbook register hi this is Cindy welcome back to QuickBooks desktop 2022. 
we’re going to talk in this video about making deposits in QuickBooks if you’re invoicing customers and receiving payments then most of your deposits will be from those payments you received sometimes though you might have a deposit that’s just strictly something you’re putting into the bank let’s go ahead and flip over to QuickBooks and look at making deposits when you look at the flow chart the last thing you did was receive a payment from a customer following the flow chart to the end you’ll see the next thing is to record your deposit and that’s what you want to do let me just mention that new in the QuickBooks 2022 there’s an icon that says merchant service deposit this has to do with your online banking that we’ll talk about in module seven so we’ll hold this until we get there I’m going to have you follow the flow chart all the way across let me mention a couple of things that I want you to be aware of when you’re making deposits when you were in this receive payment window one of the things that happened is you had an option of where to put your money remember that if you have not turned on the preference telling it where you’d like to put the money this money will automatically go into an account called undeposited funds and that’s what this question is asking you right here if I go back to the chart of accounts I’ll just show you that real quick undeposited funds is actually an account where money sits that’s been received but not deposited yet one of the ways to check yourself is if you have money sitting in here but you know that everything has been deposited then you’ve done something wrong and you can see currently there’s fifty six hundred and thirty three dollars sitting in that account the other thing I want you to be aware of is once you receive your payment do not go over here to the checkbook register and type in that payment the reason is you’re bypassing the record deposits window that money will stay in undeposited funds if you put it directly in the register the other thing is if you’re typing in the register you will have to tell QuickBooks which account in the chart of accounts does this go back to most people will pick accounts receivable which is wrong because you’ve already been there done that or they will pick one of the income accounts and if you do that you’re doubling your income so this will mess you up every time do not do it this way follow the flow chart all the way to the end I’m going to click on record deposits this is a listing of all the payments you’ve collected that went into undeposited funds what you want to do is any of these that actually were deposited into this account you want to check off I could check off the first two for example and now I have a deposit for two thousand four hundred forty dollars if I click OK you’re going to see it pulls in those two payments if by chance you pulled in the wrong ones you’ll notice right up here there’s a button that says payments you can click on and you can check or uncheck these you want to make sure this matches the actual deposit that went into the bank do not come in here and change where it says from account because this is where the money actually came from it came from undeposited funds you can see that it tells you who the money’s from the account over here if it was a check and you had typed in the check number you could pop that in if you forgot to put in a check number just type it right in here there’s your payment method the class and the amount over to the right if you needed to add 
something to this you could just click on the next line down and add it what if you as the business owner wanted to put some personal money into the business account you could do that here remember here's your chart of accounts and you would just pick the correct account in a situation like that don't pick an income account because it's not income to your business this is where you'd want to use one of those owner accounts that you set up it could be the liability or if you wanted to use it as an owner draw you could do that as well in this case they actually call it Capital Stock right here you can go ahead and put in a memo if you'd like and then go over to the amount column and put in the amount that you're depositing you could be actually putting in a rebate if you were doing something like that you would want to choose the account that the money actually was expended from to begin with let's say it was an office supply you would put it back to office supplies just make sure that you recognize everything that you put here is not income you'll notice at the top make sure you have the correct account you're depositing the money to because it's very easy to put it in the wrong account and then it doesn't show up in your checkbook register and then make sure you have the correct date of the deposit there's also a place for a memo right here it usually defaults to the word deposit but you can change that to anything you like down at the bottom left you'll see a couple of things there is a field where you can keep some money from this deposit if you're a sole proprietor and tell QuickBooks which account that money goes back to there's a place for a memo and the amount and it would subtract it from your deposit total down here at the bottom let's go back up and look at a couple of options you have at the top here's your next and previous if you wanted to actually go through and look for a specific deposit you could do it that way you could save this you could also go through and print a deposit slip that you can actually take to the bank or a deposit summary we talked about the payments button and let me just talk to you about the history button for a moment this deposit will have a history once you save this you can see right now it does not but once I save it I can go back and look at the actual payments that were received and from there I could go back and look at the invoices that were all related to this deposit I can also attach a file here if I'd like I'm going to go ahead down to the bottom and click save and close and this deposit for three thousand four hundred and forty dollars will now show up in our checkbook register let's just go to the checkbook register and see if it's there I'm going to choose my checking account and click OK and now you'll see there's your deposit right there that we just made of three thousand four hundred and forty dollars the blue line just means it's post-dated so don't worry about that we're going to look more at the register in module 7 but for now I just want to make sure you know how to correctly enter a deposit the other thing I want to mention is that if you had a deposit that was not related to an invoice or payment you received you could just type it directly in the register just make sure that you have the correct account chosen from this list and again like I said we'll get into this a little bit later but that's how you make deposits in QuickBooks
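Here is the other half of that undeposited funds picture, again as a rough Python sketch rather than anything QuickBooks actually runs, with made-up balances and payment amounts. Recording a deposit is what moves the checked-off payments out of Undeposited Funds and into the checking account, which is why Undeposited Funds should head back toward zero once everything has been banked.

```python
ledger = {"Undeposited Funds": 3440.00, "Checking": 0.00}   # hypothetical balances

def record_deposit(ledger, payment_amounts):
    """Move the checked-off payments from Undeposited Funds into Checking."""
    total = round(sum(payment_amounts), 2)
    ledger["Undeposited Funds"] = round(ledger["Undeposited Funds"] - total, 2)
    ledger["Checking"] = round(ledger["Checking"] + total, 2)
    return total

record_deposit(ledger, [1200.00, 2240.00])   # the payments checked off in the window
print(ledger)
# {'Undeposited Funds': 0.0, 'Checking': 3440.0}
# If Undeposited Funds is not zero after everything is banked, something was missed.
```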
let's go ahead now and go over to the next video number 13 and talk about how to create credit memos in QuickBooks hey there welcome back it is Cindy we are working in QuickBooks desktop 2022. we're down to video number 13 in module 4 now and I want to talk to you a little bit in this video about how to create credit memos for your customers you might have a customer that returns a physical product and wants a refund or it could be that you just want to issue a credit towards their account either way we're going to go through this whole creating credit memos process for you let's go ahead and flip over to QuickBooks and we'll see how to get those started when you're creating a credit memo or refund it's important to go back to that invoice and see exactly what it is you're crediting or refunding and the amount you can do that a couple of ways we've been working with Tom Allen let's go ahead and go to the customer center I have it open here on the left here's Tom Allen's sunroom and you can see here's the invoice I'm going to go ahead and open that up because I want to show you that Tom still owes six cents on this invoice and I will go ahead and issue a credit memo to write that off since it's been hanging out there for a while the other thing I want you to notice is I'm going to show you how to return an item to inventory if they return something and give your customer a refund you can see here is the exterior wood door it's a thousand ninety dollars and 39 cents I'm going to go ahead and close that because I want to jump ahead and show you one other way that you can look to see how much is owed on an invoice we're going to jump ahead here we're going to talk about reports in module 11 but let me show you a very quick report that you can run if you go up to reports on your menu go down to customers and receivables and you'll see one called open invoices this is going to be any invoice that has a balance even if it's just a penny when I look at this report I don't see Tom Allen on the list and that's because I'm only looking at invoices that are due before December 15th if you come up here and change this date then click inside the report that will update it and you can see now that here's Tom Allen's sunroom there's that invoice we just looked at and he still owes six cents once we create the credit memo and apply it this will disappear from this report let's go back to the home screen and I'll show you how to get started issuing that credit memo or refund you'll see right down here it says refunds and credits the first thing you want to do is pull in your customer and your job I'll pick Tom Allen's sunroom you want to make sure you choose the class if you're using the class feature in this case it's new construction and here are the different templates you could use we'll be looking at templates in a different module so we'll leave that for now make sure you change the date to the date that you're issuing the credit memo it's going to pull in the next credit memo number you can change that if you want to but it numbers sequentially and of course you want to make sure you have the customer information correct which it should be at this point here's what you're going to do down here where it says item if your customer is returning a physical item or you're wanting to credit a particular item you want to choose it from the list if it's something like the six cents that's just hanging out there that I want to write off my books that's called bad debt and that is an item they have already set up in this list here
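As a loose illustration of what that bad debt item does (this is not QuickBooks code, and the item name, description, and amount are just placeholders), the credit memo puts the small leftover balance on a one-line, non-taxable credit so it comes out to exactly the open amount and the invoice drops off the open invoices report.

```python
def bad_debt_credit_memo(open_balance):
    """Build a one-line, non-taxable credit memo that writes off the open balance."""
    memo = {
        "item": "Bad Debt",
        "description": "Bad debt or write-off amounts",
        "qty": 1,
        "rate": round(open_balance, 2),
        "taxable": False,               # keep it non-taxable or the totals won't match
    }
    memo["total"] = round(memo["qty"] * memo["rate"], 2)
    return memo

memo = bad_debt_credit_memo(0.06)
remaining_balance = round(0.06 - memo["total"], 2)
print(memo["total"], remaining_balance)   # 0.06 0.0 -> the invoice is now paid in full
```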
I'm going to type it in so that it'll save us a little time here and anytime you have bad debt it will pop up and give you this warning telling you it's associated with an expense account you can just click OK and get past that you'll notice the description that comes up says bad debt or write-off amounts you can edit that to say anything you want but you want to go ahead and make sure that when you put this in you put a quantity of one and you want to make sure you have the exact amount you're going to write off and make sure this says non-taxable here because if you charge tax you're actually going to end up with another credit so make sure that it ends up being exactly six cents at the bottom like this says and that's really all you have to do for this you can go down to the bottom and add a customer message or a memo if you want on the left a couple things at the top you're familiar with most of these already but here's where you can search if you want to look for a particular credit memo use the arrows or your find feature if you want to create a new credit memo you can click here that's the same thing as save and new down at the bottom here's your save option notice you can save it as a PDF if you choose to you can delete this make a copy memorize it I think you're familiar with most of these we've talked about them in previous videos the only thing that's a little different that I want to point out is notice because it's a credit memo you now have the ability to use this to give them a refund or apply it to an invoice in this case we're trying to wipe out the six cents that's still open so I'm going to use it to apply to an invoice you'll notice that any invoice that's open will appear here if there's an exact match it will actually check it off and that is the one I want to apply it to so I'm going to click done at the bottom and save and close now let's go back over to the open invoices on the left you'll see it says refresh needed indicating that a change has been made to that report and when you look at this notice it's still February 3rd so Tom Allen's sunroom should show up at the top of the list but you can see that it's gone because it is paid in full now let's go back and look at a refund for Tom I'm going to go back to home and refunds and credits again now this time Tom is actually returning one of the wood doors so we're going to give him a refund go ahead and choose the customer and the job make sure you check everything out over here I'll go ahead and change this date this time what I'm going to choose is that exterior wood door that we originally invoiced him for and you can type anything you want in this description area you want to make sure that when you put this in you have the correct amount that you charged him for to begin with this one was a thousand ninety dollars and thirty nine cents and we did charge sales tax and that's all you have to do now at the top you'll notice that there is an option that says use credit to give a refund I'm going to click on that and it pulls in all of the information you'll probably want to check the date here and make sure that it has the date that you're issuing that refund the other thing is how do you want this refund issued via a check do you want to put it back on their Visa card I'll go ahead and just choose check and you can see there's the ending balance in the checking account so make sure you have the correct bank account chosen there and the correct class that this goes to and that's all you have to do this means you'll be able to print the check but once I click OK this check is actually going to be in
the register I’m going to change this date again to make sure I have the correct date here I’m going to go ahead and click OK and now that check is in the register I’m going to hit save and close at the bottom and let’s see how this looks let’s go to our check register I’ll just go right over here and I will open up checking and there is the check that you just created all you have to do now is go in and print it and you are good to go that’s how you’re going to create credit memos and refunds for your customers let’s go ahead now and go over to video number 14 and we’ll talk about creating statements that you can send to your customers at the end of the month so they know exactly how much they owe hey there welcome back to QuickBooks desktop 2022 my name is Cindy and we are all the way down now to module 4 and we’re talking in this video which is number 14 about how to create statements in QuickBooks a statement is basically a history of what happened that month that you can send out to your customers usually you send statements at the end of the month you do not have to send statements but it’s really a great way to gently remind your customers if they owe you money that they need to pay let’s go ahead and flip over to QuickBooks and I’ll show you how to create those statements to get started creating statements for your customers head over to your home screen and click on the icon that says statements the first thing you need to let QuickBooks know is what is your statement date typically statements are sent the end of the month I’m going to set this for December 31st and the statement period would be that same month the beginning of the month through the end of the month but you can also choose whatever date range you would like notice also as far as dates are concerned that you might choose this option here to just include all the open transactions as of that statement date and you will have an option to include any transactions over in this case 30 days past due and you can see you can change that it will default to sending statements to all of your customers but notice you could choose multiple if you choose multiple customers then you click this choose button and you just click on the customers you’d like to send a statement to if you’re wanting to send a statement to One customer choose the One customer option and then you can choose that customer from the drop down list you can also choose to send statements to customers of a particular type that you might have set up or only ones that have a preferred send method maybe they like their statements mail or emailed when you’re through you can always preview what your statement looks like at this point I’m going to hit the preview option and here’s what your statements look like I’ll just click in the middle to zoom in you’ll see it says statement right up at the top it has your company name and address and then of course the customer’s name and address here and notice at the bottom it has the balance forward from the previous month and then any transactions that occurred that month will show up on this statement whether it’s an invoice a payment at the bottom you’re actually going to see for each of these categories the amount that’s due I’m going to hit close at the top and show you some other options you have for statements over on the right you do have template options you can change for your statements we’ll cover that in a later module but notice some of the check boxes here you might not want to see any of the details or you 
might want to show them you can check or uncheck that option you can choose to print statements by their billing address ZIP code this would actually sort them numerically by ZIP code it’s going to print the due date automatically if you don’t want it to do that just uncheck that option you might choose to not send statements to your customers who don’t owe you money that would be this option here or if they owe you less than let’s just say five dollars maybe you don’t want to send a statement you could actually check the box and choose that amount there if they didn’t have any activity you might not want to send them a statement or if they’re an inactive customer you might not want to send a statement if you’re assessing finance charges you do have the ability to click here and set all those options up for the finance charges after you’ve previewed this you can print these if you’d like and send them through the mail or if you’d like to email them you can do that as well there is an option now also to automate sending your statements this way you don’t have to remember to manually go in and do this every single month let’s go ahead and click on go to payment reminders now that you’re in the payment reminders window the first thing you’ll want to do is create a schedule for these statements up here at the top there’s a drop down and you’ll choose statement from the list the next thing you want to do is tell QuickBooks to send reminders to whichever customer group you have set up if you do not have a customer group set up you can choose add new from here and create one but we’ll just choose commercial customers for now once you have your customer group set up you’ll want to click on add reminder and then fill in some of these options the first thing I’d ask you is what is the statement date you’d like to have on your statements so you can set it for a particular date you can say every week every month every quarter we’ll just go ahead and set this for the first of every month the statement period let’s go and include the previous month if you’re setting this for the first then you’ll want to include the previous month on your statement notice you could also generate a statement for any of these transactions that are open or overdue you just choose whichever option you want I’m going to go back and say statement period for the previous month the next thing it asks you is which invoice details do you want to include notice you can include all the details from each of the transactions you’d include a memo if you’d like and a due date I’ll just go ahead and choose a couple of these the next thing QuickBooks ask is how would you want your statements to look if you’ve chosen different statement templates you can choose them from the list here notice you can also go ahead from here and customize one and we’re going to talk about customizing in a later module the next thing is the email template if you’ve got some set up you’ll see them on the list if not you can go ahead and edit or create one from here and the last thing QuickBooks wants to know is if you want to separate your statements by each job that you’ve done for your customer once you’ve chosen your options go ahead and click on OK and now the statement reminders are set up and every month on the first it will send those out automatically for you that’s really all there is to creating statements for your customers let’s go ahead now and move over to video number 15 and I’ll show you how to use the income tracker in QuickBooks hey there 
welcome back it’s Cindy again we’re working in QuickBooks desktop 2022. we’re on the last video in module four this is video number 15 where I want to take a few moments and just introduce you to the income tracker that’s available in QuickBooks the income tracker is going to allow you to see on one screen any invoices you’ve sent out that haven’t been paid what has been paid if you have expenses you’re going to be able to see all that in one screen let’s go ahead and flip over to QuickBooks and I will show you how the income tracker works when you’re looking at the home screen you will not see an icon for the income tracker the easiest way to access it is to look for a button that says income tracker here or if you have it on the left you’ll see it I’ll go ahead and click on the income tracker and this is what it looks like this is showing me a list of all the transactions I have set up and you’ve got different ways you can sort this list but I want to show you at the very top across here it shows you that you have so much that’s unbilled meaning that you have estimates you have it turned into invoices yet you’ve got so much that falls in the category of time and expenses related to these jobs here it tells you how many open invoices you have these are invoices that have not yet been paid and how many of those are overdue and this last option here will show you how many were paid in the last 30 days on this next row you can add some filtering options currently I’m looking at all the customers and jobs but if I only want to see a particular customer and job let’s say Christy here then you can see it narrowed the list for me all will always be the Top Choice on the list if you want to see everything the next thing is the type of transaction do you want to look at just invoices do you want to look at any payments you’ve received you just choose the options that work for you the next thing is the status do I want to see invoices that are open overdue or paid or all and then of course you have a date option over on the right hand side where you can see anything this year the last 90 days those are some of your choices there for each of these you can go down the list and check off a particular transaction let’s say that it’s Brian Cook’s invoice on this line right here you’ll notice all the way to the right I have some options where I can go in and receive a payment for that invoice right from this window I can print this row or email this row if you’re on a line that has an estimate you’ll notice that your options are a little bit different I can convert this estimate to an invoice Market is inactive or go ahead and print or email that row so this is a quick way if you just want to go down the list and receive several different payments or if you want to just look and see how many estimates you have there’s all kinds of information that you can actually see from this window another thing I want to point out is let’s say you have several of these selected you can do what’s called a batch action down at the bottom you can actually click the arrow next to batch actions and you can choose invoices or if you want to batch email them you can do that as well you can also see the managed transactions options over here so if you wanted to go to sales receipts or receive payments you could do that take a few minutes and look through this because it might make things a lot easier for you than just hunting for the correct icon and entering things one at a time just an option I’ll go ahead and close the 
income tracker here and that actually wraps up module four let’s go ahead now and jump over to module 5 and we’re going to talk about working with the vendors hey there welcome back it is Cindy again we are talking about QuickBooks desktop 2022. we’ve made it down to module 5 now and this is the module where we’re going to discuss vendors vendors are people or businesses that you buy from sometimes you buy inventory sometimes you buy a service but anyone you buy from is considered a vendor I wanted to take time in this first video and give you a little bit of information on how to set up your vendors in QuickBooks let’s head on over and to set up some vendors just like with customers we had a customer center we also have a vendor Center you can access it from the home screen by clicking here where it says vendors you can also go up here and click on vendors or if you want to go to vendors from the menu you can do that as well I’m going to the vendor Center this is a list of all of your vendors in alphabetical order you can also see the balance that you owe that vendor if you’ve put in bills that’s the only way you’re going to know a balance and also notice there’s a column where you can attach a file over here you’re currently looking at all of the active vendors if you drop the list down you can see that if you just want to see the ones with open balances you can do that you can also look at all the vendors that would include any that you had made inactive as well I’ll just click on the active vendors whichever vendor you have selected here on the left you’re going to see information about that vendor over on the right you’re going to see their company name you’re going to see their billing address phone number fax email all the same information that we saw when we were looking at the customer center you’ve got a map to their location and directions here and over on the right you can see that if you wanted to run a couple of reports for this vendor like a quick report or an open balance report you can do that as well notice the option to order 1099 forms from Intuit if you want to but you don’t have to order them from into it you can buy them anywhere you like or if you want to order checks from Intuit you can do that right here here’s where you would go up and attach a file this would be any file that you want related to this vendor it could be a bill that you’ve made a copy of and if you want to edit the information for that vendor you would choose this option right here down here at the bottom you’ll notice under the transactions tab you’re looking at all the transactions related to this vendor the types of transactions you’re going to see here are going to be bills you’ve entered purchase orders if you’re using the purchase order system any what we call Bill payment checks you’re going to see all of those listed here you will have options to sort or filter this list you can see that right now you’re filtering it by all of the transactions but you can also filter it by a different date range if you’d like the next tab over has all of your contacts related to this vendor you have any to Do’s related to this vendor under the to-do’s tab and then any notes related to this vendor you’d be able to enter those here also if you’ve sent any emails through QuickBooks to these vendors then you would see them listed here as well what I want to do now is take you in and show you how to actually enter a new vendor what you’re going to do is right up here at the top where it says new vendor you’re 
going to click and you're going to enter a new vendor the first thing you want to do is put in the name of the vendor we're going to say this vendor's name is Smith Painting and you'll see there's a place to put in the open balance this field is set up so that you can put in any money you owe the vendor as of your start date you can put that number in here as a starting balance I suggest that you don't put anything here and you actually put in the bills that you still owe as of your start date that way you can go back and look at them at any time the first thing you'll notice at the bottom here is we're under the address info and this is where we're going to put in the address the phone number things like that this is Smith Painting now I know we have Smith Painting already up here but remember that if you're going to do mail merges with Microsoft Word it's going to pull from this field so you want to make sure you've got information there if you wanted to put in a name for a particular person there you can do that we'll say this is Randy Smith if you want to put his job title in there you could do that we'll say he's the owner and then you can see there are fields for the phone numbers the faxes emails website and all of these fields can be changed if you want them to represent something different down at the bottom where it says billed from this is going to be the billing address that is set up for this vendor I've got Smith Painting in here I might put in attention Randy Smith and then go ahead and put in whatever Randy's address is the next tab over to the left is the payment settings tab the first thing that it asks you is do you have an account number with this vendor if you don't have one just leave that blank what are the payment terms for this vendor we're going to say net 30 in this case notice there's a field also so that you can put in the name that the vendor wants printed on their checks this would be the payable to field if the vendor gives you a credit limit you can put that in here and also a billing rate level this means that if you happen to do certain things for this vendor at a particular rate and other things for a different rate you can set those up the next tab I want to point out is the tax settings tab and this one's really important if you have 1099 contractors these are not employees these are people that you call in to do special projects they need to actually send you a bill and then you pay that bill that's the correct way to handle that if you have 1099 vendors you want to have their tax ID in here and also check the box that they're eligible for 1099 if you do not do this when it's time to print 1099s they will not get a 1099.
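As a quick sanity check on that rule, here is a short Python sketch. It is illustrative only, with made-up vendor records, and simply mirrors the point above: a contractor only makes it onto the 1099 run if a tax ID is on file and the eligible-for-1099 box is checked.

```python
vendors = [
    {"name": "Smith Painting", "tax_id": "12-3456789", "eligible_1099": True},
    {"name": "City Electric Co", "tax_id": None, "eligible_1099": False},
    {"name": "Jones Drywall", "tax_id": "98-7654321", "eligible_1099": False},  # box not checked
]

def vendors_for_1099(vendors):
    """Return the vendors that would actually get a 1099 at year end."""
    return [v["name"] for v in vendors if v["eligible_1099"] and v["tax_id"]]

print(vendors_for_1099(vendors))   # ['Smith Painting'] -- the others would be skipped
```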
the next tab on the left is account settings if you know that every time you create a check or some type of entry for this vendor there is a certain account for the chart of accounts you want this to go to then you can pick from the list you can assign up to three of these and that way you don’t have to choose those accounts every time you’re working on a transaction and the last tab is additional info if you want to categorize your vendors you can do that you can see there are several different types here I’ll just pick subcontractors and also there are custom Fields over on the right if you want to create additional custom Fields come down to Define fields type in whatever you want that field label or name to be and then check off whether you want that field available for customers vendors and or employees I’m going to click OK and we should see Smith painting in our list you can see it right down here it’s in alphabetical order once we create our first transaction for Smith painting then you will actually see a balance and then you’ll see those transactions show up over here under the transactions tab while we’re in this window I do want to mention a couple things notice if you go up here where it says new transactions there are several different options here you can enter bills pay bills create purchase orders you can see the list but chances are you’re not going to be on this screen when you want to create those transactions you can also come up here and you can print your vendor list you can also export this list to Excel or you can import a list of vendors you have from Excel already you can also do mail merges with Microsoft Word those are some of the options you’re going to be able to do when you’re working with vendors now that we know how to set up vendors let’s go ahead and move over into the second video and talk about how to enter bills in QuickBooks using these vendor names hey there welcome back to QuickBooks desktop 2022 my name is Cindy we are working in module 5 and we’re talking about vendors in this module this is the second video where I want to talk to you a little bit about how to enter bills in QuickBooks bills come in the mail that you have to pay they could be emailed to you but it’s something you know that you’re going to have to pay at a future date you could enter that bill so that you could run reports at any time and see who you owe if you’re on the 30-day overdue category just different things that you’ll want to know about upcoming expenses that you have let’s go ahead and flip over to QuickBooks and I’ll show you how to get started entering bills when you’re looking at your home screen you’ll see this section here is your accounts payable section anything having to do with bills paying bills purchasing inventory things like that will show up in this section when you’re entering bills if it’s just a normal Bill like an electric bill for example you would start here if your business deals with inventory and the purchase order system then you would actually start here and follow the flow chart all the way across to this enter bills button right here you can see from there the flowchart would go this way before I get started I wanted to mention that a lot of people don’t enter bills in QuickBooks they just have them on their desk or know that they’re due and they’ll go ahead and make that payment and enter the payment in QuickBooks that’s certainly okay your accounting will be correct but if you need to forecast your expenses you will want to use the enter 
bills feature that you see right here and enter all of your bills you will have some things that are automatically deducted like a car payment for example you might not think to enter a bill for that but I would go ahead and do that because it's part of your forecasting of your expenses for the next few months anything you want to show up in those reports you want to put in the enter bills section right here let's go ahead and start with enter bills and let's say that our subcontractor Smith Painting has sent us a bill for some work that he's completed he expects to get paid you can see I typed in SM for Smith Painting and it popped up I could also click the down arrow and choose that vendor from the list if it's a brand new vendor that you haven't entered yet you'll notice at the top of this list it says add new that keeps you from having to go back to your home screen and enter them through the vendor Center you could do it right from here once you click down in the address section you'll see that the address appeared if you needed to edit that you could edit it right here and once you save this transaction QuickBooks would let you know that it's going to change it permanently for you you'll notice right below that it says the terms are net 30. that's because we set up the terms of net 30 when we set up the vendor over in the vendor Center if you notice over here it pulls in today's date and the bill due date is 30 days from that date what you want to do is put in the correct bill date that means if you have a paper bill in front of you go ahead and put in the date it was printed you will want to make sure that the bill due date is correct in a case like that you can go ahead and change that to 30 days out or if it says specifically on the bill that it is due on a certain date then you change that because this date here is important so when you pull reports you'll know when things are actually due this reference number would be the number that is on the bill then you have the amount due we're going to say it's 500 dollars you also have a place to put a memo if you'd like you'll notice down at the bottom there are two tabs there is the expenses tab and there's the items tab when you're finished these two tabs have to total this amount due to the penny or it will not let you save it you'll notice under the expenses tab if you click on the first line this is your chart of accounts you can choose whatever account you'd like to put this to if you have subcontractors then you probably have set these up as a cost of goods sold and you can pick any one of these I'll just choose subcontractors the next thing you'll see is the amount this is five hundred dollars for this one account but this can be broken up into as many accounts as you like
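To show that to-the-penny rule in code form (a sketch only, not how QuickBooks enforces it, and the dollar figures are invented), the expense lines and item lines on a bill have to add up to the amount due before the bill can be saved:

```python
def can_save_bill(amount_due, expense_lines, item_lines=()):
    """True only when the splits on both tabs total the amount due to the penny."""
    split_total = round(sum(expense_lines) + sum(item_lines), 2)
    return split_total == round(amount_due, 2)

print(can_save_bill(500.00, [500.00]))          # True  -- one line to Subcontractors
print(can_save_bill(500.00, [300.00, 150.00]))  # False -- the splits are $50 short
print(can_save_bill(500.00, [300.00, 200.00]))  # True  -- split across two accounts
```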
many accounts as you like if some of the money went to a different account you would just click on the next line down and choose that account and that amount from the list the other thing I want you to notice is there’s a place for a memo and also very important if you are tracking job costing in your business you would want to choose the customer and the job that this particular expense needs to relate back to if you’re using the class feature you can see that the class option is here and I’ll go ahead and pick remodel now this little box that says billable in the middle here when we talked about invoices there was an option to pull in any expenses that you had incurred relating to this particular customer job that you wanted to be reimbursed for if this is checked then you’ll be able to do that now I do want to mention the items tab over here when you look at items these are physical products that you sell in your business if you are purchasing inventory for example you’ll want to make sure that you pick that particular product from this list so that it goes into your inventory count if you don’t do that and you just say it’s an expense over on this tab then it will not show up in your inventory I’m just going to choose expenses and I’m choosing the subcontractors in this case and I’m going to go ahead and save and close and now that bill has been entered now let’s go back and look at a couple of things first of all I’m going to my vendor Center up here and I’m just going to go down and pick my vendor which is Smith painting and when I click on Smith painting on the left now you’ll see the bill that I entered for Smith painting I’m going to double click and open it back up and there you’ll see it over on the right hand side of your screen you can actually hide this history or if you want to show this history you can basically what it will do is just give you some history like it says of this particular vendor you can see there’s a summary if there was any purchase orders that needed to be received here’s the bill we just entered you won’t see this until it’s saved but since this was saved and we opened it we now see that bill and any notes we had entered down here I’m going to hide that for a few moments and let’s look at a couple of options we have up at the top I want to point out that this is a bill but sometimes a vendor may issue you a credit we’re going to go over vendor credits in video number six of this module but just notice that option a lot of these options on your icon bar you’ll already be familiar with you know how to search you can go backwards or forwards and search for a particular bill you can also hit find and put in some criteria this new option would be the same thing as the save and new Option at the bottom right of your screen and of course if you’re working on this and you’re not finished you can hit save so that you don’t lose any of your information if you wanted to delete this bill this is where you would delete it you could also just void it meaning it would stay in the system but it would be voided and the amount due would be zero if you had another bill you wanted to enter that was very similar to this one you could create a copy and then make those few changes on that copy we’re going to talk about memorizing in a later module but if you have a bill that you need to pay on a repetitive basis the car payment is a really good example you pay it every month on the first of the month you could enter that bill one time and then QuickBooks would
automatically enter it each month for you because it was memorized here’s where you can print this you can also attach a file here if you had a physical paper bill that you want to attach to this you could do that so you could open it up from here if you need to look at it also if you’re working with purchase orders and you happen to have chosen the wrong one you can go ahead and select the correct one from here and if you needed to enter some time associated with this you could do that this would go into your job costing and we’re going to talk more about this in a later module clear splits that basically means that if you have multiple lines of accounts chosen for example or items you get to clear that and start over you can also calculate or upload and review your bills this is a new feature right here where QuickBooks will actually try to match a bill that you might have as a PDF file for example and we’re going to look at this over in the next video which is number three and of course you can pay your bill from here chances are you’re not going to be on this screen when you’re ready to pay this bill but you can if you happen to be here there are a couple of reports I just want to mention and we’re going to go over reports in a later module but if you wanted to run some reports just on bills you can look at the transaction history here and that would actually show you if there were any payments made towards this you could see that history there you could run an item listing which would just list all of your items if you had some quantity on hand things like that that you might need to know you can look at open purchase orders your vendor balance detail for every vendor you can look at unpaid bills and you can also look at purchases by vendor in a detail format I’m going to hit save and close at the bottom and that’s how you’re going to enter bills in QuickBooks let’s head on over now to video number three in module five and I want to show you a new feature where you have the ability to upload and review your bills right here in QuickBooks hey there welcome back to QuickBooks desktop 2022 my name is Cindy and we are working now in module 5 where we’re talking about the accounts payable function in QuickBooks module 5 is all about vendors and we’re down now to video number five where I want to show you the correct way to pay your bills in QuickBooks let’s head on over to QuickBooks and I will show you how to get those started paying bills is easy in QuickBooks if you look at your home screen this is where you had previously entered that bill and now what you want to do is follow the flow chart all the way across to the pay bills icon before I click on that I just want to mention a couple of things what you don’t want to do is have a bill that you’ve entered and then come down here to either the write checks or the check register to enter that payment if you just come directly to one of these icons what will happen is the bill will stay open because it doesn’t see an association between the bill and the payment make sure you follow the flow chart all the way across to the pay bills once you complete this pay bills option here it will automatically put that payment in the register for you I’m going to click on pay bills this is a list of all the bills that you owe even if you owe a penny and you’ll see that currently they’re listed by vendor in alphabetical order if you want to change that use this sort right here you can sort these by the due date or if you want to sort them by any that
might have a discount or credit you can see different options on this list but we’ll leave it sorted by vendor for now you can also filter the list if you’re looking for a particular vendor let’s just say it’s CU Electric then you can actually hide all the other vendors and just work on paying these two I’m going to go back and show all and that’s going to be at the top of the list it says all vendors notice I’m also showing all the bills if I just want to see ones due before a particular date I can choose this option here let’s say I’ve got a couple of different ones that I want to pay I do want to pay CU electric notice there are two bills for that vendor I’ll just pay them both if you’ve got multiple vendors you’re going to pay bills for you can just check them all at the same time and it will write Separate Checks and it will create separate payments for each or if you want to do them one at a time you could do that as well once you check those off you’ll notice that over to the right it assumes you want to pay all of that bill if you’re wanting to make a partial payment just change this amount right here to whatever it is you’re going to pay and it will remember the rest some of your options down at the bottom are to go to that bill this is one of the few places in QuickBooks you can’t double click on the bill itself you have to actually have that bill selected and then choose go to Bill you can also set your discount here if the vendor is giving you a discount for paying early or for some other reason you can go ahead and take that discount here you would type in the amount of that discount you’re taking and then you would have an account set up in your chart of accounts for that discount and if you’re using the class you might have one of those set up as well if your vendor has issued you a credit you can actually apply that credit here you would need to have that credit already set up in QuickBooks and here you could set that here’s the date of your payment the method of your payment it can be check credit card or bank payment remember that if you’re using a debit card or it’s automatically drafted just choose the check option to be printed you would use this option if you’re going to actually put checks in your printer and print these out if you’re going to just pay it online or maybe it’s automatically deducted just choose the assign check number option and you can type anything you want in that check number field make sure you have the correct bank account Chosen and then choose pay selected bills now here’s the check number field when I mentioned to you a moment ago to use the assigned check number if you’re using a debit card you can leave this field blank or you can put the word debit in there any code that you’d like to show QuickBooks what type of payment that is I’m going to click ok and now that payment has been made this is the payment summary window just letting you see the payments you just made if you have more bills to pay you can choose this option or you can say done you also have in the middle of it an option to print a bill payment stub you would print that if you wanted to send a check and maybe a stub with it there’s your print or email Now options I’m going to click on done and let’s see if it’s in our register I’m going over to the check register here on the right I’m going to use my checking account because that’s the bank account I paid this from and here is your payment for CU electric you’ll see that it says bill payment it does not say check that’s how 
you know you did this correctly that you actually entered a bill and then you made a bill payment to pay this bill I do want to show you one other way that you can pay your bills especially if you have a lot of them this might be a little quicker for you notice I have the vendor Center opened on the left here in the vendor Center you’re going to see this button that says bill tracker this will let you see all of the expenses you have related to a particular customer or job or even if they’re just for the office you can see all of that here notice some of these are purchase orders some of these are bills that have been entered and at the top it gives you some totals here’s the total of your 10 purchase orders any open bills if you want to see if any of these are overdue you can look here and you can also see the total that were paid in the last 30 days now you’ll notice that if you’ve got a purchase order you can convert it to a bill we will be talking about purchase orders over in module six also notice if you have a bill that there’s an option to pay that bill all you would have to do is check the box all the way to the left of that bill and then you’ll see it says pay bill here you also have options to copy or print that bill if you wanted to but when you’re ready to pay it just come down here where it says batch actions and then choose pay bills you’ll see this is your pay bills window that appears you can actually click on that bill if it’s not already chosen if we have another one you want to add to that since there’s another one for Perry Windows I’ll go ahead and check that one as well and then all of the options we just talked about are right here in this window once you’ve made your choices you can pay your selected bills now that payment is in the register as well as the one we did a few moments ago I’m going to choose done and that’s how you pay your bills in QuickBooks now that you know how to pay your bills let’s go ahead and move over into video number six and we’re going to look at how to set up credits that your vendor has given you towards a bill hey there welcome back it’s Cindy we’re working in QuickBooks desktop 2022 and I want to talk to you about vendor credits sometimes a vendor will issue you a credit that you need to apply to a bill or it could be you want to apply it to your account and use it in the future but I want to show you how to handle those once the vendor sends them to you let’s go ahead and flip over to QuickBooks and I’ll show you how to set up those vendor credits I’m here in the vendor Center and you can see that this company owes Cal Gas and Electric 122.68 Cal Gas and Electric has issued a credit memo of 25 towards that bill I want to show you how to enter that credit memo and then how to apply it to that bill I’m going back to the home screen and what you’re going to do is you’re going to use the enter bills option the only difference is you’re going to set this as a credit right here and then just fill out the rest of the form this vendor was Cal Gas and Electric we’ll say they issued this credit memo on December the 20th you can put in a reference number and I’ll just put in it’s for 25 dollars if you want to put in a memo here you can do that in this case since it’s Gas and Electric you want to make sure that you pick the correct account from the chart of accounts remember if you’re getting a credit for a physical item that you’ve returned you want to use the items Tab and choose that item if this is for a
particular customer or job you want to choose it from the list and also make sure that you’re using the correct class if you’re using the class list I’m going to go ahead and save and close at the bottom now let’s go ahead and follow the flow chart all the way to the end where it says pay bills what I want to show you is if you look at Cal Gas and Electric right here if you go ahead and click on that you’re going to see right down here where it says set credits there’s a 25 credit available so you’re not going to see it when you look up here as long as you have the bill selected you want to apply it to all you have to do is set credits you’ll notice this credit appears right here and all you have to do is click done and now you’ll see that credit is used right up here and it shows you there is an amount to pay now of 97.68 if you’re going to pay that just go ahead and make sure it’s checked off and go through the process and pay the selected bill and that’s all you need to do to enter a credit memo and then apply it to a bill I’m going to go ahead and cancel out of this let’s go ahead now and move over into video number six where we’re going to talk about a new feature in QuickBooks where you can schedule and pay your bills directly from QuickBooks hey there welcome back to QuickBooks desktop 2022 this is Cindy we are moving now into module six where we’re talking about how to work with items and inventory we’re going to start off in this first video just talking about where the items list is and how you would actually go through and set up an item or an inventory part there are actually two parts to this video so make sure you watch videos number one and two let’s go ahead and flip over to QuickBooks and we’ll get started entering some of those items if you’re looking at your home screen you’re going to see an icon here that says items and services that’s where you want to click to go ahead and see all of these different items that you would either sell to your customers or purchase maybe for resale when you first set up your QuickBooks company file this list will be empty you won’t have any items you’ll have to go through and set these up yourself I wanted to go through this list because I want to point out that each of the items that you create can be set up as one of these different types that you see in this column some companies will have three or four maybe six seven items other companies might have over a thousand it just depends on what your business actually does let’s go through and look at some of these starting with the type called service what you’ll find is that based on the type the names will be in alphabetical order over here think of a service as an actual service that you provide to your customers if you look at framing that’s a service you provide labor would be a service repairing that would be a service as well sometimes you’re going to have items that are main items and then you’ll have sub items below carpet and drywall are sub items of this word Subs the next type that you’re going to see down here are your inventory parts you’ll notice for each of these inventory Parts you’re going to see the total quantity that you have on hand these are physical parts that you buy or sell the way that this number changes is when you sell this item meaning you put it on an invoice to sell it this number will go down when you purchase this the number will go up when you’re entering your expenses where you’ve actually made the purchase of these items you’re going to see on
the bottom of that screen will be a tab that says items and you’ll be able to actually put that item on that tab let me show you what I’m referring to I’m going to flip back to home for a moment and let’s just say that you were using the write checks feature down here now I know we haven’t gone through this yet but you’re going to see on every type of expense that you enter you’re going to have the same screen down here you could either put this to an expense which is your chart of accounts or this is your items tab here where you can pick an item if I pick door frame and I say that I ordered quantity of two of these then what will happen is over here when I look at door frame this 21 will actually go to 23. you can also manually adjust this inventory we’re actually going to talk about that down in video number seven of this module the next type I want to point out are non-inventory parts these are physical products or parts that you have but you don’t keep a record of how many you have in inventory you might keep two or three on hand all the time just in case you need them or you might order them as you need them but you don’t really keep track of the quantity you have for non-inventory parts below that you’re going to see what they call other charges these are miscellaneous type things you’ll notice there’s a freight reimbursement here there’s a delivery charge they’ve got one set up for permit I did want to point out this next one that says subtotal over here you may not use this very often but what will happen is if you had six or seven line items on an invoice or an estimate and you want to subtotal those you can just add this as the next line and it will actually subtotal everything above it that it hasn’t already subtotaled these are groups down here if you create estimates and you’re constantly creating an estimate for the same thing and maybe it has 100 different line items instead of typing those line items every time you can create a group and that group can include those line items that way the next time you need to put those on an estimate or an invoice you just type this group name and it will pre-populate all of those items for you there is a place to put a discount on your invoice if you’d like and it will figure the discount for you and also you can set up a line item as a payment I personally wouldn’t put payments on an invoice I would make this a separate transaction but you do have that ability and then the last couple of types I want to mention are your sales tax items and your sales tax group when we get to the videos where we talk about sales tax this is where we’re going to come to actually create those items and then if you have several that you pay to one entity you can create a group to include those that gives you a quick overview of the different types of items that you can set up in QuickBooks what I want to do now is go ahead and set up a service item with you just so you can get a feel for how to set up these items anywhere in this list you just right click and you’ll see the new Option here notice there’s also an edit option if you need to edit one of the items you already created you can duplicate an item if there’s a new one you need that’s similar to one you already have you can duplicate it and then make those few changes you need you can also delete an item you cannot delete an item if you’ve ever used it if you want to hide it from the list maybe you’re not going to use it in the future then go down here and make it inactive so it hides it from
this list for now we’re going back to New so that we can create our new service item the first thing QuickBooks will ask you is what type of item is this and these are the types we just mentioned I’m going to say it’s a service item and we’ll actually call this one a consultation if this is a sub item of another you can check the box and pick the item it’s a sub item of remember the example where carpet is a sub item of Subs but this one’s not this is just a new item and then if you’re using the unit of measurement feature you can create a new unit of measurement they have one set up here it looks like they sell things each by the case for example and then of course you have a description area if this is going to change all the time you can just leave it blank I’m just going to duplicate the same item name there and then that can be changed when you’re actually on an invoice if you have a standard rate you charge for this item you’ll want to go ahead and put it in if it’s going to be different every time just leave that on zero and then change it on the invoice or estimate as you need to the next thing QuickBooks asks you is whether this is a taxable item or not this is for sales tax purposes and then the most important thing on this screen is the account this is the chart of accounts and QuickBooks wants to know which account in the chart of accounts do you want this to relate back to nine out of 10 times you want it to relate back to an income account and this is where I see a lot of people have problems because if you don’t point this back to an income account your profit and loss will not be correct there are very few exceptions to that so if you just always pick an income account you’ll be good I don’t see one I’d like to use I’d like to have one called consultation income I’ll just create that I’ll go to the top of the list and choose add new I’m going to put in the account name and I really don’t need to fill any of this other stuff in I’m going to go ahead and hit save and close at the bottom and now you’ll see that it has my new account name there I’m going to click OK and now that item has been created you’ll see it right here at the top of the list that’s how you’re going to create a new service item what I want to do now is stop the video here and let’s move over to part two because I do want to show you how to set up inventory items so that you can go ahead and tell QuickBooks what the quantity is that you have currently and then you have the correct number to move forward with hey there it’s Cindy again welcome back to QuickBooks desktop 2022 we are working in module 6 where we’re talking about items and inventory we just completed part one where I showed you how to go through the items list and start adding items to that list now I want to show you how to specifically add inventory parts to that list let’s head over to QuickBooks and get started we’re back in the items list in QuickBooks in case you forgot how to get here go back to your home screen and you’ll see an icon right here that says items and services we’ll just head back over there to create any new item in your list just right click anywhere in the list and choose the new option the first thing you have to tell QuickBooks is what type of item is this and this is an inventory part let me just mention that if you don’t have inventory part on this drop down list it’s because when you set up the company file you told QuickBooks you do not keep inventory now if you want to change that at this point all you have to do is go back up to edit and
back to your preferences in here you want to click items and inventory on the left then choose your company preferences Tab and make sure all of these are checked once you’ve done that go ahead and click OK and then you will see inventory part appear on this list this is going to be a screen door if this was a sub item of another I would click sub item of and choose that item from the list but this is not and notice there’s a place now for the manufacturer’s part number this is just helpful information for you whenever you’re ordering this part that way you don’t have to keep guessing what that number is if you’re using the unit of measurement you’ll want to choose that from the drop down list you’ll notice when you’re entering inventory that you have two sides to the screen you’ve got the purchase information side and then over here you have the sales information side this is when you buy this item and this is when you sell this item I’m going to put a description in here I’ll just say screen door and of course you can edit that whenever you’re on an estimate or an invoice and what you’ll notice is if you click on the right it’ll put that same description here and of course you can edit that as well this would typically be the description that you would see when you’re purchasing this item from your vendor the next thing you’ll see on the left is the cost on average what do you pay for this if it’s totally different every single time you can leave it on zero I’ll just put in 200. but the sales price is what you charge for this that means anytime you put this on an invoice or an estimate it will pull in in this case 550 and of course you can type over that if you need to if it’s going to vary every single time again then go ahead and leave that blank over here the cost of goods sold account you really don’t want to change this unless you have some sub accounts underneath that you specifically want it to go to but this is considered a cost of goods sold because you have to buy this product to make or sell this product in your business so make sure that stays on a cost of goods sold below that is a field for the preferred vendor if you have a specific vendor you’d like to buy this from then you can go ahead and pick them from the list if not you can just leave that blank over on the right for sales tax purposes is this item subject to sales tax and typically if it’s a physical item it is and the most important thing on this whole screen again is the income account this word says income so that means pick one of these income accounts that you want this to go to when you’re selling this item where do you want it to end up I’ll just say in this case I want it to end up under materials income because this is inventory you’ve got some options down at the bottom you can plug in you don’t want to change the asset account this is the account the asset is going into inventory is considered an asset to your business because it’s worth something it makes your company more valuable but it is liquid and you’ll want to sell it and get it out the door there is a place to plug in a minimum reorder point and a maximum and what that means is do you want QuickBooks to tell you when you have two left to order some more so you don’t run out and what if you get too many maybe you want to have a Max of 10 in the back room and you want QuickBooks to tell you that here’s where you’re going to put in how many you have on hand that’s the starting number QuickBooks will have and remember if you have one every time you
purchase it it’ll add to that number and anytime you sell this it will take away from that number and that’s really all you have to plug in when you’re setting up an inventory part I’m going to click OK and it should tell us that we have one of these let’s go look and see here you can see our screen door and you can see that we have one of them that’s how you’re going to set up inventory and get started with that whole inventory process that’s going to go ahead and wrap up the items and inventory part of this module let’s go ahead and move over to video number three and we’ll talk about purchase orders hey there welcome back to QuickBooks desktop 2022 this is Cindy we are working in module 6 where we’re talking about items and inventory and we’re down now to video number three I want to show you how purchase orders work in QuickBooks if your company uses the purchase order system that means that when you order an item you’re going to actually enter a purchase order once that inventory comes in you’re going to actually enter an item receipt and then eventually you’re going to pay for that and that’s the purchase order system but let’s go ahead and start with video number three here and I’ll show you how to enter purchase orders before we get started I want to go back to the items list and show you that we have one screen door right now as an inventory part and we want to order three more we’re going to use the purchase order system to do that back on the home screen you should have an icon here that says purchase orders if you do not have this icon it’s because you told QuickBooks when you set up the company file that you do not track inventory to turn that on you would go back to edit down to preferences make sure you’re under the items and inventory option on the left and then click on company preferences you want to make sure all three of these are on and that way your icon will show up on your home screen I’m going to click on purchase orders and we’re going to enter the purchase order the first thing it asks me is who is the vendor that you want to order this from and we’re going to say this is Perry windows and doors this would be my vendor list if you’re using the class feature then you’ll want to go ahead and choose which class this would pertain to if you want to drop ship this to a particular customer site that means when you order it you want the vendor to send it to the customer site then you could choose it from the list but if you’re not doing that you’re just going to put them in your office then you don’t want to choose anything here you can also choose a template for your purchase orders we only have one right now but we can create those and we’ll talk about that in a later module the next thing you want to do is check the date this is going to be the date you’re creating the purchase order and then you want to put in the purchase order number QuickBooks will number sequentially if you want to change that number then you just change it to whatever you’d like it to be here is the vendor information that it pulled in when you chose your vendor right up here at the top and next to that is the ship to information if you were shipping it directly to a client site or if you’re not you’ll see that it pulled in your company information what you want to do is go down on the first line under item and you just want to pick screen door and if you want to add a description this would be the description the vendor uses they might have a particular part number they might call it a
particular brand name but you can change that to say anything you’d like we’re going to order three more of these and let’s say that we’re ordering three because they’re on sale they’re 185 dollars so see I’m going to change that right there if they were for a particular customer again you would pick the customer job from this list and if you were using the unit of measurement you would choose it from that list if you could and you can see pulled over the total there of 555. you can keep adding as many items as you like to this purchase order a couple of quick things you’ll notice at the bottom left of the screen there is a place for a vendor message that would be something that you would put in there that only you would see and then a place for a memo over the right hand side you can see your history here if you want to hide that just use the arrow to hide the history or the left Arrow to show the history but that’s just going to show you information about the vendor you can see their phone number if you have an open balance with them if you have any purchase orders still that need to be received and then recent transactions will show up here now let’s head up to the top and see if there’s anything new I think you’re familiar with most of this already you know how to print email these let’s go ahead and see what a purchase order would look like I’ll preview this you can see it just says purchase order here at the top it has all the information on it you can see the company information here on the left there’s the vendor name and address and the ship to name and address and what you’re actually ordering I’m going to close that screen one thing that you haven’t seen yet is this icon here that says create item receipts now that’s going to be the next thing in the process once you receive the physical products into your office chances are you’re not going to be on this screen when you want to receive your items but you could do that and you would also select your item receipts from here but again we’re going to do that back from the home screen there are some reports you can run back here if you want to look at any open purchase orders you can do that you can look at purchase orders by job you can look at them by item detail you might see an item listing and then purchases by vendor detail I’m going to click save and close at the bottom and now that purchase order has been created what I want to do is go back to the items list and just show you that you still only have one screen door right here and that’s because you’ve only ordered it you have not received it into inventory yet and that’s the next thing we’re going to talk about if you’ll head on over to video number four we’re going to talk about how to receive those items into inventory hey there it’s Cindy again welcome back to QuickBooks desktop 2022 we are working our way down through module 6 where we’re talking about items and inventory we’re down to video number four now we’re going to talk about how to receive items into inventory these would be items you’ve already created a purchase order for and now you’ve received them you want to update QuickBooks so that your inventory number is correct let’s flip over to QuickBooks and we will look at how to receive those items into inventory in QuickBooks we’re back in our items list and I wanted to point out the screen doors you still only have one as far as the quantity is concerned because we’ve ordered three more but we haven’t received them now that they’ve come in I want to show you 
how to go ahead and receive those so that they’ll show up in this list and this quantity will change to four I’m going to go back to the home screen and following the flow chart I’m going to go to the receive inventory icon you’ll notice here that there’s a down arrow because there are two ways that this can happen one you can receive your inventory and the bill is actually in the box with it or you can receive it and later the bill will come in the mail they’ll just have a packing slip for you to look at right now but let’s go ahead and use the bottom option we’ll say without the bill because that’s normally the way it happens and now you’re on what’s called an item receipt now you have to have a purchase order in here already in order to pull from that purchase order the first thing QuickBooks wants to know is who is the vendor in this case it was Perry windows and doors because we have an open purchase order then it’s letting us know that we can receive against one or more of these we’re going to say yes this is how you’re going to close the purchase order you’ll see there are two purchase orders here and you’re going to choose the one you want to receive against let’s just say I picked this one and I click ok you’ll notice it pulled in those three screen doors what if you clicked the wrong purchase order and you want to go back and pick the correct one instead of getting out of this whole thing notice you have a button up here that says select po you would uncheck the one you pulled in check the correct one and then click ok and you can see that I really needed that first one so I’ll go back and choose that first one but the point is that you can actually change the purchase order that you pulled in the next thing I want to do is go ahead and pick the date this is the date that I actually received the items let’s go ahead and say it was January the 3rd you wanted to put in a reference number you could and then the total and that total shouldn’t change because this is not a bill where they would add shipping or tax this should be the same amount that was actually on the purchase order now the only exception is what if when you’re looking at these items here they only sent you two of those doors then you would change this to two which obviously would change your total up at the top and then you could go through the same process when that last one actually comes in going across you’ll see it pulled in the cost it pulled in the amount and if you wanted to make this billable meaning that you would have the customer reimburse you you could click here and you can see it pulled everything you need right from here let’s go back up and see if there’s any new buttons that you’re not familiar with up here you should be familiar with all of these we talked about selecting a PO if you choose the wrong one you could also enter some time what this means is if you are tracking the time that it takes you or an employee to actually process this maybe open the box put the things in inventory the whole process then you can do this because it’s part of your job costing you just pick a start date and an end date for this and then you would click ok you can also clear the splits that means that if you had multiple line items you could clear that and start over and you could recalculate it there are some reports you can run as well related to items there’s an item listing which just list all your items if you want to see the details about any purchase orders that are open you can do that there’s a vendor 
balance detail and unpaid bills detail and purchases by vendor let’s go ahead and click save and close at the bottom and see if our item list has been updated when I look at screen doors you can now see that I have four screen doors so you can see that once you create those purchase orders it’s not going to change your quantity but once you receive the item in inventory then it will that’s how you’re going to receive items into inventory in QuickBooks let’s move over to the next video number five and we’ll quickly walk through how to handle the bills for those items that you just received hey there welcome back it’s Cindy again we are working through QuickBooks desktop 2022. right now we’re in module number six where we’ve been discussing items and inventory and I want to take a quick moment in this fifth video here and talk to you about how to handle the bills for items that have gone through the purchase order process it’s really going to be fairly easy it’s very similar to just entering any Bill and paying it but I want to follow the flowchart and show you how to take this all the way through let’s go ahead and flip over to QuickBooks and we’ll get started if you have a bill you need to enter that’s for items that went through the whole purchase order system you want to enter those here because you want to complete the flowchart I’m going to click on enter bills against inventory the first thing you want to do is choose an item receipt and you need to pick the vendor this was Perry windows and doors in the previous video we had actually ordered three screen doors and now we’re ready to enter that bill you can see that an item receipt came up for Perry windows and doors you want to make sure you choose the correct one this would be any items you received through this receive inventory button right here I’m going to click ok and now you can see a bill and it’s pulled in all three of the screen doors that we had ordered when you receive a bill you’ll probably have some additional charges like a shipping charge or something else go ahead and just change all the information needed to make this correct this bill would be for six hundred dollars in this case and I’ll just put in a reference number that would be the bill number or the invoice on the vendor side as they would call it make sure you have the bill date correct and make sure you have the due date correct that’s the important one that way when you run reports you’ll have accurate dates when your bills are due now in this case we’re going to break up our amount due we know that 550 is for the items we purchased but the rest of it is for a shipping or freight charge you would click on the expenses Tab and choose Freight and delivery from this drop down list and then put in that amount go ahead and save and close and QuickBooks will say that this transaction is linked to others are you sure you want to change it and you would just say yes here now that Bill is in QuickBooks ready for you to pay it let’s go ahead now and hop over to video six and we’ll go ahead and get those items paid for hey there welcome back to QuickBooks 2022 this is the desktop version and we are going through module six items and inventory we’re down to video number six now paying for items there’s a couple different ways you’re going to be able to actually enter those items in QuickBooks especially if you have inventory and you want it to increase as you make those purchases it could be that it’s a bill you’ve put in and you just want to pay that bill or it
could be that you’re actually using a check or debit card or purchasing them online let’s go ahead and flip over to QuickBooks and we’ll look at how to pay for those items let’s go ahead and start with the easiest thing first so far in this module we’ve gone through this whole purchase order system we actually ordered some items we ordered three screen doors we actually received that into inventory and this is when your inventory number increased and then we actually entered the bill when you get ready to pay the bill just follow the flow chart all the way down to this pay bills option right here if you remember we actually purchased our doors from Perry windows and doors and looking down this list you can see there are several you just want to go ahead and check off the one or ones that you’re going to pay and pay this like you normally would when you’re paying any Bill I’m going to go down and change the date that I’m making the payment I’ll choose the method I’ll just leave assign check number and go ahead and pay selected bills remember that when this screen pops up you can enter anything you want in the check number field I’ll just say that I used a debit card to pay this and I’ll click ok and now that is done I’m going to go ahead and choose the done option so that’s the first thing I wanted to point out is paying the bill the next thing is when you’re entering the expense let’s say that you actually went to your local store and you purchased those screen doors maybe you wrote a check used your debit card it really doesn’t matter how you paid for it in QuickBooks desktop anytime you have something that comes out of the checking account it’s considered a check you can use the write checks window whether you paid for it online debit card it doesn’t matter you can use that window or you can use the check register I’m going to use the write checks let’s say that I want to purchase one more screen door I’ll go back to Perry windows and doors notice it tells me I have open purchase orders since I’m purchasing this and I’m not actually applying this to a purchase order I’m going to say no in this case and that way the purchase orders stay open for the next time I’m going to say that the door was 185 dollars now here’s the important thing this is your chart of accounts when you’re under the expenses tab you could most certainly if you’re not tracking inventory then you could come down and pick that this was a job material or whatever account you’d like to use however in this case I want to add that door to my inventory so that means I need to delete this line here and make sure I put it under the items tab just a quick little trick whenever you’re trying to delete an entire line anywhere in QuickBooks just click on that line and all you have to do is hold Ctrl and hit your delete key and it will delete the entire line now I’m going to go to the items Tab and I’m going to put in screen door and there you can see it pops up I’m purchasing one of these it looks like it’s 185 dollars and if it is for a particular customer job I can choose that here but if I’m trying just to keep that in my inventory I’ll just leave that empty now I’m going to go ahead and save this and then close this window when we go back to our items and services let’s see how many we actually have right now we’re going to go down and find a screen door and we have five of them one of them was just added to our inventory we haven’t yet talked about credit cards but if you’re using a credit card to make that purchase the same
thing would apply you would click here and you would still see that you have the two tabs you’d want to make sure it’s under the items tab so that goes into your inventory and that’s what I wanted to point out to you about actually paying for those items specifically when you want items to go into your inventory let’s go ahead and finish up module 6 over in the seventh video I’ll show you how to manually adjust your inventory because inventory is going to get off and you want it to always be as accurate as possible hey there welcome back to QuickBooks desktop 2022 this is Cindy we are getting ready to wrap up module six we’ve spent some time talking about items and inventory and in this last video I want to show you how to manually adjust inventory inventory is just going to get off and every now and then you’ll go in the back room and you’ll count and you’ll want to adjust the number in QuickBooks to match what you actually have I want to show you how to do that let’s go ahead and flip over to QuickBooks and we’ll see how to adjust that inventory on the home screen you’ll want to navigate to the items and services icon and that way you can see your list of items if you look at your screen doors here we have five we’re going to adjust this because let’s say there’s only four that we actually have on hand we need to decrease that number by one at the bottom of your items list there are a lot of different options most of these options you can get to by right clicking anywhere on your screen you’ll see there is the adjust quantity/value on hand option which you would also find under activities the very bottom option down here the first thing you need to tell QuickBooks is what type of adjustment is this are you adjusting the quantity the total value or both we’re just adjusting the quantity set your adjustment date whatever date you’re doing this adjustment and then you want to have an adjustment account now I’ve set up one called inventory adjustments you’ll want to go ahead and set that up yourself because it will not automatically be in the list QuickBooks will number each one of these adjustments you can change that reference number if you like also if you’re adjusting this for a particular customer job you can choose that from the list if you’re just adjusting inventory on hand then you leave that blank make sure you choose the class this would apply to as well the next thing you want to do is go down and find the item from the items list that you’re adjusting in this case it’s a screen door it tells me I have five but the new quantity I want is four that means there will be a negative one adjustment I’m decreasing by one if you want to put a memo here you can down at the bottom but that’s really all you need to do I’m going to save and close and now if we go down and look at our screen doors we should have four and we do you can see it right there and that’s how you’re going to adjust your inventory that you have on hand well that’s going to go ahead and wrap up module 6 where we’ve talked about items and inventory why don’t we move over now to module 7 and we’re going to look at all the different options related to banking features in QuickBooks if you’re not a subscriber click down below to subscribe so you get notified about similar videos we upload now to get the course transcript and follow along with this video click right over there and click over there to watch more videos on YouTube from Simon Sez IT
These sources offer a comprehensive overview of Google Cloud Platform (GCP) services. They begin by examining data storage options like Datastore, Firestore, and Memorystore, then discuss networking fundamentals including VPCs, firewalls, load balancing, and DNS. The sources explore resource organization, IAM roles and permissions, and service accounts. Finally, the sources detail compute engine instances, Kubernetes Engine (GKE), Cloud VPN, Cloud Functions, Cloud SQL, and Cloud Storage, offering insights into their configuration and use.
Cloud Computing Fundamentals Study Guide
Quiz
What is the purpose of the git pull command?
Why is it recommended to install a code editor like Visual Studio Code?
Explain the purpose of creating a budget alert in Google Cloud.
What is a Cloud Monitoring workspace and why is it needed for budget alerts with user-specific notifications?
What is Google Cloud Shell, and what are some of its key features?
How can you customize the Cloud Shell environment, and why might you want to do so?
Explain the difference between rate quota and allocation quota in Google Cloud.
What are IAM policies, and why are they important for security in Google Cloud?
What is the purpose of a conditional role binding in IAM?
What are audit logs, and why might you want to enable them in Google Cloud?
Quiz Answer Key
The git pull command downloads any updated files or folders from the remote repository to your synced local copy. This ensures you have the latest version of the files.
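For reference, a minimal command sequence illustrating this workflow; the repository URL and folder names below are placeholders, not values from the source.

```bash
# Clone the repository once (URL and target folder are example values).
git clone https://github.com/example-org/example-repo.git repos/example-repo

# Later, from inside the synced local copy, download any updates from the remote.
cd repos/example-repo
git pull
```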
A code editor provides ease of use for managing and editing code, with features like syntactical highlighting, making it easier to work with files such as YAML or Python documents.
Creating a budget alert allows you to monitor your cloud spending and receive email notifications when your spending reaches a specified percentage of your budget, helping you control costs.
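As a rough illustration, a budget with percentage-based threshold rules can also be created from the command line with gcloud; the billing account ID, display name, and amounts below are placeholder values, and the exact flags may vary by gcloud version.

```bash
# Create a budget of 100 USD with email-triggering thresholds at 50% and 90% of spend.
gcloud billing budgets create \
  --billing-account=000000-AAAAAA-BBBBBB \
  --display-name="monthly-project-budget" \
  --budget-amount=100.00USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9
```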
A Cloud Monitoring workspace is a container used to organize and control access to your monitoring notification channels. It’s needed because email notification channels must belong to a monitoring workspace to function, allowing you to send alerts to specific users.
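A hedged sketch of creating such an email notification channel with the gcloud CLI (the display name and address are examples; at the time of writing the command lives in the beta component):

```bash
# Create an email notification channel in the project's Cloud Monitoring workspace.
gcloud beta monitoring channels create \
  --display-name="budget-alerts-ops" \
  --type=email \
  --channel-labels=email_address=ops@example.com
```

The resulting channel can then be attached to a budget's notification settings so that alerts reach that specific user rather than only the billing administrators.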
Google Cloud Shell is a browser-based command-line interface for managing Google Cloud resources. Key features include pre-installed tools, automatic authentication, persistent disk storage, and a built-in code editor.
You can customize the Cloud Shell environment by creating a .customize_environment file with a script to install additional tools. This is useful for having your preferred tools available whenever Cloud Shell boots up.
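A minimal example of what such a file might contain, assuming the extra tools you want are available through apt; the package names here are arbitrary examples.

```bash
#!/bin/sh
# ~/.customize_environment runs as root each time the Cloud Shell VM boots,
# so extra tools installed here are available in every new session.
apt-get update
apt-get install -y tree htop
```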
Rate quota limits the number of API requests per day and resets after a specific time, while allocation quota limits the number of resources like virtual machines and does not reset unless resources are explicitly released.
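To inspect current allocation quotas and their usage, commands along these lines can be used; the project ID and region are placeholders.

```bash
# Project-wide allocation quotas (e.g. total CPUs, static IP addresses) with current usage.
gcloud compute project-info describe --project=my-example-project

# Per-region quotas for a single region.
gcloud compute regions describe us-central1 --project=my-example-project
```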
IAM (Identity and Access Management) policies define who (members) has what access (roles) to Google Cloud resources. They are crucial for controlling access and ensuring resources are protected.
Conditional role bindings grant access to Google Cloud resources based on certain conditions, such as time-bound access or resource attributes, providing more granular control over permissions.
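As an illustrative sketch, a time-bound conditional role binding can be added with gcloud roughly as follows; the project ID, member, role, and expiry timestamp are all example values.

```bash
# Grant the Compute Viewer role to a user, but only until the given date.
gcloud projects add-iam-policy-binding my-example-project \
  --member="user:alice@example.com" \
  --role="roles/compute.viewer" \
  --condition='expression=request.time < timestamp("2026-01-01T00:00:00Z"),title=temporary-viewer-access'
```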
Audit logs record administrative activities and access to data within Google Cloud services. Enabling them provides a history of actions taken, aiding in security monitoring and compliance.
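Data Access audit logs are controlled through the auditConfigs section of a project's IAM policy; one possible workflow is sketched below, with the project ID and service name as examples.

```bash
# Export the current IAM policy, edit it, then write it back.
gcloud projects get-iam-policy my-example-project --format=yaml > policy.yaml

# In policy.yaml, an auditConfigs block like the following enables
# Data Access logs for Cloud Storage (service name is an example):
#   auditConfigs:
#   - service: storage.googleapis.com
#     auditLogConfigs:
#     - logType: DATA_READ
#     - logType: DATA_WRITE

gcloud projects set-iam-policy my-example-project policy.yaml
```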
Essay Questions
Discuss the different methods for connecting to Google Compute Engine VM instances, highlighting the advantages and disadvantages of each, and explain which scenarios are best suited for each method.
Explain the role of IAM (Identity and Access Management) in Google Cloud and how you can use IAM policies, including conditional role bindings, to implement a least privilege security model for a project.
Describe the purpose of network address translation (NAT) and its different types (static, dynamic, PAT). How does PAT work in Google Cloud, and what advantages does it provide?
Compare and contrast public and private DNS zones in Google Cloud DNS. Describe a scenario where you would use a private DNS zone and explain how to configure it.
Discuss the importance of creating and managing snapshots for Google Compute Engine disks. Explain how you can use snapshots and snapshot schedules to protect your data and ensure business continuity.
Glossary of Key Terms
Repository: A storage location for software packages.
Clone: To copy a repository from a remote source to your local machine.
git pull: Command to update local files with changes from a remote repository.
Code Editor: A software application for creating and editing code.
Budget Alert: A notification triggered when cloud spending reaches a specified threshold.
Cloud Monitoring Workspace: A container used to organize and control access to monitoring notification channels.
Cloud Shell: A browser-based command-line interface for managing Google Cloud resources.
Quota: Limits on resource usage in Google Cloud.
IAM (Identity and Access Management): A service for managing access control to Google Cloud resources.
IAM Policy: A document that defines who (members) has what access (roles) to Google Cloud resources.
Conditional Role Binding: A role binding with a condition that must be met for the role to be granted.
Audit Logs: Records of administrative activities and access to data within Google Cloud services.
CIDR (Classless Inter-Domain Routing): An IP addressing scheme that allows for more flexible allocation of IP addresses.
VPC (Virtual Private Cloud): A private network within Google Cloud.
Subnet: A sub-division of a VPC network.
Ephemeral IP Address: A temporary IP address assigned to a resource.
Static IP Address: A persistent IP address assigned to a resource.
DNS (Domain Name System): A hierarchical system for translating domain names into IP addresses.
Resource Record: A basic data entry in the DNS system.
SOA (Start of Authority) Record: A DNS record that contains essential information about a DNS zone.
NAT (Network Address Translation): A method of remapping IP addresses to allow private networks to communicate with the public internet.
Cloud DNS: Google Cloud’s managed DNS service.
Virtualization: The process of running multiple operating systems on a single physical server.
Metadata: Data about data, providing information about resources in Google Cloud.
Startup Script: A script that runs when a VM instance starts up.
Snapshot: A point-in-time copy of a disk.
Snapshot Schedule: A policy that automatically creates snapshots of persistent disks on a defined schedule.
Google Cloud Platform Setup and Management Guide
Briefing Document: Google Cloud Platform Setup and Management
Overview: This document summarizes the key themes and procedures described in the provided text excerpts, focusing on setting up development environments, managing Google Cloud resources, and using specific Google Cloud tools. The sources primarily provide step-by-step instructions for tasks within the Google Cloud ecosystem.
Main Themes and Ideas:
Setting up a Local Development Environment:
Cloning a Git Repository: The process starts with cloning a Git repository to a local machine (Windows, macOS, or Linux).
“just hit enter and it will clone your repository into the repos directory”
“in order to keep these files up to date we need to run a different command which would be a git pull and this can be run at any time”
Installing Git: Instructions are provided for installing Git on different operating systems if it’s not already installed.
“for those of you who do not have get installed you will be prompted with this message to install it”
Installing VS Code: The document recommends installing Visual Studio Code for editing code.
“…i’m going to browse to this url https colon forward slash forward slash code.visualstudio.com and I’ll make sure that the url is in the text below there is a version of this code editor available for windows mac os and linux”
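To tie the repository-setup steps above together, here is a minimal command sequence, assuming a repository URL copied from the GitHub “Clone with HTTPS” button (the URL below is a placeholder, not the actual course repository):
# create a working directory and clone the course files into it
mkdir repos
cd repos
git clone https://github.com/example-org/example-course-files.git
# later, refresh your synced local copy with any upstream changes
cd example-course-files
git pull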
Managing Google Cloud Budgets and Alerts:
Creating Budgets: The document outlines the steps to create budgets in Google Cloud to monitor spending.
“we’re to go ahead and create a new budget right now so let’s go up here to the top to create budget”
Setting Threshold Rules: The document explains how to set threshold rules to receive email notifications when spending reaches a certain percentage of the budget.
“these threshold rules are where billing administrators will be emailed when a certain percent of the budget is hit”
Configuring Email Notifications: The document describes setting up email notification channels using Cloud Monitoring for budget alerts. This involves creating a workspace in Cloud Monitoring.
“…i’m going to go back up here to create budget I’m going to name this to ace dash budget dash users I’m going to leave the rest as is I’m going to click on next again I’m going to leave the budget type the way it is the target amount I’m going to put ten dollars leave the include credits and cost and just click on next”
“because the email notification channel needs cloud monitoring in order to work I am prompted here to select a workspace which is needed by cloud monitoring so because I have none I’m going to go ahead and create one”
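The excerpts walk through the console, but the same budget and threshold rules can be sketched from the CLI. The billing account ID, budget name, and amounts below are assumptions for illustration, and on older SDK releases the command may live under gcloud beta billing instead:
# create a $10 budget with alerts at 50%, 90%, and 100% of the target amount
gcloud billing budgets create \
  --billing-account=XXXXXX-XXXXXX-XXXXXX \
  --display-name="ace-budget-users" \
  --budget-amount=10USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0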
Using Google Cloud Shell:
Accessing Cloud Shell: The document details how to access and use Google Cloud Shell.
“as you can see up here in the right hand corner as mentioned earlier you will find the cloud shell logo and so to open it up you simply click on it”
Cloud Shell Environment: The document explains the nature of the Cloud Shell environment, including its ephemeral VM, persistent disk storage, and pre-installed tools.
“when you start cloud shell it provisions an e2 small google compute engine instance running a debian-based linux operating system now this is an ephemeral pre-configured vm and the environment you work with is a docker container running on that vm”
“when your cloud shell instance is provision it’s provisioned with 5 gigabytes of free persistent disk storage and it’s mounted at your home directory on the virtual machine instance”
Customizing the Cloud Shell Environment: The document outlines how to customize the Cloud Shell environment by installing additional tools using a .customize_environment file.
“if you’re looking for an available tool that is not pre-installed you can actually customize your environment when your instance boots up and automatically run a script that will install the tool of your choice”
“in order for this environment customization to work there needs to be a file labeled as dot customize underscore environment”
Accessing Cloud SDK and Other Tools: The document shows that Cloud Shell comes with the Cloud SDK and other useful tools.
“as i mentioned before the cloud sdk is pre-installed on this and so everything that I’ve showed you in the last lesson with regards to cloud sdk can be done in the cloud shell as well”
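As a concrete illustration of the customization mechanism described above, a minimal .customize_environment script placed in the Cloud Shell home directory might look like the following (the tools installed here are arbitrary examples):
#!/bin/sh
# ~/.customize_environment -- runs automatically each time the Cloud Shell VM boots
sudo apt-get update
sudo apt-get install -y tree htop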
Understanding Google Cloud Limits and Quotas:
Quota Types: The document distinguishes between rate quota and allocation quota.
“there are two types of resource usage that google limits with quota the first one is rate quota such as api requests per day this quota resets after a specified time such as a minute or a day the second one is allocation quota an example is the number of virtual machines or load balancers used by your project and this quota does not reset over time but must be explicitly released when you no longer want to use the resource”
Quota Enforcement: It explains that quotas protect Google Cloud users and aid in resource management.
“quotas are enforced for a variety of reasons for example they protect other google cloud users by preventing unforeseen usage spikes quotas also help with resource management so you can set your own limits on service usage within your quota while developing and testing your applications”
Monitoring Quotas: The document highlights the importance of monitoring quota usage and setting up alerts.
“and so you can also request more quota if you need it and set up monitoring and alerts and cloud monitoring to warn you about unusual quota usage behavior or when you’re actually running out of quota”
Viewing Quota Limits: The document explains how to view quota limits in the Google Cloud Console using the quotas page and the API dashboard.
“there are two ways to view your current quota limits in the google cloud console the first is using the quotas page which gives you a list of all of your project’s quota usage and limits the second is using the api dashboard which gives you the quota information for a particular api including resource usage over time”
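Alongside the console pages mentioned above, quota information can also be pulled from the command line; two illustrative read-only commands (project and region names are placeholders), each of which prints a quotas section listing metric, limit, and current usage:
# project-wide quotas (networks, firewall rules, snapshots, etc.)
gcloud compute project-info describe --project=my-project
# per-region quotas (CPUs, in-use addresses, etc.)
gcloud compute regions describe us-central1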
IAM and Policy Management:
Policy Statements: The document introduces IAM policy statements, which are structured in JSON or YAML format.
“this policy statement has been structured in json format and is a common format used in policy statements moving on we have the exact same policy statement but has been formatted in yaml as you can see the members roles and conditions in the bindings are exactly the same as well as the etag and version but due to the formatting it is much more condensed”
Policy Versions: The document details the different policy versions and the use of conditions within the bindings.
“now as i haven’t covered versions in detail i wanted to quickly go over it and the reasons for each numbered version now version one of the i am syntax schema for policies supports binding one role to one or more members it does not support conditional role bindings and so usually with version 1 you will not see any conditions version 2 is used for google’s internal use and so querying policies usually you will not see a version 2. and finally with version 3 this introduces the condition field in the role binding which constrains the role binding via contact space and attributes based rules”
Conditional Role Bindings: It covers conditional role bindings and how they can be used to manage access based on various attributes, including time and resource attributes.
“conditional role bindings are another name for a policy that holds a condition within the binding conditional role bindings can be added to new or existing iam policies to further control access to google cloud resources”
Audit Logs: It explains how to enable audit logs.
“here I can enable the auto logs without having to use a specific policy by simply clicking on default autoconfig and here I can turn on and off all the selected logging as well as add any exempted users now I don’t recommend that you turn these on as audit logging can create an extremely large amount of data and can quickly blow through all of your 300 credit so I’m going to keep that off”
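To make the version-3 policy structure discussed above concrete, here is a hedged sketch of what a YAML-formatted policy with a single conditional role binding might look like; the member, role, and expiry timestamp are invented for illustration:
version: 3
# etag omitted here; gcloud projects get-iam-policy returns the real value
bindings:
- members:
  - user:example-user@example.com
  role: roles/storage.objectViewer
  condition:
    title: temporary-access
    expression: request.time < timestamp("2026-01-01T00:00:00Z")
A policy in this form can be viewed with gcloud projects get-iam-policy <project-id> --format=yaml and written back with gcloud projects set-iam-policy <project-id> policy.yaml.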
Networking Fundamentals:
CIDR Notation: The document explains CIDR notation and subnetting.
“this method is called classless inter domain routing or cider for short now with cider based networks you aren’t limited to only these three classes of networks class a b and c have been removed for something more efficient which will allow you to create networks in any one of those ranges cider ranges are represented by its starting ip address called a network address followed by what is called a prefix which is a slash and then a number this slash number represents the size of the network the bigger the number the smaller the network and the smaller the number the bigger the network”
VPC Networks and Subnets: The document discusses VPC networks and subnets, including auto mode and custom mode VPCs.
“the default vpc is pre-configured automatically and each google cloud project will have one it is regional in scope each region is a subnet and the default vpc provides a usable subnet in each cloud region by default and is known as auto mode which i will get into shortly”
“now when you create a resource in google cloud you choose a network and a subnet and so because a subnet is needed before creating resources some good knowledge behind it is necessary for both building and google cloud as well as in the exam”
IP Addressing: The document discusses internal and external IP addresses, including ephemeral and static options.
“now an ephemeral internal ip address is an ip address that is temporary and persists until the instance is terminated it is released from the resource in this case the vm instance when the instance is stopped or deleted”
“you can assign an external ip address to an instance or a forwarding rule if you need to communicate with the internet with resources in another network or need to communicate with a public google cloud service sources from outside a google cloud vpc network can address a specific resource by the external ip address as long as firewall rules enable the connection”
DNS Concepts: DNS resolution, caching, and resource records are also discussed.
“dns caching involves storing the data closer to the requesting client so that the dns query can be resolved earlier and additional queries further down the dns lookup chain can be avoided and thus improving load times”
“dns resource records are the basic information elements of the domain name system they are entries in the dns database which provide information about hosts”
Network Address Translation (NAT): The purpose, types, and operation of NAT are described.
“at a high level nat is a way to map multiple local private ip addresses to a public ip address before transferring the information”
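A short, hedged sketch tying the CIDR, subnet, and NAT ideas above together with the gcloud CLI; all names and ranges are illustrative:
# custom mode VPC with one /24 subnet (256 addresses; the larger the prefix number, the smaller the network)
gcloud compute networks create demo-vpc --subnet-mode=custom
gcloud compute networks subnets create demo-subnet --network=demo-vpc --region=us-central1 --range=10.10.1.0/24
# Cloud NAT so instances with only internal IPs can still reach the internet
gcloud compute routers create demo-router --network=demo-vpc --region=us-central1
gcloud compute routers nats create demo-nat --router=demo-router --region=us-central1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges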
Compute Engine VM Management:
Connecting to Instances: The document provides instructions on connecting to Compute Engine VMs using different methods, including SSH from the console, using gcloud CLI, and enabling OS Login.
“so that will use my windows account in order to log into my instance you can use gcloud command or you can use the cloud console and so the first thing i want to do is i want to click on the menu here”
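For reference, the gcloud side of the connection methods described above might look like this; the instance name, zone, and project are placeholders:
# SSH into an instance via gcloud (generates and manages SSH keys for you)
gcloud compute ssh demo-instance --zone=us-central1-a --project=my-project
# enable OS Login at the project level so Google identities control SSH access
gcloud compute project-info add-metadata --metadata=enable-oslogin=TRUE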
Metadata Management: The document explains the use of metadata for instances and projects.
“and so depending on your needs is how you will determine if you will be using instance metadata project metadata and how you would prioritize it in most cases instance metadata will always take precedence over project metadata and so if i use both i will expect to see my instance one value”
Disks: Instructions are given for creating and managing disks.
“okay and my new disk has been created and you can easily create this disk through the command line and I will be supplying that in the lesson text I merely want to go through the console setup so that you are aware of all the different options”
Snapshots: Instructions are given for creating and managing snapshots and snapshot schedules.
“well you’re now going to be backing up a persistent disk which is the disk used by compute engine to give you a little bit more background snapshots act as a safeguard so that in case of failure you’ll always have a copy of the data the disk holds so to understand this better let’s take a quick look at why exactly you would use snapshots”
“the most common type of disaster which would result in data loss would be human error or software malfunction snapshots help prevent this by backing up an exact copy of the data that was on your disk prior to the mishap so you can roll it back quickly”
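As a hedged sketch of the snapshot workflow summarized above (disk name, schedule name, and retention values are arbitrary):
# one-off snapshot of a persistent disk
gcloud compute disks snapshot demo-disk --zone=us-central1-a --snapshot-names=demo-disk-backup-1
# daily snapshot schedule with two weeks of retention, then attach it to the disk
gcloud compute resource-policies create snapshot-schedule daily-backup --region=us-central1 --daily-schedule --start-time=04:00 --max-retention-days=14
gcloud compute disks add-resource-policies demo-disk --zone=us-central1-a --resource-policies=daily-backup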
Key Takeaways:
The text provides a practical guide to setting up and managing Google Cloud resources.
Understanding networking concepts like CIDR and NAT is crucial for configuring Google Cloud environments.
IAM policies and quota management are essential for security and cost control.
Cloud Shell offers a convenient way to interact with Google Cloud resources.
Compute Engine VMs are a foundational element, and various methods exist for managing and connecting to them.
Snapshots are essential for VM protection and backup.
Choosing the appropriate type of IP address for each scenario (ephemeral vs. static, internal vs. external) is essential.
Git, Cloud Shell, Cost & IAM Management in Google Cloud
Git & Repository Management
1. How do I clone a GitHub repository to my local machine?
To clone a GitHub repository, use the command git clone [repository URL] in your terminal. This downloads all of the repository’s files and folders to your local machine, into a new directory named after the repository. Before cloning, you may want to create a working directory with mkdir repos and move into it with cd repos, then run the clone command from there.
2. How do I keep my local repository up-to-date with the latest changes from the remote repository?
To update your local repository, navigate to the directory of the cloned repository in your terminal and use the command git pull. This will download any new changes or updates from the remote repository to your local copy. If there are no changes, you will receive a message indicating that you are already up-to-date.
3. What is the purpose of a code editor like Visual Studio Code, and why is it recommended?
A code editor, such as Visual Studio Code, is recommended for editing code files like YAML or Python scripts. It offers ease of use with features like syntax highlighting, making it easier to manage, edit, and understand code.
Google Cloud Shell
4. What is Google Cloud Shell and what are its key features?
Google Cloud Shell is a browser-based command-line interface for managing Google Cloud resources. Key features include:
A pre-configured environment with tools like the Cloud SDK, Bash, Vim, Helm, Git, and Docker.
Automatic authentication with your Google account.
5 GB of persistent disk storage mounted to your home directory.
A built-in code editor.
A web preview feature for running and viewing web applications.
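Because Cloud Shell authenticates automatically, a few read-only commands are a quick way to confirm the environment once it opens; a small sketch (the project ID is a placeholder):
gcloud auth list                      # shows the account Cloud Shell authenticated with
gcloud config list                    # shows the active project and other defaults
gcloud config set project my-project  # switch the active project if needed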
5. How can I install additional tools or customize the Cloud Shell environment?
To customize the Cloud Shell environment, create a file named .customize_environment in your home directory. Add a script within this file containing the commands to install the desired tools. Cloud Shell will run this script upon instance boot. You can edit the file directly in the cloud shell with the command edit .customize_environment. After editing the file, restart Cloud Shell for the changes to take effect.
6. How can I upload or download files to and from Google Cloud Shell?
You can use the upload and download buttons located in the Cloud Shell toolbar. These options allow you to transfer files between your local machine and the Cloud Shell environment.
Google Cloud Cost Management
7. How can I create budget alerts in Google Cloud?
To create budget alerts, navigate to the “Budgets & alerts” section in the Google Cloud Console. Click “CREATE BUDGET,” provide a budget name, specify the projects and products you want to monitor, set a target amount, and define threshold rules. You can also link monitoring email notification channels to send alerts to specific users when budget thresholds are met.
Identity and Access Management (IAM)
8. How can I grant temporary access to Google Cloud resources using conditional role bindings?
Conditional role bindings enable you to grant time-bounded access to resources. You can add a condition to an IAM policy that specifies a start and end date/time. After the specified expiry date and time, the user will no longer be able to access the resource.
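A hedged, single-command sketch of the time-bounded access described above; the project, member, role, and expiry timestamp are invented for illustration:
gcloud projects add-iam-policy-binding my-project \
  --member=user:temp-contractor@example.com \
  --role=roles/compute.viewer \
  --condition='expression=request.time < timestamp("2025-12-31T23:59:59Z"),title=access-until-end-of-2025'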
Google Cloud Datastore Overview
Cloud Datastore is a highly scalable NoSQL document database designed for automatic scaling, high performance, and ease of application development.
Key aspects of Cloud Datastore:
Scalability: Datastore is built to automatically manage scaling using a distributed architecture.
Transactions: It can execute atomic transactions where a set of operations all succeed or none occur.
Query Language: Datastore has a SQL-like query language called GQL.
Security: Data is automatically encrypted before being written to disk and decrypted when read by an authorized user.
Emulator: The Datastore emulator provides local emulation of the production Datastore environment for local application development and testing (a short CLI sketch appears at the end of this overview).
Use cases: Datastore is ideal for applications relying on highly available structured data at scale, such as:
Product catalogs with real-time inventory and product details.
User profiles that deliver customized experiences.
Transactions based on asset properties.
It’s worth noting that Firestore is the newest version of Datastore, introducing several improvements. Existing Datastore users can access these improvements by creating a new Firestore database instance in Datastore mode, and in the near future all existing Datastore databases will be automatically upgraded to Firestore in Datastore mode.
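The emulator mentioned above can be exercised locally with the gcloud CLI; a minimal sketch, assuming the emulator is run under the beta command surface and the port is arbitrary:
# start a local Datastore emulator for development and testing
gcloud beta emulators datastore start --host-port=localhost:8081
# in another terminal, point client libraries at the emulator via environment variables
$(gcloud beta emulators datastore env-init)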
ACID Transactions in Datastore: A Comprehensive Guide
The sources mention that Datastore has the capability of doing ACID transactions.
Here’s a breakdown of what ACID transactions are (this information is not from the sources):
ACID is an acronym that refers to the four key properties of a reliable database transaction:
Atomicity: This ensures that all operations within a transaction are treated as a single “unit” of work. Either all operations within the transaction succeed, or the entire transaction is rolled back as if it never happened. This prevents partial updates to the database, ensuring data consistency.
Consistency: This ensures that a transaction brings the database from one valid state to another. It enforces rules, constraints, and integrity conditions defined in the database schema. If a transaction violates any of these rules, it’s rolled back, preserving the database’s consistency.
Isolation: This determines how concurrent transactions interact with each other. It ensures that the execution of one transaction is isolated from other concurrent transactions. Isolation levels define the degree to which transactions are isolated from each other, with stricter levels providing greater protection against concurrency-related issues like dirty reads or lost updates.
Durability: This guarantees that once a transaction has been committed, its changes are permanent and will survive even in the event of system failures such as crashes or power outages. Durability is typically achieved through techniques like write-ahead logging and replication, ensuring that committed data is safely stored and can be recovered if needed.
Google Cloud NoSQL Database Options
Google Cloud offers several managed NoSQL database options:
Cloud Datastore: This is a highly scalable NoSQL document database built for automatic scaling, high performance, and ease of application development. Datastore is designed to provide high availability of reads and writes and uses a distributed architecture to automatically manage scaling. It has the capability of doing ACID transactions. Datastore has a SQL-like query language called GQL (a short example follows this list). It is ideal for applications that rely on highly available structured data at scale.
Cloud Bigtable: This is a fully managed, wide-column NoSQL database designed for terabyte- and petabyte-scale workloads, offering low latency and high throughput. Bigtable is built for real-time application serving and large-scale analytical workloads. You can increase the queries per second by adding more Bigtable nodes.
Firestore: This is a flexible, scalable NoSQL cloud database for client- and server-side development. It stores data in documents that contain fields mapping to values, and these documents are stored in collections. Cloud Firestore is serverless.
Memorystore: This is a fully managed in-memory data store service for Redis and Memcached that is used to build application caches. Memorystore automates administration tasks such as enabling high availability, failover, patching, and monitoring. A common use case for Memorystore is caching.
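Since the sources describe GQL as Datastore’s SQL-like query language, a single illustrative query may help; the Task kind and its done and created properties are invented for this example:
SELECT * FROM Task WHERE done = false ORDER BY created DESC LIMIT 10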
Google Cloud Networking Overview
Google Cloud provides various networking services that offer flexibility in establishing different types of networks. Here’s an overview of some core and advanced networking features:
Virtual Private Cloud (VPC): VPC manages networking functionality for Google Cloud resources. It is a virtualized network within Google Cloud, acting as a virtualized data center. VPC is a core networking service and a global resource that spans all available regions in Google Cloud. Each project contains a default network, and additional networks can be created within a project.
Firewall Rules: Firewall rules segment networks with a global distributed firewall to restrict resource access. These rules govern traffic entering instances on a network. A default set of firewall rules is established for each default network, but custom rules can be created.
Routes: Routes specify how traffic should be routed within a VPC. They define how packets leaving an instance should be directed, providing a way to define the path of traffic.
Load Balancing: Load balancing distributes workloads across multiple instances. There are two main types of load balancing:
HTTP/HTTPS Load Balancing: This type of load balancing covers worldwide auto-scaling and load balancing over multiple regions or a single region on a single global IP. It distributes traffic across regions, routing it to the closest region or to a healthy instance in the next closest region in case of failures. It can also distribute traffic based on content type.
Network Load Balancing: This is a regional load balancer that supports all ports. It distributes traffic among server instances in the same region based on incoming IP protocol data, such as address, port, and protocol.
Cloud DNS: Google Cloud DNS is a highly available service for publishing and maintaining DNS records. It provides low latency for DNS queries and allows managing DNS records (e.g., MX, TXT, CNAME, and A records) via the CLI, API, or SDK. With Google Cloud DNS, you publish and maintain DNS records using the same infrastructure that Google uses.
Advanced Connectivity Options: Google Cloud offers advanced connectivity options, including:
Cloud VPN: Connects an existing network (on-premises or in another location) to a VPC network through an IPsec connection. Traffic is encrypted and travels over the public internet between the two networks.
Dedicated Interconnect: Connects an existing network to a VPC network using a highly available, low-latency connection. This connection does not traverse the public internet and connects directly to Google’s backbone.
Direct and Carrier Peering: These connections allow traffic to flow through Google’s edge network locations, either directly or through a third-party carrier.
VPC Network Peering: VPC peering enables you to peer VPC networks so that workloads in different VPC networks can communicate in a private space that follows the RFC 1918 standard, thus allowing private connectivity across two VPC networks. Traffic stays within Google’s network and never traverses the public internet (a CLI sketch follows this list).
Shared VPC: Shared VPCs allow an organization to connect resources from multiple projects to a common VPC network so that they can communicate with each other securely and efficiently using internal IPs from that network. When using shared VPCs, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are considered the shared VPC networks.
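A hedged sketch of setting up VPC Network Peering from the CLI; the peering must be created from both sides, and all network and project names are placeholders:
# run in project-a
gcloud compute networks peerings create a-to-b --network=vpc-a --peer-project=project-b --peer-network=vpc-b
# run in project-b
gcloud compute networks peerings create b-to-a --network=vpc-b --peer-project=project-a --peer-network=vpc-a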
Google Cloud Virtual Private Cloud (VPC) Overview
Virtual Private Cloud (VPC) manages networking functionality for Google Cloud resources. It is a virtualized network within Google Cloud that acts as a virtualized data center. VPC is a core networking service and also a global resource that spans all available regions across the globe.
Key features and characteristics of VPC:
Global Resource: VPC networks, including their associated routes and firewall rules, are global resources and are not associated with any particular region or zone.
Encapsulation within Projects: VPCs are encapsulated within projects, which serve as logical containers for VPCs.
Subnetworks: VPCs themselves do not have IP ranges but are a construct of the individual IP addresses and services within that network. IP addresses and ranges are defined within the subnetworks.
Communication: Resources within a VPC network can communicate with one another by using internal or private IPv4 addresses, subject to applicable network firewall rules. These resources must be in the same VPC to communicate this way; otherwise, they must traverse the public internet with an assigned public IP, use a VPC peering connection, or establish a VPN connection.
IPv4 Unicast Traffic: VPC networks only support IPv4 unicast traffic and do not support IPv6 traffic within the network. VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources. However, it is possible to create an IPv6 address for a global load balancer.
Default Network: Unless disabled, each new project starts with a default network in a VPC. The default network is an auto mode VPC network with predefined subnets, where a subnet is allocated for each region with non-overlapping CIDR blocks. Each default network has default firewall rules configured to allow ingress ICMP, RDP, and SSH traffic from anywhere, as well as ingress traffic from within the default network for all protocols and ports.
Types of VPC Networks: There are two different types of VPC networks: auto mode and custom mode.
An auto mode network has one subnet per region, with automatically created subnets using a set of predefined IP ranges. As new GCP regions become available, new subnets in those regions are automatically added to auto mode networks using an IP range from that predefined block.
A custom mode network does not automatically create subnets, giving you complete control over its subnets and IP ranges. Google recommends using custom mode VPC networks in production.
An auto mode network can be converted to a custom mode network to gain more control, but this conversion is one way, meaning that custom networks cannot be changed to auto mode networks.
Google Cloud Associate Cloud Engineer Course – Pass the Exam!
The Original Text
hey this is anthony tavelos your cloud instructor at exam pro bringing you a complete study course for the google cloud associate cloud engineer made available to you here on free code camp and so this course is designed to help you pass and achieve google issued certification the way we’re going to do that is to go through lots of lecture content follow alongs and using my cheat sheets on the day of the exam so you pass and you can take that certification and put it on your resume or linkedin so you can get that cloud job or promotion that you’ve been looking for and so a bit about me is that i have 18 years industry experience seven of it specializing in cloud and four years of that as a cloud trainer i previously been a cloud and devops engineer and i’ve also published multiple cloud courses and i’m a huge fan of the cartoon looney tunes as well as a coffee connoisseur and so i wanted to take a moment to thank viewers like you because you make these free courses possible and so if you’re looking for more ways of supporting more free courses just like this one the best way is to buy the extra study material at co example.com in particular for this certification you can find it at gcp hyphen ace there you can get study notes flash cards quizlets downloadable lectures which are the slides to all the lecture videos downloadable cheat sheets which by the way are free if you just go sign up practice exams and you can also ask questions and get learning support and if you want to keep up to date with new courses i’m working on the best way is to follow me on twitter at antony’s cloud and i’d love to hear from you if you passed your exam and also i’d love to hear on what you’d like to see next [Music] welcome back in this lesson i wanted to quickly go over how to access the course resources now the resources in this course are designed to accompany the lessons and help you understand not just the theory but to help with the demo lessons that really drive home the component of hands-on learning these will include study notes lesson files scripts as well as resources that are used in the demo lessons these files can be found in a github repository that i will be including below that are always kept up-to-date and it is through these files that you will be able to follow along and complete the demos on your own to really cement the knowledge learned it’s a fairly simple process but varies through the different operating systems i’ll be going through this demo to show you how to obtain access through the three major operating systems being windows mac os and ubuntu linux so i’m first going to begin with windows and the first step would be to open up the web browser and browse to this url which i will include in the notes below and this is the course github repository which will house all the course files that i have mentioned before keeping the course up to date will mean that files may need to be changed and so as i update them they will always be reflected and uploaded here in the repo so getting back to it there are two ways to access this repository so the easiest way to obtain a copy of these files will be to click on the clone or download button and click on download zip once the file has been downloaded you can then open it up by clicking on it here and here are the files here in downloads and this will give you a snapshot of all the files and folders as you see them from this repository now although this may seem like the simple way to go this is not the recommended method to download as if 
any files have changed you will not be up to date with the latest files and will only be current from the date at which you’ve downloaded them now the way that is recommended is using a source control system called git and so the easiest way to install it would be to go to this url https colon forward slash forward slash git dash scm.com and this will bring you to the git website where you can download the necessary software for windows or any other supported operating system and so i’m going to download it here and this should download the latest version of git for windows and it took a few seconds there but it is done and no need to worry about whether or not you’ve got the proper version usually when you click that download button it will download the latest version for your operating system so i’m going to go over here and open this up you’ll get a prompt where you would just say yes and we’re going to go ahead and accept all the defaults here this is where it’s going to install it let’s hit next these are all the components that they’re going to be installed let’s click on next and again we’re going to go through everything with all the defaults and once we’ve reached installing all the defaults it’s gonna take a couple minutes to install and again it took a minute or so we’re going to just click on next and it’s going to ask if you want to view the release notes and we don’t really need those so we can click on ok and simply close that and we’re just going to go over and see if git is installed we’re going to run the command prompt and i’m going to just zoom in here so we can see a little better and there we go and we are just going to type in git and as you can see it’s been installed and so now that we’ve installed git we want to be able to pull down all the folders and the files within them from the repository to our local system and so i’m just going to clear the screen here and we’re going to do a cd to make sure that i’m in my home directory and then we’re going to make a directory called repos and in order to do that we’re going to do mkdir space repos and then we’re going to move into that directory so cd space repos and so again here we want to clone those files that are in the repository to our local system so in order to do that we’re going to use the command git clone so get space clone and then we’re going to need our location of the git repository so let’s go back to the browser and we’re going to go over here to clone or download and here you will see clone with https so make sure that this says https and you can simply click on this button which will copy this to the clipboard and then we’ll move back to our command prompt and paste that in and once that’s pasted just hit enter and it will clone your repository into the repos directory and so just to verify that we’ve cloned all the necessary files we’re going to cd into the master directory that we had just cloned and we’re going to do a dir and there you have it all of the files are cloned exactly as it is here in the repository now just as a note in order to keep these files up to date we need to run a different command which would be a git pull and this can be run at any time in order to pull down any files or folders that have been updated since you did the first pull which in this case would be cloning of the repository again this will provide you with the latest and most up-to-date files at any given moment in time and in this case since nothing has changed i have been prompted with a message stating that i’m 
up to date if nothing is changed you will always be prompted with this message if there was it will pull your changes down to your synced local copy and the process for windows is completed and is similar in mac os and i’ll move over to my mac os virtual machine and log in and once you’ve logged in just going to go over here to the terminal and i’m just going to cd to make sure i’m in my home directory then i’m going to do exactly what we did in windows so i’m going to run the command mk dir space repos and create the repos directory and i’m going to move in to the repos directory and then i’m going to run git now for those of you who do not have get installed you will be prompted with this message to install it and you can go ahead and just install you’ll be prompted with this license agreement you can just hit agree and depending on your internet connection this will take a few minutes to download and install so as this is going to take a few minutes i’m going to pause the video here and come back when it’s finished installing okay and the software was successfully installed so just to do a double check i’m going to run git and as you can see it’s been installed so now that we have git installed we want to clone all the directories and the files from the github repository to our local repos folder so i’m going to open up my browser and i’m going to paste my github repository url right here and you’ll see the clone button over here so we’re going to click on this button and here we can download zip but like i said we’re not going to be doing that we’re going to go over here and copy this url for the github repository again make sure it says https and we’re going to copy this to our clipboard and we’re going to go back to our terminal and we are going to run the command git space clone and we’re going to paste in our url and as you can see here i’ve cloned the repository and all the files and folders within it and so as is my best practice i always like to verify that the files have been properly cloned and so i’m going to run the command ls just to make sure and go into the master directory and do a double check and as you can see the clone was successful as all the files and folders are here and again to download any updates to any files or directories we can simply run the command git space poll and because we’ve already cloned it it’s already up to date and so the process is going to be extremely similar on linux so i’m going to simply move over to my linux machine and log in i’m going to open up a terminal and i’m going to make my terminal a little bit bigger for better viewing and so like the other operating systems i want to clone all the files and directories from the github repository to my machine and so i’m going to cd here to make sure i’m in my home directory and like we did before we want to create a directory called repos so i’m going to run the command mkdir space repos and we’re going to create the repos directory we’re now going to move into the repos directory and here we’re going to run the git command and because git is not installed on my machine i’ve been prompted with the command in order to install it so i’m going to run that now so the command is sudo space apt space install space get and i’m going to enter in my password and install it and just to verify i’m going to run the command git and i can see here it’s been installed so now i’m going to go over here to my browser and i’m going to paste in the url to my repository and over here we’ll have the same clone 
button and when i click on it i can get the url for the github repository in order to clone it again make sure before you clone that this says https if it doesn’t say https you’ll have the option of clicking on a button that will allow you to do so once it says https then you can simply copy this url to your clipboard by clicking on the button and then move over back to the terminal and we are going to clone this repository by typing in the get space clone command along with the url of the repository and when we hit enter it’ll clone it right down to our directory so i’m just going to move into the master directory just to verify that the files are there and again they’re all here so again if you’re looking to update your repository with any new updated changes you can simply run the get space pull command to update those files and so that’s the linux setup so you have a local copy of the lesson files now there’s just one more thing that i highly recommend you do and to demonstrate it i’m going to move back over to my windows virtual machine now i’m going to open up the web browser again open up a new tab and i’m going to browse to this url https colon forward slash forward slash code.visualstudio.com [Music] and i’ll make sure that the url is in the text below there is a version of this code editor available for windows mac os and linux you can simply click on this drop down and you’ll find the link to download it for your operating system but in most cases it should automatically show the correct version so just go ahead and click on download and it should start downloading automatically and you should be able to run it right away now the reason behind me asking you to install this utility is for editing code of different sorts whether you’re adjusting yaml or python documents for deployment manager or even managing scripts a code editor will give you the ease of use when it comes to managing editing and even syntactical highlighting of code as shown here below it will highlight the code to make it easier to understand now if you have your own editor that you would prefer to use go ahead and use that but for those that don’t my recommendation will be to use visual studio code so to install visual studio code we’re just going to accept this license agreement and then we’re going to click on next and we’re just going to follow all the defaults to install it it’s going to take a minute or two and for those running windows you want to make sure that this box is checked off so that you can launch it right away let’s hit finish another recommendation would be to go over here to the task bar so you can pin it in place so that it’s easier to find and so now you have access to all the resources that’s needed for this course but with that that’s everything that i wanted to cover for this lesson so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i wanted to discuss the various certifications available for google cloud as this number keeps on growing and i am looking to keep this lesson as up to date as possible so with that being said let’s dive in now google cloud has released a slew of certifications in many different areas of expertise as well as different experience levels now there are two levels of difficulty when it comes to the google cloud certifications starting off with the associate level we see that there is only the one certification which is the cloud engineer the associate level certification is focused on the 
fundamental skills of deploying monitoring and maintaining projects on google cloud this is a great starting point for those completely new to cloud and google recommends the associate cloud engineer as the starting point to undergoing your certification journey this was google cloud’s very first certification and to me was the entry point of wanting to learn more as an engineer in cloud in my personal opinion no matter your role this certification will cover the general knowledge that is needed to know about starting on google cloud and the services within it which is why i labeled it here as the foundational level course i also consider this the stepping stone into any other professional level certifications which also happens to be a recommended path by google with a great course and some dedication i truly believe that anyone with even a basic skill level in it should be able to achieve this associate level certification now it is recommended from google themselves that prior to taking this exam that you should have over six months experience building on google cloud for those of you with more of an advanced background in google cloud or even other public clouds this certification should be an easy pass as it covers the basics that you should be familiar with adding a google twist to it at the time of this lesson this exam is two hours long and the cost is 125 us dollars the exam is a total of 50 questions which consists of both multiple choice and multiple answer questions each of the questions contain three to four line questions with single line answers that by the time you finish this course you should have the confidence to identify the incorrect answers and be able to select the right answers without a hitch moving into the professional level certifications there are seven certifications that cover a variety of areas of specialty depending on your role you might want to take one or maybe several of these certifications to help you gain more knowledge in google cloud or if you love educating yourself and you’re really loving your journey in gcp you will probably want to consider pursuing them all in my personal opinion the best entry point into the professional level would be the cloud architect it is a natural step up from the associate cloud engineer and it builds on top of what is learned through that certification with a more detailed and more thorough understanding of cloud architecture that is needed for any other certification there is some overlap from the cloud engineer which is why in my opinion doing this certification right after makes sense it also brings with it the ability to design develop and manage secure scalable and highly available dynamic solutions it is a much harder exam and goes into great depth on services available the professional cloud architect is a great primer for any other professional level certification and can be really helpful to solidify the learning that is needed in any other technical role i find it the most common path that many take who look to learn google cloud which is why i personally recommend it to them and at the time of this lesson it also holds the highest return on investment due to the highest average wage over any other current cloud certification in the market google recommends over three years of industry experience including one year on google cloud before attempting these exams with regards to the exams in the professional tier they are much harder than the associate level and at the time of this course is two hours long 
and the cost is 200 us dollars these exams are a total of 50 questions which consists of both multiple choice and multiple answer questions it’s the same amount of questions with the same amount of time but it does feel much harder each of the questions contain four to five line questions with one to three line answers it’s definitely not a walk in the park and will take some good concentration and detailed knowledge on google cloud to solidify a pass after completing the cloud architect certification depending on your role my suggestion would be to pursue the areas that interest you the most to make your journey more enjoyable for me at the time i took the security engineer out as i am a big fan of security and i knew that i would really enjoy the learning and make it more fun for me this is also a great certification for those who are looking to excel their cloud security knowledge on top of any other security certifications such as the security plus or cissp now others may be huge fans of networking or hold other networking certifications such as the ccna and so obtaining the network engineer certification might be more up your alley and give you a better understanding in cloud networking now if you’re in the data space you might want to move into the data engineer exam as well as taking on the machine learning engineer exam to really get some deeper knowledge in the areas of big data machine learning and artificial intelligence on google cloud now i know that there are many that love devops me being one of them and really want to dig deeper and understand sre and so they end up tackling the cloud developer and cloud devops engineer certifications so the bottom line is whatever brings you joy in the area of your choosing start with that and move on to do the rest all the professional certifications are valuable but do remember that they are hard and need preparation for study last but not least is the collaboration engineer certification and this certification focuses on google’s core cloud-based collaboration tools that are available in g suite or what is now known as google workspaces such as gmail drive hangouts docs and sheets now the professional level collaboration engineers certification dives into more advanced areas of g suite such as mail routing identity management and automation of it all using tools scripting and apis this certification is great for those looking to build their skill set as an administrator of these tools but gives very little knowledge of google cloud itself so before i move on there is one more certification that i wanted to cover that doesn’t fall under the associate or professional certification levels and this is the google cloud certified fellow program now this is by far one of the hardest certifications to obtain as there are very few certified fellows at the time of recording this lesson it is even harder than the professional level certifications and this is due to the sheer level of competency with hybrid multi-cloud architectures using google cloud anthos google’s recommended experience is over 10 years with a year of designing enterprise solutions with anthos then a four-step process begins first step is to receive a certified fellow invitation from google and once you’ve received that invitation then you need to submit an application with some work samples that you’ve done showing google your competency in hybrid multi-cloud once that is done the third step is a series of technical hands-on labs that must be completed and is a qualifying 
assessment that must be passed in order to continue and after all that the last step is a panel interview done with google experts in order to assess your competency of designing hybrid and multi-cloud solutions with anthos so as you can see here this is a very difficult and highly involved certification process to achieve the title of certified fellow this is definitely not for the faint of heart but can distinguish yourself as a technical leader in anthos and a hybrid multi-cloud expert in your industry now i get asked many times whether or not certifications hold any value are they easy to get are they worth more than the paperwork that they’re printed on and does it show that people really know how to use google cloud and my answer is always yes as the certifications hold benefits beyond just the certification itself and here’s why targeting yourself for a certification gives you a milestone for learning something new with this new milestone it allows you to put together a study plan in order to achieve the necessary knowledge needed to not only pass the exam but the skills needed to progress in your everyday technical role this new knowledge helps keep your skills up to date therefore making you current instead of becoming a relic now having these up-to-date skills will also help advance your career throughout my career in cloud i have always managed to get my foot in the door with various interviews due to my certifications it gave me the opportunity to shine in front of the interviewer while being able to confidently display my skills in cloud it also allowed me to land the jobs that i sought after as well as carve out the career path that i truly wanted on top of landing the jobs that i wanted i was able to achieve a higher salary due to the certifications i had i have doubled and tripled my salary since i first started in cloud all due to my certifications and i’ve known others that have obtained up to five times their salary because of their certifications now this was not just from achieving the certification to put on my resume and up on social media but from the knowledge gained through the process and of course i personally feel that having your skills constantly up to date advancing your career and getting the salary that you want keeps you motivated to not only get more certifications but continue the learning process i am and always have been a huge proponent of lifelong learning and as i always say when you continue learning you continue to grow so in short google cloud certifications are a great way to grow and so that about covers everything that i wanted to discuss in this lesson so you can now mark this lesson as complete and i’ll see you in the next one [Music] welcome back and in this lesson i’m going to be talking about the fictitious organization called bow tie inc that i will be using throughout the course now while going through the architectures and demos in this course together i wanted to tie them to a real world situation so that the theory and practical examples are easy to understand tying it to a scenario is an easy way to do this as well it makes things a lot more fun so the scenario again that i will be using is based on bow tie ink so before we get started with the course i’d like to quickly run through the scenario and don’t worry it’s going to be very high level and i will keep it brief so bow tie ink is a bow tie manufacturing company that designs and manufactures bow ties within their own factories they also hold a few retail locations where they 
sell their bow ties as well as wholesale to other thai and men’s fashion boutiques and department stores across the globe being in the fashion business they mainly deal with commerce security and big data sets bow tie inc is a global company and they are headquartered in montreal canada they employ about 300 people globally with a hundred of them being in sales alone to support both the brick and mortar stores and wholesale branches there are many different departments to the company that make it work such as in-store staff i.t marketing for both in-store and online sales manufacturing finance and more the types of employees that work in bow tie inc vary greatly due to the various departments and consists of many people such as sales for both in-store and wholesale managers that run the stores and sewers that work in the manufacturing plant and many more that work in these various departments the business has both offices and brick and mortar stores in montreal london and los angeles now due to the thrifty mindset of management concentrating all their efforts on commerce and almost none in technical infrastructure has caused years of technical debt and is now a complete disaster within the brick and mortar location there contains two racks with a few servers and some networking equipment the global inventory of bow ties are updated upon sales in both stores and wholesale as well as new stock that has been manufactured from the factory there are point-of-sale systems in each store or office location these systems are all connected to each other over a vpn connection in order to keep updates of the inventory fresh all office and store infrastructure are connected to each other and the montreal headquarters and the point of sale systems and kiosk systems are backed up to tape in the montreal headquarters as well and like i said before management is extremely thrifty but they have finally come to the realization that they need to start spending money on the technical infrastructure in order to scale so diving into a quick overview of exactly what the architecture looks like the head office is located in montreal canada it has its main database for the crm and point-of-sale systems as well as holding the responsibility of housing the equipment for the tape backups the tapes are then taken off site within montreal by a third-party company for storage the company has two major offices one in london covering the eu and the other in the west coast us in los angeles these major offices are also retail locations that consume i.t services from the headquarters in montreal again being in the fashion business bowtie inc employs a large amount of sales people and the managers that support them these employees operate the point-of-sale systems so we’re constantly looking to have the website sales and the inventory updated at all times each salesperson has access to email and files for updated forecasts on various new bowtie designs most sales people communicate over a voice over ip phone and chat programs through their mobile phone the managers also manually look at inventory on what’s been sold versus what’s in stock to predict the sales for stores in upcoming weeks this will give manufacturing a head start to making more bow ties for future sales now whatever implementations that we discuss throughout this course we’ll need to support the day-to-day operations of the sales people and the managers and because of the different time zones in play the back-end infrastructure needs to be available 24 hours a 
day seven days a week any downtime will impact updated inventory for both online sales as well as store sales at any given time now let’s talk about the current problems that the business is facing most locations hold on premise hardware that is out of date and also out of warranty the business looked at extending this warranty but became very costly as well management is on the fence about whether to buy new on-premise hardware or just move to the cloud they were told that google cloud is the way to go when it comes to the retail space and so are open to suggestions yet still very weary now when it comes to performance there seems to be a major lag from the vpn connecting from store to store as well as the head office that’s responsible for proper inventory thus slowing down the point of sale systems and to top it all off backups taking an exorbitant amount of time is consuming a lot of bandwidth with the current vpn connection now bowtie inc has always struggled with the lack of highly available systems and scalability due to cost of new hardware this is causing extreme stress for online e-commerce whenever a new marketing campaign is launched as the systems are unable to keep up with the demand looking at the forecast for the next two quarters the business is looking to open up more stores in the eu as well as in the us and with the current database in place providing very inefficient high availability or scalability there is a major threat of the main database going down now when it comes to assessing the backups the tape backups have become very slow especially backing up from london and the off-site storage costs continuously go up every year the backups are consuming a lot of bandwidth and are starting to become the major pain point for connection issues between locations on top of all these issues the small it staff that is employed have outdated i.t skills and so there is a lot of manual intervention that needs to be done to top it all off all the running around that is necessary to keep the outdated infrastructure alive management is also now pushing to open new stores to supply bow ties globally given the ever-growing demand as well as being able to supply the demand of bow ties online through their e-commerce store now these are some realistic yet common scenarios that come up in reality for a lot of businesses that are not using cloud computing and throughout the course we will dive into how google cloud can help ease the pain of these current ongoing issues now at a high level with what the business wants to achieve and what the favorable results are they are all interrelated issues so bowtie inc requires a reliable and stable connection between all the locations of the stores and offices so that sales inventory and point-of-sale systems are quick and up-to-date at all times this will also allow all staff in these locations to work a lot more efficiently with a stable and reliable connection in place backups should be able to run smoothly and also eliminate the cost of off-site backup not to mention the manpower and infrastructure involved to get the job done while scaling up offices and stores due to increase in demand the business should be able to deploy stores in new regions using pay as you go billing while also meeting the requirements and regulations when it comes to gpdr and pci this would also give the business flexibility of having a disaster recovery strategy in place in case there was a failure of the main database in montreal now as mentioned before the business 
is extremely thrifty especially when it comes to spend on it infrastructure and so the goal is to have the costs as low as possible yet having the flexibility of scaling up when needed especially when new marketing campaigns are launched during high demand sales periods this would also give bowtie inc the flexibility of analyzing sales ahead of time using real-time analytics and catering to exactly what the customer is demanding thus making inventory a lot more accurate and reducing costs in manufacturing items that end up going on sale and costing the company money in the end finally when it comes to people supporting infrastructure automation is key removing manual steps and a lot of the processes can reduce the amount of manpower needed to keep the infrastructure alive and especially will reduce downtime when disaster arises putting automation in place will also reduce the amount of tedious tasks that all departments have on their plate so that they can focus on more important business needs now that’s the scenario at a high level i wanted to really emphasize that this is a typical type of scenario that you will face as a cloud engineer and a cloud architect the key to this scenario is the fact that there are areas that are lacking in detail and areas that are fully comprehensible and this will trigger knowing when and where to ask relevant questions especially in your day-to-day role as an engineer it will allow you to fill the gaps so that you’re able to figure out what services you will need and what type of architecture to use this is also extremely helpful when it comes to the exam as in the exam you will be faced with questions that pertain to real life scenarios that will test you in a similar manner knowing what services and architecture to use based on the information given will always give you the keys to the door with the right answer and lastly when it comes to the demos this scenario used throughout the course will help put things in perspective as we will come to resolve a lot of these common issues real world scenarios can give you a better perspective on learning as it is tied to something that makes it easy to comprehend and again bow tie inc is the scenario that i will be using throughout the course to help you grasp these concepts so that’s all i have to cover this scenario so you can now mark this lesson as complete and let’s move on to the next one [Music] hey this is anthony cevallos and what i wanted to show you here is where you can access the practice exam on the exam pro platform so once you’ve signed up for your account you can head on over to the course and you can scroll down to the bottom of the curriculum list and you will see the practice exams here at the bottom now just as a quick note you should generally not attempt the practice exam unless you have completed all the lecture content including the follow alongs as once you start to see those questions you will get an urge to start remembering these questions and so i always recommend to use the practice exam as a serious attempt and not just a way to get to the final exam at a faster pace taking your time with the course will allow you to really prevail through these practice exams and allow you for a way better pass rate on the final exam looking here we can see two practice exams with 50 questions each and so i wanted to take a moment here and dive into the practice exam and show you what some of these questions will look like and so clicking into one of these exams we can get right into it and so as 
you can see i’ve already started on practice exam one and so i’m going to click into that right now and as you can see the exam is always timed and in this case will be 120 minutes for this specific exam there are 50 questions for this practice exam and you will see the breakdown in the very beginning of the types of questions you will be asked now for the google cloud exams at the associate level they are usually structured in a common format they generally start with one or two lines of sentences which will typically represent a scenario followed by the question itself this question tends to be brief and to the point immediately following that you will be presented with a number of answers usually four or five in nature and can sometimes be very very technical as they are designed for engineers like asking about which gcloud commands to use to execute in a given scenario as well as theoretical questions that can deal with let’s say best practices or questions about the specific services themselves now these answers will come in two different styles either multi-choice or multi-select the multi-choice is usually about identifying the correct answer from a group of incorrect or less correct answers whereas the multi-select will be about choosing multiple correct solutions to identify the answer as well for this associate exam the overall structure is pretty simple in nature and typically will be either right or wrong now sometimes these questions can get tricky where there are multiple possible answers and you will have to select the most suitable ones now although most of these types of questions usually show up in the professional exam they can sometimes peek their heads into the associate and so a great tactic that i always like to use is to immediately identify what matters in the question itself and then to start ruling out any of the answers that are wrong and this will allow you to answer the question a lot more quickly and efficiently as it will bring the more correct answer to the surface as well as making the answer a lot more obvious and making the entire question less complex so for instance with this question here you are immediately asked about google’s recommended practices when it comes to using cloud storage as backup for disaster recovery and this would be for a specific storage type and so quickly looking at the answers you can see that standard storage and near line storage will not be part of the answer and so that will leave cold line storage or archive storage as the two possible choices for the answer of this question and so these are the typical techniques that i always like to use for these exams and so provided that you’ve gone through all the course content you will be able to answer these technical questions with ease and following the techniques i’ve just given and applying them to each question can really help you in not only this practice exam but for the final exam landing you a passing grade getting you certified [Music] welcome back and in this section i wanted to really hone in on the basics of cloud computing the characteristics that make it what it is the different types of computing and how they differ from each other as well as the types of service models now in this lesson i wanted to dive into the definition of cloud computing and the essential characteristics that define it now for some advanced folk watching this this may be a review and for others this may fulfill a better understanding on what is cloud now cloud is a term that is thrown around 
a lot these days yet holds a different definition or understanding to each and every individual you could probably ask 10 people on their definition of cloud and chances are everyone would have their own take on it many see cloud as this abstract thing in the sky where files and emails are stored but it’s so much more than that now the true definition of it can be put in very simple terms and can be applied to any public cloud being google cloud aws and azure moving on to the definition cloud computing is the delivery of a shared pool of on-demand computing services over the public internet that can be rapidly provisioned and released with minimal management effort or service provider interaction these computing services consist of things like servers storage networking and databases they can be quickly provisioned and accessed from your local computer over an internet connection now coupled with this definition are five essential characteristics that define the cloud model that i would like to go over with you and i believe that it would hold massive benefits to understanding when speaking to cloud this information can be found in the white paper published by the national institute of standards and technology i will include a link to this publication in the lesson notes for your review now these essential characteristics are as follows the first one is on-demand self-service and this can be defined as being able to provision resources automatically without requiring human interaction on the provider’s end so in the end you will never need to call up or interact with the service provider in order to get resources provisioned for you as well you have the flexibility of being able to provision and de-provision these resources whenever you need them and at any given time of the day the second characteristic is broad network access now this simply means that cloud computing resources are available over the network and can be accessed by many different customer platforms such as mobile phones tablets or computers in other words cloud services are available over a network moving into the third is resource pooling so the provider’s computing resources are pooled together to support a multi-tenant model that allows multiple customers to share the same applications or the same physical infrastructure while retaining privacy and security over their information this includes things like processing power memory storage and networking it’s similar to people living in an apartment building sharing the same building infrastructure like power and water yet they still have their own apartments and privacy within that infrastructure this also creates a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but they may be able to specify location at a higher level of abstraction so in the end the customer does not really have the option of choosing exactly which server server rack or data center for that matter of where the provided resources are coming from they will only be able to have the option to choose things like regions or sections within that region the fourth essential characteristic is rapid elasticity this to me is the key factor of what makes cloud computing so great and so agile capabilities can be elastically provisioned and released in some cases automatically to scale rapidly outwards and inwards in response with demand to the consumer the capabilities available for provisioning often appear to be 
unlimited and can be provisioned in any quantity at any time and touching on the fifth and last characteristic cloud systems automatically control and optimize resource usage by leveraging a metering capability resource usage can be monitored controlled and reported providing transparency for both the provider and consumer of the service now what this means is that cloud computing resource usage is metered and you can pay accordingly for what you’ve used resource utilization can be optimized by leveraging pay-per-use capabilities and this means that cloud resource usage whether they are instances that are running cloud storage or bandwidth it all gets monitored measured and reported by the cloud service provider the cost model is based on pay for what you use and so the payment is based on the actual consumption by the customer so knowing these key characteristics of cloud computing along with their benefits i personally find can really give you a leg up on the exam as well as speaking to others in your day-to-day role as more and more companies start moving to cloud i hope this lesson has explained to you on what is cloud computing and the benefits it provides so that’s all i have for this lesson so you can now mark this lesson as complete and let’s move on to the next one welcome back in this lesson i wanted to go over the four common cloud deployment models and distinguish the differences between public cloud multi-cloud private cloud and hybrid cloud deployment models this is a common subject that comes up a fair amount in the exam as well as a common theme in any organization moving to cloud knowing the distinctions between them can be critical to the types of architecture and services that you would use for the specific scenario you are given as well as being able to speak to the different types of deployment models as an engineer in the field getting back to the deployment models let’s start with the public cloud model which we touched on a bit in our last lesson now the public cloud is defined as computing services offered by third-party providers over the public internet making them available to anyone who wants to use or purchase them so this means that google cloud will fall under this category as a public cloud there are also other vendors that fall under this category such as aws and azure so again public cloud is a cloud that is offered over the public internet now public clouds can also be connected and used together within a single environment for various use cases this cloud deployment model is called multi-cloud now a multi-cloud implementation can be extremely effective if architected in the right way one implementation that is an effective use of multi-cloud is when it is used for disaster recovery this is where your architecture would be replicated across the different public clouds in case one were to go down another could pick up the slack what drives many cases of a multi-cloud deployment is to prevent vendor lock-in where you are locked into a particular cloud provider’s infrastructure and unable to move due to the vendor-specific feature set the main downfall to this type of architecture is that the infrastructure of the public cloud that you’re using cannot be fully utilized as each cloud vendor has their own proprietary resources that will only work in their specific infrastructure in other words in order to replicate the environment it needs to be the same within each cloud this removes each cloud’s unique features which is what makes them so special and the 
resources so compelling so sometimes finding the right strategy can be tricky depending on the scenario now the next deployment model i wanted to touch on is private cloud private cloud refers to your architecture that exists on premise and restricted to the business itself with no public access yet it still carries the same five characteristics that we discussed with regards to what defines cloud each of the major cloud providers shown here all have their own flavor of private cloud that can be implemented on site google cloud has anthos aws has aws outposts and azures is azure stack they show the same characteristic and leverage similar technologies that can be found in the vendor’s public cloud yet can be installed on your own on-premise infrastructure please be aware any organizations may have a vmware implementation which holds cloud-like features yet this is not considered a private cloud true private cloud will always meet the characteristics that make up cloud now it is possible to use private cloud with public cloud and this implementation is called hybrid cloud so hybrid cloud is when you are using public cloud in conjunction with private cloud as a single system a common architecture used is due to compliance where one cloud could help organizations achieve specific governance risk management and compliance regulations while the other cloud could take over the rest now i’d really like to make an important distinction here if your on-premise infrastructure is connected to public cloud this is not considered hybrid cloud this is what’s known as hybrid environment or a hybrid network as the on-premises infrastructure holds no private cloud characteristics true hybrid cloud allows you to use the exact same interface and tooling as what’s available in the public cloud so being aware of this can avoid a lot of confusion down the road so to sum up everything that we discussed when it comes to public cloud this is when one cloud provided by one vendor that is available over the public internet multi-cloud is two or more public clouds that are connected together to be used as a single system a private cloud is considered an on-premises cloud that follows the five characteristics of cloud and is restricted to the one organization with no accessibility to the public and finally hybrid cloud is private cloud connected to a public cloud and being used as a single environment again as a note on-premises architecture connected to public cloud is considered a hybrid environment and not hybrid cloud the distinction between the two are very different and should be observed carefully as gotchas may come up in both the exam and in your role as an engineer so these are all the different cloud deployment models which will help you distinguish on what type of architecture you will be using in any scenario that you are given and so this is all i wanted to cover when it comes to cloud deployment models so you can now mark this lesson as complete and let’s move on to the next one welcome back so to finish up the nist definition of cloud computing i wanted to touch on cloud service models which is commonly referred to as zas now this model is usually called zas or xaas standing for anything as a service it includes all the services in a cloud that customers can consume and x can be changed to associate with the specific service so in order to describe the cloud service models i needed to touch on some concepts that you may or may not be familiar with this will make understanding the service models a 
little bit easier as i go through the course and describe the services available and how they relate to the model this lesson will make so much sense by the end it'll make the services in cloud easier to both describe and define now when it comes to deploying an application it is deployed in an infrastructure stack like the one you see here now a stack is a collection of the infrastructure that the application needs to run on it is layered and each layer builds on top of the one previous to it to create what you see here now as you can see at the top this is a traditional on-premises infrastructure stack that was typically used pre-cloud in this traditional model all the components are managed by the customer the purchasing of the data center and all the network and storage involved the physical servers the virtualization the licensing for the operating systems and the staff needed to put it all together including racking stacking and cabling physical security was also something that needed to be taken into consideration in other words for the organization to put this together by themselves they were looking at huge costs now the advantage to this is that it allowed for major flexibility as the organization is able to tune this any way they want to satisfy the application compliance standards basically anything they wanted now when talking about the cloud service model concepts parts are always managed by you and parts are managed by the vendor another concept i wanted to touch on is unit of consumption which is how the vendor prices what they are serving to their customer now just before cloud became big in the market there was a model where the data center was hosted for you so a vendor would come along and take care of everything with regards to the data center the racks the power to the racks the air conditioning the networking cables out of the building and even the physical security and so the unit of consumption here was the rack space within the data center the vendor would charge you for the rack space and in turn they would take care of all the necessities within the data center now this is less flexible than the traditional on-premises model but the data center is abstracted for you so throughout this lesson i wanted to introduce a concept that might make things easier to grasp which is pizza as a service so the traditional on-premises model is where you would buy everything and make the pizza at home now as we go on in the lesson less flexibility will be available because more layers will be abstracted so the next service model that i wanted to introduce is infrastructure as a service or iaas for short this is where all the layers from the data center up to virtualization are taken care of by the vendor this is the most basic model which is essentially your virtual machines in a cloud data center you set up configure and manage instances that run in the data center infrastructure and you put whatever you want on them on google cloud google compute engine would satisfy this model and so the unit of consumption here would be the operating system as you would manage all the operating system updates and everything that you decide to put on that instance but as you can see here you are still responsible for the container the runtime the data and the application layers now bringing up the pizza as a service model iaas would be you picking up the pizza and cooking it at home
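just to make the iaas idea a little more concrete here is a rough sketch of what that split of responsibility looks like from the gcloud command line keep in mind the instance name zone machine type and image below are only placeholder values and not anything taken from the course

# create a vm in a zone of your choosing (iaas: everything from the operating system up is yours to manage)
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud

# ssh in and patch the operating system yourself since the vendor's responsibility stops at the virtualization layer
gcloud compute ssh demo-vm --zone=us-central1-a
# then inside the vm for example: sudo apt-get update && sudo apt-get -y upgrade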
moving on to platform as a service or paas for short this is a model that is geared more towards developers and with paas the cloud provider provides a computing platform typically including the operating system the programming language execution environment the database and the web server now typically with paas you never have to worry about operating system updates or managing the runtime and middleware and so the unit of consumption here would be the runtime the runtime layer would be the layer you consume as you would be running your code in the supplied runtime environment that the cloud vendor provides for you the provider manages the hardware and software infrastructure and you just use the service this is usually the layer on top of iaas and so all the layers between the data center and the runtime are taken care of by the vendor a great example of this for google cloud is google app engine which we will be diving into a little bit later getting back to the pizza as a service model paas would fall under the pizza being delivered right to your door now with the paas model explained i want to move into the last model which is saas which stands for software as a service with saas all the layers are taken care of by the vendor so users are provided access to application software and cloud providers manage the infrastructure and platforms that run the applications g suite and microsoft's office 365 are great examples of this model now saas doesn't offer much flexibility but the trade-off is that the vendor takes care of all these layers so again the unit of consumption here is the application itself and of course getting to the pizza as a service model saas is pretty much dining in the restaurant enjoying your pizza now to summarize when you have a data center on site you manage everything with infrastructure as a service part of that stack is abstracted by the cloud vendor with platform as a service you're responsible for the application and data and everything else is abstracted by the vendor and with software as a service everything is taken care of by the vendor again using the pizza as a service analogy on premise you buy everything and make the pizza at home with infrastructure as a service you pick up the pizza and cook it at home with platform as a service the pizza is delivered and of course software as a service is dining in the restaurant now there will be some other service models coming up in this course such as function as a service and containers as a service and don't worry i'll be getting into those later but i just wanted to give you a heads up now for some of you this may have been a lot of information to take in but trust me knowing these models will give you a better understanding of the services provided in google cloud as well as any other cloud vendor so that's all i wanted to cover in this lesson so you can now mark this lesson as complete and let's move on to the next one welcome back in this lesson i wanted to discuss google cloud's global infrastructure how data centers are connected how traffic flows when a request is made along with the overall structure of how google cloud geographic locations are divided for better availability durability and latency now google holds a highly provisioned low latency network where your traffic stays on google's private backbone for most of its journey ensuring high performance and a user experience that is always above the norm google cloud has been designed to serve users all around the world by designing their infrastructure with redundant cloud regions connected with high bandwidth fiber
cables as well as subsea cables connecting different continents currently google has invested in 13 subsea cables connecting these continents at points of presence as you see here in this diagram hundreds of thousands of miles of fiber cables have also been laid to connect points of presence for direct connectivity privacy and reduced latency just to give you an idea of what a subsea cable run might look like i have included a diagram of how dedicated google is to their customers as there is so much that goes into running these cables that connect continents as you can see here this is the north virginia region being connected to the belgium region from the u.s over to europe a cable is run from the north virginia data center as well as having a point of presence in place going through a landing station before going deep into the sea on the other side the landing station on the french west coast picks up the other side of the cable and brings it over to the data center in the belgium region and this is a typical subsea cable run for google so continents are connected for maximum global connectivity now at the time of recording this video google cloud footprint spans 24 regions 73 zones and over 144 points of presence across more than 200 countries and territories worldwide and as you can see here the white dots on the map are regions that are currently being built to expand their network for wider connectivity now to show you how a request is routed through google’s network i thought i would demonstrate this by using tony bowtie now tony makes a request to his database in google cloud and google responds to tony’s request from a pop or edge network location that will provide the lowest latency this point of presence is where isps can connect to google’s network google’s edge network receives tony’s request and passes it to the nearest google data center over its private fiber network the data center generates a response that’s optimized to provide the best experience for tony at that given moment in time the app or browser that tony is using retrieves the requested content with a response back from various google locations including the google data centers edge pops and edge nodes whichever is providing the lowest latency this data path happens in a matter of seconds and due to google’s global infrastructure it travels securely and with the least amount of latency possible no matter the geographic location that the request is coming from now i wanted to take a moment to break down how the geographic areas are broken out and organized in google cloud we start off with the geographic location such as the united states of america and it’s broken down into multi-region into regions and finally zones and so to start off with i wanted to talk about zones now a zone is a deployment area for google cloud resources within a region a zone is the smallest entity in google’s global network you can think of it as a single failure domain within a region now as a best practice resources should always be deployed in zones that are closest to your users for optimal latency now next up we have a region and regions are independent geographic areas that are subdivided into zones so you can think of a region as a collection of zones and having a region with multiple zones is designed for fault tolerance and high availability the intercommunication between zones within a region is under five milliseconds so rest assured that your data is always traveling at optimal speeds now moving on into a multi-region now 
multi-regions are large geographic areas that contain two or more regions and this allows google services to maximize redundancy and distribution within and across regions and this is for geo redundancy or high availability having your data spread across multiple regions reassures you that your data is constantly available and so that covers all the concepts that i wanted to go over when it comes to geography and regions within google cloud note that the geography and regions concepts are fundamental not only for the exam but for your day-to-day role in google cloud so just as a recap a zone is a deployment area for google cloud resources within a region a zone is the smallest entity of google's global infrastructure now a region is an independent geographic area that is subdivided into zones and finally when it comes to multi-region multi-regions are large geographic areas that contain two or more regions again these are all fundamental concepts that you should know for the exam and for your day-to-day role in google cloud and so that's all i had for this lesson so you can now mark this lesson as complete and let's move on to the next one [Music] welcome back this lesson is going to be an overview of all the compute service options that are available in google cloud how they differ from each other and where they fall under the cloud service model again this lesson is just an overview of the compute options as we will be diving deeper into each compute option later on in this course so google cloud gives you so many options when it comes to compute services ones that offer complete control and flexibility others that offer flexible container technology managed application platforms and serverless environments and so when we take all of these compute options and look at them from a service model perspective you can see that there's so much flexibility starting here on the left with infrastructure as a service giving you the most optimal flexibility moving all the way over to the right where we have function as a service offering less flexibility but the upside being less that you have to manage and we'll be going through these compute options starting on the left here with infrastructure as a service we have compute engine now compute engine is google's staple infrastructure as a service product that offers virtual machines or vms called instances these instances can be deployed in any region or zone that you choose you also have the option of deciding what operating system you want on it as well as the software so you have the option of installing different flavors of linux or windows and the software to go with it google also gives you the option of creating these instances using public or private images so if you or your company have a private image that you'd like to use you can use this to create your instances google also gives you the option to use public images to create instances which are available when you launch compute engine as well there are also pre-configured images and software packages available in the google cloud marketplace and we will be diving a little bit deeper into the google cloud marketplace in another lesson just know that there is a slew of images out there available to create instances giving you the ease to deploy now when it comes to compute engine and you're managing multiple instances these are done using instance groups and when you're looking at adding or removing capacity for those compute engine instances automatically you would use autoscaling in conjunction with those instance groups compute engine also gives you the option of attaching and detaching disks as you need them as well google cloud storage can be used in conjunction with compute engine as another storage option and when connecting directly to compute engine google gives you the option of using ssh to securely connect to it
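as a rough sketch of how the instance group and autoscaling pieces fit together on the command line the template and group below use placeholder names and values rather than anything from the course

# an instance template describes the vm shape you want every instance in the group to have
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud

# a managed instance group stamps out identical instances from that template
gcloud compute instance-groups managed create web-group \
    --zone=us-central1-a \
    --template=web-template \
    --size=2

# autoscaling then adds or removes capacity automatically based on load
gcloud compute instance-groups managed set-autoscaling web-group \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.6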
so moving on to the next compute service option we have google kubernetes engine also known as gke now gke is google's flagship container orchestration system for automating deploying scaling and managing containers gke is also built on the same open source kubernetes project that was introduced by google to the public back in 2014 now before google made kubernetes a managed service there were many that decided to build kubernetes on premise in their data centers and because it is built on the same platform gke offers the flexibility of integrating with these on-premise kubernetes deployments now under the hood gke uses compute engine instances as nodes in a cluster and as a quick note a cluster is a group of nodes or compute engine instances and again we'll be going over all this in much greater detail in a different lesson so if you haven't already figured it out google kubernetes engine is considered container as a service
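to give you a feel for what standing up a small cluster looks like here is a minimal sketch with placeholder names since gke nodes are just compute engine instances under the hood a three node cluster like this is really three vms being managed for you

# create a small three node cluster (cluster name and zone are placeholders)
gcloud container clusters create demo-cluster \
    --zone=us-central1-a \
    --num-nodes=3

# fetch credentials so kubectl can talk to the cluster and list its nodes
gcloud container clusters get-credentials demo-cluster --zone=us-central1-a
kubectl get nodes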
now the next compute service option that i wanted to go over that falls under platform as a service is app engine now app engine is a fully managed serverless platform for developing and hosting web applications at scale now with app engine google handles most of the management of the resources for you for example if your application requires more computing resources because traffic to your website increases google automatically scales the system to provide these resources if the system software needs a security update that's handled for you too and so all you really need to take care of is your application and you can build your application in your favorite language go java .net and many others and you can use pre-configured runtimes or custom runtimes that allow you to write the code in any language app engine also allows you to connect with google cloud storage products and databases seamlessly app engine also offers the flexibility of connecting with third-party databases as well as other cloud providers and third-party vendors app engine also integrates with a well-known security product in google cloud called web security scanner to identify security vulnerabilities and so that covers app engine in a nutshell moving on to the next compute service option we have cloud functions and cloud functions fall under function as a service this is a serverless execution environment for building and connecting cloud services with cloud functions you write simple single purpose functions that are attached to events that are produced from your infrastructure and services in google cloud your function is triggered when an event being watched is fired your code then executes in a fully managed environment there is no need to provision any infrastructure or worry about managing any servers and cloud functions can be written using javascript python 3 go or java runtimes so you can take your function and run it in any of these standard environments which makes it extremely portable now cloud functions are a good choice for use cases that include the following data processing or etl operations such as video transcoding and iot streaming data webhooks that respond to http triggers lightweight apis that compose loosely coupled logic into applications as well as mobile back-end functions again cloud functions are considered function as a service and so that covers cloud functions now moving to the far right of the screen on the other side of the arrow we have our last compute service option which is cloud run now cloud run is a fully managed compute platform for deploying and scaling containerized applications quickly and securely cloud run was built on an open standard called knative and this enabled the portability of any applications that were built on it cloud run also abstracts away all the infrastructure management by automatically scaling up and down almost instantaneously depending on the traffic now cloud run was google's response to abstracting all the infrastructure that was designed to run containers and so this is known as serverless for containers cloud run has massive flexibility as you can write it in any language using any library and any binary this compute service is considered a function as a service now at the time of recording this video i have not heard of cloud run being in the exam but since it is a compute service option i felt the need for cloud run to have an honorable mention and so these are all the compute service options that are available on google cloud and we will be diving deeper into each one of these later on in this course again this is just an overview of all the compute service options that are available on the google cloud platform
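to put these managed compute options side by side here are a few minimal deployment sketches all the names regions runtimes and image paths are placeholders and they assume your code or container image already exists

# app engine (paas): with an app.yaml describing your runtime in the current directory
gcloud app create --region=us-central
gcloud app deploy

# cloud functions (faas): deploy a single http triggered function from source in the current directory
gcloud functions deploy hello-http \
    --runtime=python39 \
    --trigger-http \
    --entry-point=hello_http \
    --region=us-central1

# cloud run (serverless containers): deploy a container image and let it scale with traffic
gcloud run deploy hello-run \
    --image=gcr.io/PROJECT_ID/hello-run \
    --region=us-central1 \
    --platform=managed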
and so that's all i wanted to cover for this lesson so you can now mark this lesson as complete and let's move on to the next one [Music] welcome back now in the last lesson i covered all the different options for compute services in this lesson we're going to cover the options that are available that couple well with these compute services by diving deeper into the different storage types and the different databases available on google cloud again this is strictly an overview as i will be diving deeper into these services later on in the course now when it comes to storage options there are three services that are readily available to you in google cloud each of them has its own specific use case that i will be diving into in just a second the first one i wanted to go over is cloud storage now cloud storage is google's consistent scalable large capacity and highly durable object storage so when i refer to object storage this is not the type of storage that you would attach to your instance and store your operating system on i'm talking about managing data as objects such as documents or pictures and it shouldn't be confused with block storage which manages data at a more granular level such as an operating system not to worry if you don't fully grasp the concept of object storage i will be going into further detail on that later on in the cloud storage lesson cloud storage has eleven nines of durability and what i mean by durability is basically protection against the loss of files so just to give you a better picture of cloud storage durability if you store 1 million files statistically google would lose one file every 659,000 years and you are over 400 times more likely to get hit by a meteor than to actually lose a file so as you can see cloud storage is a very good place to be storing your files another great feature of cloud storage is the unlimited storage that it has with no minimum object size so feel free to continuously put files in cloud storage now when it comes to use cases cloud storage is fantastic for content delivery data lakes and backups and to make cloud storage even more flexible it is available in different storage classes and availability options which i will be going over in just a second now when it comes to these different storage classes there are four different classes that you can choose from the first one is the standard storage class and this storage class offers the maximum availability for your data with absolutely no limitations this is great for storage that you access all the time the next storage class is nearline and this is low-cost archival storage so this storage class is cheaper than standard and is designed for storage that only needs to be accessed less than once a month and if you're looking for an even more cost effective solution cloud storage has the coldline storage class which is an even lower cost archival storage solution this storage class is designed for storage that only needs to be accessed less than once every quarter and just when you thought that the prices couldn't get lower than coldline cloud storage offers another storage class called archive and this is the lowest cost archival storage which offers storage at a fraction of a penny per gigabyte but is designed for archival or backup use that is accessed less than once a year now when it comes to cloud storage availability there are three options that are available there is region dual region and multi-region region is designed to store your data in one single region dual region is exactly how it sounds which is a pair of regions now in multi-region cloud storage stores your data over a large geographic area consisting of many different regions across that same selected geographic area and so that about covers cloud storage as a storage option the next storage option that i wanted to talk about is filestore now filestore is a fully managed nfs file server from google cloud that is nfs version 3 compliant you can store data from running applications with multiple vm instances and kubernetes clusters accessing the data at the same time filestore is a great option for when you're thinking about accessing data from let's say an instance group and you need multiple instances to access the same data
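pulling the storage classes together here is a quick sketch of creating buckets with gsutil the bucket names are placeholders and would need to be globally unique in a real project

# a standard class bucket in a single region for files you access all the time
gsutil mb -c standard -l us-central1 gs://demo-assets-bucket/

# a coldline bucket in the us multi-region for backups touched less than once a quarter
gsutil mb -c coldline -l us gs://demo-backups-bucket/

# copy a file in and inspect its storage class
gsutil cp backup.tar gs://demo-backups-bucket/
gsutil ls -L gs://demo-backups-bucket/backup.tar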
and moving on to the last storage option we have persistent disks now persistent disks are durable block storage for instances and as i explained before block storage is different than object storage if you remember previously i explained that object storage is designed to store objects such as data or photos or videos whereas block storage is raw storage capacity that is used in drives that are connected to an operating system in this case persistent disks are doing just that persistent disks come in two options the first one is the standard option which gives you regular standard storage at a reasonable price and the other option is solid state or ssd which gives you lower latency higher iops and is just all around faster than your standard persistent disk both of these options are available in zonal and regional options depending on what you need for your specific workload so now that i've covered all three storage options i wanted to touch on the database options that are available on google cloud these database options come in both sql and nosql flavors depending on your use case now getting into the options themselves i wanted to start off going into a little bit of detail on the sql relational options so the first option is cloud sql and cloud sql is a fully managed database service that is offered in postgres mysql and sql server flavors cloud sql also has the option of being highly available across zones now moving into cloud spanner this is a scalable relational database service that's highly available not only across zones but across regions and if need be available globally cloud spanner is designed to support transactions strong consistency and synchronous replication moving into the nosql options there are four available services that google cloud offers the first one is bigtable and bigtable is a fully managed scalable nosql database that has high throughput and low latency bigtable also comes with the flexibility of doing cluster resizing without any downtime the next nosql option available is datastore and this is google cloud's fast fully managed serverless nosql document database datastore is designed for mobile web and internet of things applications datastore has the capability of doing multi-region replication as well as acid transactions for those of you who don't know i will be covering acid transactions in a later lesson next up for nosql options is firestore and this is a nosql real-time database that is optimized for offline use if you're looking to store data in a database in real time firestore is your option and like bigtable you can resize the cluster in firestore without any downtime and the last nosql option is memorystore and this is google cloud's highly available in-memory service for redis and memcached this is a fully managed service and so google cloud takes care of everything for you now i know this has been a short lesson on storage and database options but a necessary overview nonetheless of what's to come
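to make the relational side a little more concrete here is a rough sketch of standing up a small highly available cloud sql instance the instance name tier region and password are all placeholder values

# a small mysql instance made highly available across zones within its region
gcloud sql instances create demo-sql \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-1 \
    --region=us-central1 \
    --availability-type=REGIONAL

# create a database and a user inside that instance
gcloud sql databases create inventory --instance=demo-sql
gcloud sql users create appuser --instance=demo-sql --password=change-me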
and so that's about all i wanted to cover in this lesson so you can now mark this lesson as complete and let's move on to the next one [Music] welcome back now while there are some services in gcp that take care of networking for you there are still others like compute engine that give you a bit more flexibility in the type of networking you'd like to establish this lesson will go over these networking services at a high level and provide you with strictly an overview to give you an idea of what's available for any particular type of scenario when it comes to connecting and scaling your network traffic i will be going into further detail on these networking services in later lessons now i wanted to start off with some core networking features for your resources and how to govern specific traffic traveling to and from your network this is where networks firewalls and routes come into play so first i wanted to start off with virtual private cloud also known as vpc now vpc manages networking functionality for your google cloud resources this is a virtualized network within google cloud so you can picture it as your virtualized data center vpc is a core networking service and is also a global resource that spans throughout all the different regions available in google cloud each project contains a default network and additional networks can be created in your project but networks cannot be shared between projects and i'll be going into further depth on vpc in a later lesson so now that we've covered vpc i wanted to get into firewall rules and routes now firewall rules segment your networks with a globally distributed firewall to restrict access to resources so this governs traffic coming into instances on a network each default network has a default set of firewall rules that have already been established but don't fret you can create your own rules and set them accordingly depending on your workload now when it comes to routes these specify how traffic should be routed within your vpc to get a little bit more granular routes specify how packets leaving an instance should be directed so it's a basic way of defining which way your traffic is going to travel moving on to the next concept i wanted to cover a little bit about load balancing and how it distributes workloads across multiple instances now we have two different types of load balancing and both of these types can be broken down to an even more granular level now when it comes to http or https load balancing this is the type of load balancing that covers worldwide autoscaling and load balancing over multiple regions or even a single region on a single global ip https load balancing distributes traffic across various regions and makes sure that the traffic is routed to the closest region or in cases where there are failures amongst instances or instances are being bombarded with traffic http and https load balancing can route the traffic to a healthy instance in the next closest region another great feature of this load balancing is that it can distribute traffic based on content type now when it comes to network load balancing this is a regional load balancer and supports any and all ports it distributes traffic among server instances in the same region based on incoming ip protocol data such as address port and protocol now when it comes to networking dns plays a big part and because dns plays such a big part google has made this service 100 percent available on top of giving any dns queries the absolute lowest latency with google cloud dns you can publish and maintain dns records by using the same infrastructure that google uses and you can work with your managed zones and dns records such as mx records txt records cname records and a records and you can do this all through the cli the api or the sdk now some of the advanced connectivity options that are available in google cloud are cloud vpn and dedicated interconnect now cloud vpn connects your existing network whether it be on-premise or in another location to your vpc network through an ipsec connection the traffic is encrypted and travels between the two networks over the public internet
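before moving on here is a rough sketch that pulls together the network subnet and firewall rule pieces from this lesson all of the names ranges and tags here are placeholders

# a custom mode vpc network with a single subnet
gcloud compute networks create demo-vpc --subnet-mode=custom

gcloud compute networks subnets create demo-subnet \
    --network=demo-vpc \
    --region=us-central1 \
    --range=10.10.1.0/24

# a firewall rule that only allows ssh into instances tagged as bastion hosts from one office range
gcloud compute firewall-rules create allow-ssh-bastion \
    --network=demo-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=bastion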
now when it comes to dedicated interconnect this connectivity option allows you to connect your existing network to your vpc network using a highly available low latency connection this connectivity option does not traverse the public internet and merely connects to google's backbone and this is what gives it the highly available low latency connection a couple of other advanced connectivity options are direct and carrier peering these connections allow your traffic to flow through google's edge network locations and peering can be done directly or it can be done through a third-party carrier and so although this is a very short lesson i will be going into greater depth on all these concepts in later lessons in the course so that's all i had to cover for this lesson so you can now mark this lesson as complete and let's move on to the next one welcome back in this lesson we're going to learn about how resources and entities are organized within google cloud and how permissions are inherited through this approach knowing this structure is a fundamental concept that you should know while working in gcp at any capacity so before defining what the resource hierarchy is i'd like to take a little bit of time to define what a resource is now in the context of google cloud a resource can refer to the service level resources that are used to process your workloads such as compute engine vm instances cloud storage buckets and even cloud sql databases as well as the account level resources that sit above the services such as the organization itself the folders and of course the projects which we will be getting into a little bit deeper in just a minute the resource hierarchy is google's way to configure and grant access to the various cloud resources for your company within google cloud both at the service level and at the account level the resource hierarchy in google cloud can truly define the granular permissions needed when you configure permissions for everyone in the organization in a way that actually makes sense so now that we've covered what a resource is i wanted to start digging into the resource hierarchy and the structure itself now google cloud resources are organized hierarchically using a parent-child relationship this hierarchy is designed to map an organization's operational structure to google cloud and to manage access control and permissions for groups of related resources so overall the resource hierarchy will give organizations better management of permissions and access control the accessibility of these resources or policies is controlled by identity and access management also known as iam a big component of gcp which we will be digging into a little bit later on in this course and so when an iam policy is set on a parent the child will inherit this policy respectively access control policies and configuration settings on a parent resource are always inherited by the child also please note that each child object can only have exactly one parent and that these policies are again controlled by iam so now to understand a little bit more about how the gcp resource hierarchy works i wanted to dig into the layers that support this hierarchy so this is a diagram of exactly what the resource hierarchy looks like in all of its awesomeness including the billing account along with the payments profile but we're not going to get into that right now i'll actually be covering that in a later lesson so more on that later so building the structure from the top down we start off with the domain or cloud level and as you can see here the domain of bowtieinc.co is at the top this is the primary identity of your organization at the domain level
this is where you manage your users in your organizations so users policies and these are linked to g suite or cloud identity accounts now underneath the domain level we have the organization level and this is integrated very closely with the domain so with the organization level this represents an organization and is the root node of the gcp resource hierarchy it is associated with exactly one domain here we have the domain set as bowtie inc all entities or resources belong to and are grouped under the organization all controlled policies applied to the organization are inherited by all other entities and resources underneath it so any folders projects or resources will get those policies that are applied from the organization layer now i know that we haven’t dug into roles as of yet but the one thing that i did want to point out is that when an organization is created an organization admin role is created and this is to allow full access to edit any or all resources now moving on to the folders layer this is an additional grouping mechanism and isolation boundary between each project in essence it’s a grouping of other folders projects and resources so if you have different departments and teams within a company this is a great way to organize it now a couple of caveats when it comes to folders the first one is you must have an organization node and the second one is while a folder can contain multiple folders or resources a folder or resource can have exactly one parent now moving into the projects layer this is a core organizational component of google cloud as projects are required to use service level resources these projects are the base level organizing entity in gcp and parent all service level resources just as a note any given resource can only exist in one project and not multiple projects at the same time and moving on to the last layer we have the resources layer and this is any service level resource created in google cloud everything from compute engine instances to cloud storage buckets to cloud sql databases apis users all these service level resources that we create in google cloud fall under this layer now giving the hierarchy a little bit more context i want to touch on labels for just a second labels help categorize resources by using a key value pair and you can attach them to any resource and so what labels help you do is to break down and organize costs when it comes to billing now to give you some more structure with regards to the hierarchy under the domain level everything underneath this is considered a resource and to break it down even further everything you see from the organization layer to the projects layer is considered an account level resource everything in the resource layer is considered a service level resource and so this is how the google cloud resource hierarchy is split up and organized and so before i finish off this lesson i wanted to give you a quick run-through on how policies can be applied at a hierarchical level so i thought i’d bring in tony bowtie for a quick demo so just to give you an example tony bowtie is part of department b and tony’s manager lark decides to set a policy on department b’s folder and this policy grants project owner role to tony at bowtieinc.co so tony will have the project owner role for project x and for project y at the same time lark assigns laura at bowtieinc.co cloud storage admin role on project x and thus she will only be able to manage cloud storage buckets in that project this hierarchy and permission 
this hierarchy and permission inheritance comes up quite a bit not only in the exam but is also something that should be carefully examined when applying permissions anywhere within the hierarchy in your day-to-day role as an engineer applying permissions or policies to resources with existing policies may not give you the desired results you're looking for and those existing policies can easily be overlooked now i hope these diagrams have given you some good context with regards to resource hierarchy its structure and the permissions applied down the chain now that's all i have for this lesson on resource hierarchy so you can now mark this lesson as complete and let's move on to the next one [Music] welcome back in this lesson i will be covering a few different topics that i will touch on when creating a new google cloud account i will be covering going over the free tier and the always free options the differences between them and a demo showing how you can create your own free tier account as well i'll also be going into what you will need in order to fulfill this demo so for the remainder of this course all the demos will run under the free tier now when i built this course i built it with budget in mind and looked at ways where i can keep the price to a minimum while still keeping the demos extremely useful and so the free tier falls within all these guidelines and will help you learn without the high ticket price and so getting into a quick overview of the differences between the free tier and the always free option i have broken them down here with their most significant differences in the free tier google cloud offers you a 12 month free trial with a 300 us dollar credit this type of account ends when the credit is used or after the 12 months whichever happens first and so for those of you who are looking at taking advantage of this on a business level unfortunately the free tier only applies to a personal account and cannot be attached to a business account now moving over to the always free option the always free option isn't a special program but it's a regular part of your google cloud account it provides you limited access to many of the google cloud resources free of charge and once these limits have been hit then you are charged at the regular per second billing rate and i will show you a little bit later how to monitor these credits so that you don't go over using this in conjunction with the free tier account is not possible you have to have an upgraded billing account which can also include a business account now there are a bunch more stipulations in this program and i will include a link to both of them in the lesson text below for later viewing at your convenience now lastly before we get into the demo i wanted to go through a quick run-through of exactly what's needed to open up your free tier account so we're going to start off with a fresh new gmail address so that it doesn't conflict with any current gmail address that you may have you're gonna need a credit card for verification and this is for google to make sure that you're an actual human being and not a robot and you won't be charged unless you go above the 300 dollar credit limit as well i highly recommend going into a private browsing session so whether you're using chrome you would use an incognito session if you're using firefox you would use private browsing and in microsoft edge you would be using the inprivate mode and so in order to start with this free trial you can head on over to the url listed here and i'll also include this in the lesson
text so head on over to this url and i’ll see you there in just a second okay so here we are at the free trial url i’m here in google chrome in an incognito session and so we’re not going to sign up we’re going to go over here to create account you can just click on create account for myself because as i mentioned earlier you’re not able to create a free trial account with your business so i’m going to click on for myself and it’s going to bring you to this page where it says create your google account and you’re going to go to create a new gmail address instead and now you’re going to fill in all the necessary information that’s needed in order to open up this new gmail account once you’re finished typing your password you can hit next and now i got prompted for six digit verification code that i have to plug in but in order to do that google needs my telephone number so i’m gonna type that in now and just to let you know this verification is done to let google know that you’re not a bot and you’re a real human and google just sent me a verification code and this is a one-time verification code that i’m going to plug in and i’m going to hit verify and you can plug in the necessary information here for recovery email address your birthday and gender and this is so that google can authenticate you in case you accidentally misplace your password and then just hit next and here google gives you a little bit more information on what your number can be used for and so i’m going to go ahead and skip it and of course we’re going to read through the terms of service and the privacy policy click on agree and as you can see we’re almost there it shows here that we’re signing up for the free trial i’m in canada so depending on your country this may change of course i read the terms of service and i’m going to agree to it and i don’t really want any updates so you can probably skip that and just hit continue and so this is all the necessary information that needs to be filled out for billing and so here under account type be sure to click on individual as opposed to business and again fill in all the necessary information with regards to your address and your credit card details and once you fill that in you can click on start my free trial and once you’ve entered in all that information you should be brought to this page with a prompt asking you exactly what you need with regards to google cloud and you can just hit skip here and i’m going to zoom in here just see a little better and so here you’re left with a checklist where you can go through all the different resources and it even gives you a checklist to go through but other than that we’re in and so just to verify that we’re signed up for a free tier account i’m going to go over to billing and i’m going to see here that i have my free trial credit and it says 411 dollars and due to the fact that my currency is in canadian dollars it’s been converted from us dollars and so we’ll be going through billing in a later lesson but right now we are actually logged in and so that’s all i wanted to cover for this lesson on how to sign up for your free trial account so you can now mark this lesson as complete and you can join me in the next one where we will secure the account using a method called two-step verification [Music] welcome back so in the last lesson we went ahead and created a brand new gcp account in this lesson we’ll be discussing how to secure that gcp account by following some best practices whenever any account is created in google cloud 
and this can be applied with regards to personal accounts as well as the super admin account as it’s always good to keep safety as a priority this lesson may be a refresher for those who are a bit more advanced as for everyone else these steps could help you from an attack on your account i’d first like to run you through a scenario of the outcome on both secure and non-secure accounts as well as the different options that reside in google cloud when it comes to locking down your account i’ll then run through a hands-on demo in the console to show you how you can apply it yourself so in this specific scenario a username and password is used to secure the account here lark a trouble causing manager looks over the shoulder of tony bowtie while he plugs in his username and password so that he can later access his account to wreak havoc on tony’s reputation as tony leaves for coffee lark decides to log in and send a company-wide email from tony’s account to change an already made decision about next season’s store opening in rome italy that would not look good for tony it was that easy for lark to steal tony’s password and in a real life scenario it would be that easy for someone to steal your password now when someone steals your password they could do even more devious things than what lark did not just sending out harmful emails they could lock you out of your account or even delete emails or documents this is where two-step verification comes in this can help keep bad people out even if they have your password two-step verification is an extra layer of security most people only have one layer to protect their account which is their password with two-step verification if a bad person hacks through your password they’ll still need your phone or security key to get into your account so how two-step verification works is that sign-in will require something you know and something that you have the first one is to protect your account with something you know which will be your password and the second is something that you have which is your phone or security key so whenever you sign into google you’ll enter your password as usual then a code will be sent to your phone via text voice call or google’s mobile app or if you have a security key you can insert it into your computer’s usb port codes can be sent in a text message or through a voice call depending on the setting you choose you can set up google authenticator or another app that creates a one-time verification code which is great for when you’re offline you would then enter the verification code on the sign in screen to help verify that it is you another way for verification is using google prompts and this can help protect against sim swap or other phone number based hacks google prompts are push notifications you’ll receive on android phones that are signed into your google account or iphones with the gmail app or google app that’s signed into your google account now you can actually skip a second step on trusted devices if you don’t want to provide a second verification step each time you sign in on your computer or your phone you can check the box next to don’t ask again on this computer and this is a great added feature if you are the only user on this device this feature is not recommended if this device is being used by multiple users security keys are another way to help protect your google account from phishing attacks when a hacker tries to trick you into giving them your password or other personal information now a physical 
security key is a small device that you can buy to help prove it’s you signing in when google needs to make sure that it’s you you can simply connect your key to your computer and verify that it’s you and when you have no other way to verify your account you have the option of using backup codes and these are one-time use codes that you can print or download and these are multiple sets of eight-digit codes that you can keep in a safe place in case you have no other options for verification i personally have found use in using these backup codes as i have used them in past when my phone died so ever since lark’s last email tony not only changed his password but added a two-step verification to his account so that only he would have access and would never have to worry again about others looking over his shoulder to gain access to his account as tony leaves for coffee lark tries to log in again but is unsuccessful due to the two-step verification in place tony has clearly outsmarted the bad man in this scenario and lark will have to look for another way to foil tony’s plan to bring greatness to bow ties across the globe and this is a sure difference between having a secure account and a not so secure account and so now that i’ve gone through the theory of the two-step verification process i’m going to dive into the console and implement it with the hands-on demo just be aware that you can also do this through the gmail console but we’re going to go ahead and do it through the google cloud console using the url you see here so whenever you’re ready feel free to join me in the console and so here we are back in the console and over here on the top right hand corner you will find a user icon and you can simply click on it and click over to your google account now i’m just going to zoom in for better viewing and so in order to enable two-step verification we’re gonna go over here to the menu on the left and click on security and under signing into google you will find two-step verification currently it’s off as well as using my phone to sign in is off so i’m going to click on this bar here for two-step verification and i definitely want to add an extra layer of security and i definitely want to keep the bad guys out so i’m going to go ahead and click on the get started button it’ll ask me for my password and because i’ve entered my phone number when i first signed up for the account it actually shows up here this is i antony which is my iphone and so now i can get a two-step verification here on my iphone and again this is going to be a google prompt as it shows here but if i wanted to change it to something else i can simply click on show more options and here we have a security key as well as text message or voice call i highly recommend the google prompt as it’s super easy to use with absolutely no fuss and so as i always like to verify what i’ve done i’m going to click on this try it now button and so because i wanted to show you exactly what a live google prompt looks like i’m going to bring up my phone here on the screen so that you can take a look and it actually sent me a google prompt to my phone and i’m just going to go ahead and open up my gmail app so i can verify that it is indeed me that wants to log in which i will accept and so once i’ve accepted the google prompt another window will pop up asking me about a backup option and so i’ll simply need my phone number and i can either get a text message or a phone call and again you have other options as well so you can use the one-time 
backup codes which we discussed earlier and you can print or download them but i usually like to use a text message and so i’m going to use that i’m going to send it to my phone and so just to verify it i’m gonna now plug in the one-time code that was sent to me and then just hit next so the second step is the google prompt it’s my default and my backup options if i can’t get google prompt is a voice or text message and again this is for my account antony gcloud ace at gmail.com sending it to my i antony device so turn on two-step verification absolutely and so there you have it there is two-step verification enabled and if i wanted to change the available steps i can do so here i can also edit it i can edit my phone number and i can also set up any backup codes in case i need it in my personal opinion two-step verification is a must-have on any account best practice is to always do it for your super admin account which would be my gmail account that i am currently signed up with but i find is a necessity for any other users and always make it a policy for people to add two-step verification to their accounts i highly recommend that you make it your best practice to do this in your role as an engineer in any environment at any organization again two-step verification will allow to keep you safe your users safe and your environment safe from any malicious activities that could happen at any time and that’s all i have for this lesson on two-step verification and securing your account so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back now there are many different ways in which you can interact with google cloud services and resources this lesson is an overview of the gcp console and how you can interact with it using the graphical user interface and so for this hands-on demo i will be diving into how to navigate through the gcp console and point out some functions and features that you may find helpful so with that being said let’s dive in and so here we are back in the console up here you can see the free trial status and then i still have 410 credit again this is canadian dollars so i guess consider me lucky so i’m going to go ahead over here and dismiss this don’t activate it because otherwise this will kill your free trial status and you don’t want to do that so i’m just going to hit dismiss so over here on the main page you have a bunch of cards here that will give you the status of your environment as well as the status of what’s happening within google cloud with these cards you can customize them by hitting this button over here customize and you can turn them on or off and you can go ahead and move these around if you’d like and i’m going to put this up here as well i’m going to turn on my billing so i can keep track of exactly what my spend is i don’t really need my get starting card so i’m going to turn that off as well as the documentation i’m going to turn that off as well and the apis is always nice to have as well up here on the project info this reflects the current project which is my first project and the project name here is the same the project id is showing and the project number and i’m going to dive deeper into that in another lesson also note that your cards will reflect exactly what it is that you’re interacting with and so the more resources that you dive into the cards will end up showing up here and you can add them and turn them off at will so i’m going to go up here and click on done because i’m satisfied with the 
way that things look here on my home page and over here to your left i wanted to focus on all the services that are available in their own specific topics so for instance all of compute you will find app engine compute engine kubernetes and so on so note that anything compute related you’ll find them all grouped together also another great feature is that you can pin exactly what it is that you use often so if i am a big user of app engine i can pin this and it will move its way up to the top this way it saves me the time from having to go and look for it every time i need it and if i’m using it constantly it’s great to have a shortcut to unpin it i simply go back to the pin and click on it again as well if i’d like to move the menu out of the way to get more screen real estate i can simply click on this hamburger button here and make it disappear and to bring it back i can just click on that again and i’ll bring it back again now i know that there’s a lot of resources here to go through so if you’re looking for something specific you can always go up to the search bar right here and simply type it in so if i’m looking for let’s say cloud sql i can simply type in sql and i can find it right here i can find the api and if anything associated with the word sql if i’m looking for cloud sql specifically i can simply type in cloud sql and here it is another thing to note is that if you want to go back to your homepage you can simply go up to the left hand corner here and click on the google cloud platform logo and it’ll bring you right back and right here under the google cloud platform logo you’ll see another set of tabs we have dashboard we also have activity and this will show all the latest activity that’s been done and because this is a brand new account i don’t have much here now because this is my first time in activity this is going to take some time to index and in the meantime i wanted to show you filters if this were a long list to go through where activity has been happening for months i can filter through these activities either by user or by categories or by resource type as well as the date i can also combine these to search for something really granular and beside the activity tab we have recommendations which is based on the recommender service and this service provides recommendations and insights for using resources on google cloud these recommendations and insights are on a per product or per service basis and they are based on machine learning and current resource usage a great example of a recommendation is vm instance right sizing so if the recommender service detects that a vm instance is underutilized it will recommend changing the machine size so that i can save some money and because this is a fresh new account and i haven’t used any resources this is why there is no recommendations for me so going back to the home page i want to touch on this projects menu for a second and as you can see here i can select a project now if i had many different projects i can simply search from each different one and so to cover the last part of the console i wanted to touch on this menu on the top right hand corner here so clicking on this present icon will reveal my free trial status which i dismissed earlier next to the present we have a cloud shell icon and this is where you can activate and bring up the cloud shell which i will be diving into deeper in a later lesson and right next to it is the help button in case you need a shortcut to any documentations or tutorials as well some 
keyboard shortcuts may help you be a little bit more efficient and you can always click on this and it’ll show you exactly what you need to know and so i’m going to close this and to move over to the next part in the menu this is the notifications so any activities that happen you will be notified here and you can simply click on the bell and it’ll show you a bunch of different notifications for either resources that are created or any other activities that may have happened now moving on over three buttons over here is the settings and utilities button and over here you will find the preferences and under communication you will find product notifications and updates and offers and you can turn them off or on depending on whether or not you want to receive these notifications as well you have your language and region and you can personalize the cloud console as to whether or not you want to allow google to track your activity and this is great for when you want recommendations so i’m going to keep that checked off getting back to some other options you will find a link to downloads as well as cloud partners and the terms of service privacy and project settings and so to cover the last topic i wanted to touch on is the actual google account button and here you can add other user accounts for when you log into the console with a different user as well as go straight to your google account and of course if you’re using a computer that’s used by multiple users you can sign out here as well and so that’s just a quick run-through of the console and so feel free to poke around and get familiar with exactly what’s available in the console so that it’s a lot easier for you to use and allow you to become more efficient and so that’s all i have for this lesson so you can now mark this lesson as complete and let’s move on to the next one welcome back in this lesson i’m going to be going through a breakdown of cloud billing and an overview of the various resources that’s involved with billing billing is important to know and i’ll be diving into the concepts around billing and billing interaction over the next few lessons as well i’ll be getting into another demo going through the details on how to create edit and delete a cloud billing account now earlier on in the course i went over the resource hierarchy and how google cloud resources are broken down starting from the domain level down to their resource level this lesson will focus strictly on the billing account and payments profile and the breakdown are concepts that are comprised within them so getting right into it let’s start with the cloud billing account a cloud billing account is a cloud level resource managed in the cloud console this defines who pays for a given set of google cloud resources billing tracks all of the costs incurred by your google cloud usage as well it is connected to a google payments profile which includes a payment method defining on how you pay for your charges a cloud billing account can be linked to one or more projects and not to any one project specifically cloud billing also has billing specific roles and permissions to control accessing and modifying billing related functions that are established by identity and access management cloud billing is offered in two different account types there is the self-service or online account or you can also choose from the invoiced or offline payments when it comes to the self-service option the payment method is usually a credit or debit card and costs are charged 
automatically to the specific payment method connected to the cloud billing account and when you need access to your invoices you can simply go to the cloud console and view them online now when it comes to the invoice account first you must be eligible for invoice billing once you are made eligible the payment method used can be check or wire transfer your invoices are sent by mail or electronically as well they’re also available in the cloud console as well as the payment receipts now another cool feature of billing account is sub-accounts and these are intended for resellers so if you are a reseller you can use subaccounts to represent your customers and make it easy for chargebacks cloud billing subaccounts allow you to group charges from projects together on a separate section of your invoice and is linked back to the master cloud billing account on which your charges appear sub-accounts are designed to allow for customer separation and management so when it comes to ownership of a cloud billing account it is limited to a single organization it is possible though for a cloud billing account to pay for projects that belong to an organization that is different than the organization that owns the cloud billing account now one thing to note is that if you have a project that is not linked to a billing account you will have limited use of products and services available for your project that is projects that are not linked to a billing account cannot use google cloud services that aren’t free and so now that we’ve gone through an overview of the billing account let’s take a quick step into the payments profile now the payments profile is a google level resource managed at payments.google.com the payments profile processes payments for all google services and not just for google cloud it connects to all of your google services such as google ads as well as google cloud it stores information like your name address and who is responsible for the profile it stores your various payment methods like credit cards debit cards and bank accounts the payments profile functions as a single pane of glass where you can view invoices payment history and so on it also controls who can view and receive invoices for your various cloud billing accounts and products now one thing to note about payments profile is that there are two different types of payment profiles the first one is individual and that’s when you’re using your account for your own personal payments if you register your payments profile as an individual then only you can manage the profile you won’t be able to add or remove users or change permissions on the profile now if you choose a business profile type you’re paying on behalf of a business or organization a business profile gives you the flexibility to add other users to the google payments profile you manage so that more than one person can access or manage a payments profile all users added to a business profile can then see the payment information on that profile another thing to note is that once the profile type has been selected it cannot be changed afterwards and so now that we’ve quickly gone through an overview of all the concepts when it comes to billing i am now going to run through a short demo where i will create a new billing account edit that billing account and show you how to close a billing account so whenever you’re ready join me in the console and so here i am back in the console and so the first thing i want to do is i want to make sure that i have the proper 
permissions in order to create and edit a new billing account so what i'm going to do is go over here to the hamburger menu up here in the top left hand corner and click on it and go over to iam & admin and over to iam now don't worry i'm not going to get really deep into this i will be going over this in a later section where i'll go through iam and roles but i wanted to give you a sense of exactly what you need with regards to permissions so now that i'm here i'm going to be looking for a role that has to do with billing so i'm simply going to go over here on the left hand menu and click on roles and you'll have a slew of roles coming up and what you can do is filter through them just by simply typing in billing into the filter table here at the top and as you can see here there is billing account administrator billing account creator and so on and so forth and just to give you a quick overview on these roles and so for the billing account administrator this is a role that lets you manage billing accounts but not create them so if you need to set budget alerts or manage payment methods you can use this role the billing account creator allows you to create new self-serve online billing accounts the billing account user allows you to link projects to billing accounts the billing account viewer allows you to view billing account cost information and transactions and lastly the project billing manager allows you to link or unlink the project to and from a billing account so as you can see these roles allow you to get pretty granular when it comes to billing so i'm going to go back over to the left hand menu over on iam and click on there and i want to be able to check my specific role and what permissions i have or will need in order to create a new billing account and so if i click on this pencil it'll show me exactly what my role is and what it does and as it says here i have full access to all resources which means that i am pretty much good to go so i'm going to cancel out here and i'm going to exit iam & admin so i'm going to click on the navigation menu and go over to billing and so this billing account is tied to the current project and because it's the only billing account it's the one that shows up and so what i want to do is i want to find out a little bit more information with regards to this billing account so i'm going to move down the menu and click on account management here i can see the billing account which is my billing account i can rename it if i'd like and i can also see the projects that are linked to this billing account so now that we've viewed all the information with regards to the my billing account i'm going to simply click on this menu over here and click on the arrow and go to manage billing accounts and here it will bring me to all my billing accounts and because i only have one only my billing account is shown here but if i had more than one they would show up here and so now in order for me to create this new billing account i'm going to simply click on create account and i will be prompted with a name a country and a currency for my new billing account and i'm actually going to rename this billing account and i'm going to rename it to gcloud-ace-billing i'm going to leave my country as canada and my currency in canadian dollars and i'm going to simply hit continue and it's giving me the choice of my payments profile and because i want to use the same payments profile i'm just going to simply leave everything as is
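just as a side note, the predefined billing roles above map to role ids that can also be granted with gcloud, and here is a minimal sketch assuming a placeholder organization id and user, since billing roles are commonly granted at the organization level

# grant the billing account user role at the organization level (placeholder ids)
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="user:tony@bowtieinc.co" \
    --role="roles/billing.user"

# the other predefined billing roles mentioned in this lesson
#   roles/billing.admin            billing account administrator
#   roles/billing.creator          billing account creator
#   roles/billing.viewer           billing account viewer
#   roles/billing.projectManager   project billing manager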
now for demonstration purposes over here you can click on the payments profile and the little arrow right beside the current profile will give me the option to create a new payments profile and we're going to leave that as is under customer info i have the option of changing my address and i can click on this pencil icon and change it as well i can go to payment methods and click on the current payment method with that little arrow and add a new credit or debit card and as i said before we're going to keep things the way they are and just hit submit and enable billing now as you can see here i got a prompt saying that a confirmation email will be sent within 48 hours now usually when you're setting up a brand new billing profile with an already created payments profile you'll definitely get a confirmation email in less than 48 hours now in order for me to finish up this demo i'm gonna wait until the new billing account shows up and continue with the demo from then and so here i am back in the billing console and it only took about 20 minutes and the gcloud-ace-billing account has shown up and so with part of this demo what i wanted to show is how you can take a project and attach it to a different billing account and so currently my only project is attached to the my billing account so now if i wanted to change my first project to my gcloud-ace-billing account i can simply go over here to actions click on the hamburger menu and go to change billing here i'll be prompted to choose a billing account and i can choose gcloud-ace-billing and then click on set account and there it is my first project is now linked to gcloud-ace-billing so if i go back over to my billing accounts you can see here that my billing account currently has zero projects and gcloud-ace-billing has one project now just as a quick note and i really want to emphasize this is that if you're changing a billing account for a project and you are a regular user you will need the role of the billing account administrator as well as the project owner role so these two together will allow a regular user to change a billing account for a project and so now what i want to do is i want to take the gcloud-ace-billing account and i want to close that account but before i do that i need to unlink this project and bring it back to another billing account which in this case would be my billing account so i'm going to go back up here to the menu click on my projects and we're going to do the exact same thing that we did before under actions i'm going to click on the hamburger menu and change billing i'm going to get the prompt again and under billing account i'm going to choose my billing account and then click on set account so as you can see the project has been moved to a different billing account i'm going to go back to my billing accounts and as you can see here the project is back to my billing account and so now that the project is unlinked from the gcloud-ace-billing account i can now go ahead and close out that account now in order to do that i'm going to click on gcloud-ace-billing i'm going to go down here on the left hand menu all the way to the bottom to account management click on there and at the top here you will see close billing account i'm going to simply click on that and i'll get a prompt that i've spent zero dollars and that the account is linked to zero projects now if i did have a project that was linked to this billing account i would have to unlink the project before i was able to close this billing account
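the linking i just did through the console can also be sketched from the command line with the beta billing commands in the cloud sdk, where the project id and the billing account id in the XXXXXX-XXXXXX-XXXXXX format are placeholders, so treat this as a hedged example rather than the exact steps of the demo

# list billing accounts and check which one a project is currently linked to
gcloud beta billing accounts list
gcloud beta billing projects describe my-first-project-id

# link the project to a different billing account (for example gcloud-ace-billing)
gcloud beta billing projects link my-first-project-id \
    --billing-account=XXXXXX-XXXXXX-XXXXXX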
so as a failsafe i'm being asked to type close in order to close this billing account so i'm going to go ahead and do that now and click on close billing account just as a note google gives me the option to reopen this billing account in case i did this by mistake and i really needed it i can reopen this billing account so now moving back over to billing you'll see here that i'm left with my single billing account called my billing account with the one project that's linked to it and so that covers my demo on creating editing and closing a new billing account as well as linking and unlinking a project to and from a different billing account so i hope you found this useful and you can now mark this lesson as complete and let's move on to the next one [Music] welcome back in this lesson i'm going to be going over controlling costs in google cloud along with budget alerts i will be touching on all the available discounts the number of ways to control costs and go over budget alerts to get a more granular and programmatic approach so starting off i wanted to touch on committed use discounts now committed use discounts provide discounted prices in exchange for your commitment to use a minimum level of resources for a specified term the discounts are flexible cover a wide range of resources and are ideal for workloads with predictable resource needs when you purchase google cloud committed use discounts you commit to a consistent amount of usage for a one or three year period there are two commitment types available and as you can see here they are spend based and resource based commitment types and unlike most other providers the commitment fee is billed monthly so going over the specific commitment types i wanted to start off with the spend based commitment now for a spend based commitment you commit to a consistent amount of usage measured in dollars per hour of equivalent on-demand spend for a one or three year term in exchange you receive a discounted rate on the applicable usage your commitment covers so you can purchase committed use discounts from any cloud billing account and the discount applies to any eligible usage in projects paid for by that cloud billing account any overage is charged at the on-demand rate spend based commitments can give you a 25 percent discount off on-demand pricing for a one-year commitment and up to a 52 percent discount off of on-demand pricing for a three-year commitment now spend-based commitments are restricted to specific resources which are cloud sql database instances and google cloud vmware engine and this commitment applies to the cpu and memory usage for these available resources now the other committed use discount is the resource-based commitment and this discount is for a commitment to spend a minimum amount on compute engine resources in a particular region
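before digging further into the details, here is a rough sketch of how a resource-based commitment could be purchased with gcloud instead of the console flow shown in the upcoming demo, where the name, region, plan and resource amounts are example values only, so double check the flags against the current gcloud reference before running anything like this

# purchase a one year resource-based commitment of 10 vcpus and 64 gb of memory in us-central1
gcloud compute commitments create demo-commitment \
    --region=us-central1 \
    --plan=12-month \
    --resources=vcpu=10,memory=64GB

# list the commitments that already exist
gcloud compute commitments list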
commitments are ideal for predictable workloads when it comes to your vms when you purchase a committed use contract you purchase compute resources such as vcpus memory gpus and local ssds and you purchase these at a discounted price in return for committing to paying for those resources for one or three years the discount is up to 57 percent for most resources like machine types or gpus the discount is up to 70 percent for memory optimized machine types and you can purchase a committed use contract for a single project or purchase multiple contracts which you can share across many project by enabling shared discounts and sharing your committed use discounts across all your projects reduces the overhead of managing discounts on a per project basis and maximizes your savings by pooling all of your discounts across your project’s resource usage if you have multiple projects that share the same cloud billing account you can enable committed use discount sharing so all of your projects within that cloud billing account share all of your committed use discount contracts and so your sustained use discounts are also pooled at the same time so touching on sustained use discounts these are automatic discounts for running specific compute engine resources a significant portion of the billing month sustained use discounts apply to the general purpose compute and memory optimize machine types as well as sole tenant nodes and gpus again sustained use discounts are applied automatically to usage within a project separately for each region so there’s no action required on your part to enable these discounts so for example when you’re running one of these resources for more than let’s say 25 percent of the month compute engine automatically gives you a discount for every incremental minute that you use for that instance now sustained use discounts automatically apply to vms created by both google kubernetes engine and compute engine but unfortunately do not apply to vms created using the app engine flexible environment as well as data flow and e2 machine types now to take advantage of the full discount you would create your vm instances on the first day of the month as discounts reset at the beginning of each month and so the following table shows the discount you get at each usage level of a vm instance these discounts apply for all machine types but don’t apply to preemptable instances and so sustained use discounts can save you up to a maximum of a 30 percent discount so another great way to calculate savings in google cloud is by using the gcp pricing calculator this is a quick way to get an estimate of what your usage will cost on google cloud so the gcp pricing calculator can help you identify the pricing for the resources that you plan to use in your future architecture so that you are able to calculate how much your architecture will cost you this calculator holds the pricing for almost all resources encapsulated within gcp and so you can get a pretty good idea of what your architecture will cost you without having to find out the hard way this calculator can be found at the url shown here and i will include this in the lesson text below now moving right along to cloud billing budgets so budgets enable you to track your actual spend against your plan spend after you’ve set a budget amount you set budget alert threshold rules that are used to trigger email notifications and budget alert emails help you stay informed about how your spend is tracking against your budget this example here is a diagram 
of a budget alert notification and is the default functionality for any budget alert notifications now to get a little bit more granular you can define the scope of the budget so for example you can scope the budget to apply to the spend of an entire cloud billing account or get more granular to one or more projects and even down to a specific product you can set the budget amount to a total that you specify or base the budget amount on the previous month’s spend when costs exceed a percentage of your budget based on the rules that you set by default alert emails are sent to billing account administrators and billing account users on the target cloud billing account and again this is the default behavior of a budget email notification now as said before the default behavior of a budget is to send alert emails to billing account administrators and billing account users on the target cloud billing account when the budget alert threshold rules trigger an email notification now these email recipients can be customized by using cloud monitoring to specify other people in your organization to receive these budget alert emails a great example of this would be a project manager or a director knowing how much spend has been used up in your budget and the last concept i wanted to touch on when it comes to cloud billing budgets is that you can also use pub sub for programmatic notifications to automate your cost control response based on the budget notification you can also use pub sub in conjunction with billing budgets to automate cost management tasks and this will provide a real-time status of the cloud billing budget and allow you to do things like send notifications to slack or disable billing to stop usage as well as selectively control usage when budget has been met and so these are all the concepts that i wanted to cover when it came to cloud billing budgets now i know this lesson may have been a bit dry and not the most exciting service to dive into but it is very important to know both for the exam and for your role as an engineer when it comes to cutting costs in environments where your business owners deem necessary and so that’s all i had for this lesson so you can now mark this lesson as complete and please join me in the next one where i dive into the console and do some hands-on demos when it comes to committed use discounts budget alerts and editing budget alerts as well as adding a little bit of automation into the budgeting alerts [Music] welcome back in the last lesson i went over a few ways to do cost management and the behaviors of budget alerts in this lesson i will be doing a demo to show you committed use discounts and reservations along with how to create budget alerts and as well how to edit them so with that being said let’s dive in so now i’m going to start off with committed use discounts in order to get there i’m going to find it in compute engine so i’m going to simply go up here on the top left hand corner back to the navigation menu i’m going to go down to compute engine and i’m going to go over here to committed use discounts and as we discussed earlier these commitments for compute engine are resource based and as you can see here we have hardware commitments and reservations now reservations i will get into just a little bit later but with regards to hardware commitments we’re going to get into that right now and as expected i have no current commitments so i’m going to go up to purchase commitment and so i need to start off with finding a name for this 
commitment and so i’m going to name this commitment demo dash commitment it’s going to ask me for a region i’m going to keep it in us central one with the commitment type here is where i can select the type of machine that i’m looking for so i can go into general purpose and 1 and 2 and 2d e2 as well as memory optimize and compute optimized and so i’m going to keep it at general purpose and one again the duration one or three years and we get down to cores i can have as many vcpus as i’d like so if i needed 10 i can do that and i’ll get a pop-up here on the right showing me the estimated monthly total as well as an hourly rate for this specific vm with 10 cores i can also select the duration for three years and as expected i’ll get a higher savings because i’m giving a bigger commitment so bring it back down to one year and let’s put the memory up to 64 gigabytes here i can add gpus and i have quite a few to choose from as well as local ssds and here with the local ssds i can choose as many disks as i’d like as long as it’s within my quota and each disk size is going to be 375 gigabytes so if you’re looking into committed use discounts and using local ssds please keep that in mind again the reservation can be added here and i’ll be getting into that in just a second and now i don’t want to actually purchase it but i did want to show you exactly what a committed use discount would look like and how you would apply it again here on the right hand side it shows me the details of the estimated monthly total and the hourly rate so i’m going to go over here and hit cancel and if i were to have applied it the commitment would show up here in this table and give me all the specified configurations of that instance right here now touching on reservations reservations is when you reserve the vm instances you need so when the reservation has been placed the reservation ensures that those resources are always available for you as some of you might know when you go to spin up a new compute engine vm especially when it comes to auto scaling instance groups the instances can sometimes be delayed or unavailable now the thing with reservations is that a vm instance can only use a reservation if its properties exactly match the properties of the reservation which is why it’s such a great pairing with committed use discounts so if you’re looking to make a resource-based commitment and you always want your instance available you can simply create a reservation attach it to the commitment and you will never have to worry about having the resources to satisfy your workload as they will always be there so again going into create reservation it’ll show me here the name the description i can choose to use the reservation automatically or select a specific reservation the region and zone number of instances and here i can specify the machine type or specify an instance template and again this is another use case where if you need compute engine instances spun up due to auto scaling this is where reservations would apply so getting back to machine type i can choose from vcpus as well as the memory i can customize it i can add as many local ssds as my quotas will allow me and i can select my interface type and i’m going to cancel out of here now when it comes to committed use discounts and reservations as it pertains to the exam i have not seen it but since this is an option to save money i wanted to make sure that i included it in this lesson as this could be a great option for use in your environment so now that we 
covered resource-based committed use discounts i wanted to move into spend based commitments and so where you would find that would be over in billing so again i’m going to go up to the navigation menu in the top left hand corner and go into billing now you’d think that you would find it here under commitments but only when you have purchased a commitment will it actually show up here but as you can see here it’s prompting us to go to the billing overview page so going back to the overview page you’ll find it down here on the right and so i can now purchase a commitment and as we discussed before a spend based commitment can be used for either cloud sql or for vmware engine i select my billing account the commitment name the period either one year or three years and it also shows me the discount which could help sway my decision as well as the region as well as the hourly on-demand commitment now you’re probably wondering what this is and as explained here this commitment is based on the on-demand price and once this is all filled out the commitment summary will be populated and after you agree to all the terms and services you can simply hit purchase but i’m going to cancel out of here and so that is an overview for the spend based commitment and again these committed use discounts i have not seen on the exam but i do think that it’s good to know for your day-to-day environment if you’re looking to save money and really break down costs so now that i’ve covered committed use discounts and reservations i wanted to move over to budgets and budget alerts and because i’m already on the billing page all i need to do is go over here to the left hand menu and click on budgets and alerts now setting up a budget for yourself for this course would be a great idea especially for those who are cost conscious on how much you’re spending with regards to your cloud usage and so we’re to go ahead and create a new budget right now so let’s go up here to the top to create budget and i’m going to be brought to a new window where i can put in the name of the budget and i’m going to call this ace dash budget and because i want to monitor all projects and all products i’m going to leave this as is but if you did have multiple projects you could get a little bit more granular and the same thing with products so i’m going to go ahead and leave it as is and just click on next now under budget type i can select from either a specified amount or the last month’s spend and so for this demo i’m going to keep it at specified amount and because i want to be really conscious about how much i spend in this course i’m going to put in 10 for my target amount i’m going to include the credits and cost and then i’m going to click on next now these threshold rules are where billing administrators will be emailed when a certain percent of the budget is hit so if my spend happens to hit five dollars because i am a billing administrator i will be sent an email telling me that my spend has hit five dollars i also have the option of changing these percentages so if i decided to change it to forty percent now my amount goes to four dollars and this is done automatically so no need to do any calculations but i’m going to keep this here at 50 percent and vice versa if i wanted to change the amount the percentage of budget will actually change now with the trigger i actually have the option of selecting forecasted or actual and so i’m going to keep it on actual and if i want i can add more threshold rules now i’m going to leave 
everything as is and just click on finish and now as you can see here i have a budget name of ace budget now because the budget name doesn’t have to be globally unique in your environment you can name your budget exactly the same and again it’ll give me all the specific configurations that i filled out shows me how much credits i’ve used and that’s it and that’s how you would create a budget alert now if i needed to edit it i can always go back to ace budget and here i can edit it but i’m not going to touch it and i’m just going to hit cancel and so the last thing i wanted to show you before we end this lesson is how to create another budget but being able to send out the trigger alert emails to different users and so in order to do that i’m going to go back up here to create budget i’m going to name this to ace dash budget dash users i’m going to leave the rest as is i’m going to click on next again i’m going to leave the budget type the way it is the target amount i’m going to put ten dollars leave the include credits and cost and just click on next and so here i’m going to leave the threshold rules the way they are and right here under manage notifications i’m going to click off link monitoring email notification channels to this budget now because the email notification channel needs cloud monitoring in order to work i am prompted here to select a workspace which is needed by cloud monitoring so because i have none i’m going to go ahead and create one and so clicking on managing monitoring workspaces will bring you to the documentation but in order for me to get a workspace created i need to go to cloud monitoring now workspace is the top level container that is used to organize and control access to your monitoring notification channels in order for your notification channels to work they must belong to a monitoring workspace so you need to create at least one workspace before adding monitoring notification channels and don’t worry we’ll be getting into greater depth with regards to monitoring in a later section in this course so i’m going to go ahead and cancel this and i’m going to go up to the navigation menu click on there and scroll down to monitoring and then overview and this may take a minute to start up as the apis are being enabled and the default workspace for cloud monitoring is being built okay and now that the monitoring api has been enabled we are now in monitoring the workspace that was created is my first project so now that we have our monitoring workspace created i need to add the emails to the users that i want the alerts to be sent out to and added to the notification channel so in order to do that i’m going to go over here to alerting and up here at the top i’m going to click on edit notification channels and here as you can see are many notification channels that you can enable by simply clicking on add new over here on the right so now what i’m looking for is under email i’m going to click on add new now here i can add the new email address and so for me i’m going to add antony at antonyt.com and you can add whatever email address you’d like and under display name i’m going to add billing admin notification and just click on save and as you can see my email has been added to the notification channel and so this is all i needed to do in order to move on to the next step and so now that i’ve covered creating my monitoring workspace as well as adding another email to my email notification channels i can now go back to billing and finish off my budget alert let’s 
go over here to budgets and alerts create budget and we’re gonna go through the same steps call this billing alert users leave everything else as is and click on next i’m just going to change the target amount to 10 click on next i’m going to leave everything here as is and i’m going to go back to click on link monitoring email notification channels to this budget now if you notice when i click on select workspace my first project shows up and here it will ask me for my notification channels and because i’ve already set it up i can simply click on it and you’ll see the billing admin notification channel and so if i didn’t have this set up i can always go to manage notification channels and it’ll bring me back to the screen which you saw earlier and so now that that’s set up i can simply click on finish and so now that i have a regular budget alert i also have another budget alert that can go to a different email so if you have a project manager or a director that you want to send budget alerts to this is how you would do it and so that about covers this demo on committed use discounts reservations budgets and budget alerts and so that’s all i wanted to cover for this lesson so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this short lesson i will be covering the exporting of your billing data so that you’re able to analyze that data and understand your spend at a more granular level i will also be going through a short demo where i will show you how to enable the export billing feature and bring it into bigquery to be analyzed now cloud billing export to bigquery enables you to export granular google cloud billing data such as usage cost details and pricing data automatically to a bigquery data set that you specify then you can access your cloud billing data from bigquery for detailed analysis or use a tool like data studio to visualize your data just a quick note here that billing export is not retroactive and this should be taken into consideration when planning for analysis on this data and so there are two types of cloud billing data that you can export there’s the daily cost detail data and the pricing data and these can be selected right within the console depending on your use case and so now that we’ve gone through exactly what billing export is i wanted to get into a demo and show you how to export your cloud billing data to bigquery and go through all the necessary steps to get it enabled so when you’re ready join me in the console and so here we are back in the console and so in order to enable billing export i’m going to be going to the billing page so i’m going to move up to the top left hand corner to the navigation menu and click on billing here in the left hand menu you’ll see billing export and you can just click on there and so for those just coming to billing export for the first time there’s a quick summary of exactly what the bigquery export is used for and as we discussed earlier there is an option for the daily cost detail and for pricing and i’m going to use the daily cost detail in this demo and export that data to bigquery so the first step i’m going to do is to click on edit settings and it’s going to bring me to a new page where it will ask me for my project and this is where my billing data is going to be stored but as you can see here i’m getting a prompt that says you need to create a bigquery data set first now the bigquery data set that is asking for is where the billing data is going to be stored so in 
order to move forward with my billing export i need to go to bigquery and set up a data set so i’m going to simply click on this button here that says go to bigquery and it’s going to bring me to the bigquery page where i’ll be prompted with a big welcome note you can just click on done and over here in the right hand side where it says create data set i’m just going to click on there and i’m going to create my new data set and so for my data set id i’m going to call this billing export and just as a note with the data set id you can’t use any characters like hyphens commas or periods and therefore i capitalize the b and the e now with the data location the default location is the us multi region but i can simply click on the drop down and have an option to store my data in a different location but i’m going to keep it at default i have the option of expiring this table in either a certain amount of days or to never expire as well when it comes to encryption i’m going to leave it as google manage key as opposed to a customer manage key and i’ll get into encryption and key management a little later on in this course i’m going to go ahead and move right down to the bottom and click on create data set and now my data set has been created i can now see it over here on the left hand side menu where subtle poet 28400 is the id for my project if i simply click on the arrow beside it it’ll show my billing export data set because there’s nothing in it nothing is showing and so now that the data set is set up i can now go back to the billing export page and finish setting up my billing export so with that being said i’m going to go back up to the navigation menu head over to billing and go to billing export under daily cost detail i’m going to click on edit settings and because i have a data set already set up and since it’s the only one it has been propagated in my billing export data set field if i had more data sets then i would be able to select them here as well so i’m going to leave the data set at billing export and simply click on save and so now that billing export has been enabled i’ll be able to check on my billing as it is updated each day as it says here and to go right to the data set i can simply click on this hot link and it’ll bring me right to bigquery and so there is one last step that still needs to be done to enable the billing export to work and that is to enable the bigquery data transfer service api so in order to do that we need to go back to the navigation menu go into apis and services into the dashboard and now i’m going to do a search for the bigquery data transfer service and i’m going to simply go up here to the top search bar and simply type in bigquery and here it is bigquery data transfer api i’m going to simply click on that and hit enable and this might take a minute and you may be asked to create credentials over here on the top right and you can simply ignore that as they are not currently needed and so now that the bigquery data transfer service api has been enabled i’m now able to go over to bigquery and take a look at my billing export data without any issues now it’s going to take time to propagate but by the time i come here tomorrow the data will be fully propagated and i’ll be able to query the data as i see fit and so although this is a short demo this is necessary to know for the exam as well being an engineer and looking to query your billing data you will now have the knowledge in order to take the steps necessary that will allow you to do so and so 
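once the export has had a day or so to populate a first query against it might look something like this rough sketch where the project id the billing export data set and the export table name with its billing account id suffix are all placeholders for your own

# total cost per service from the daily cost detail export using standard sql
bq query --use_legacy_sql=false '
SELECT service.description AS service,
       ROUND(SUM(cost), 2) AS total_cost
FROM `your-project-id.BillingExport.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
GROUP BY service
ORDER BY total_cost DESC'

again that is only a sketch and your table name will end with your own billing account id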
that’s all i have for this lesson and demo on export billing data so you can now mark this lesson as complete and let’s move on to the next one welcome back in this hands-on demo i’m going to go over apis in google cloud now the google cloud platform is pretty much run on apis whether it’s in the console or the sdk under the hood it’s hitting the apis now some of you may be wondering what is an api well this is an acronym standing for application programming interface and it’s a standard used amongst the programming community in this specific context it is the programming interface for google cloud services and as i said before both the cloud sdk and the console are using apis under the hood and it provides similar functionality now when using the apis directly it allows you to enable automation in your workflow by using the software libraries that you use for your favorite programming language now as seen in previous lessons to use a cloud api you must enable it first so if i went to compute engine or when i was enabling monitoring i had to enable the api so no matter the service you’re requesting here in google cloud and some of them may be even linked together it always has to be enabled in order to use it now getting a little bit more granular when using an api you need to have a project so when you enable the api you enable it for your project using the permissions on the project and permissions on the api to enable it now since this is a demo i want to go over to the navigation menu and go straight into apis and services and so here is the dashboard of the apis and services you can see the traffic here the errors and the latency with regards to these apis as well up here it has a time frame for the median latency that you can select for a more granular search now when it comes to what is enabled already you can see a list here of the apis that are enabled and since we haven’t done much there’s only a few apis that are enabled now this hands-on demo is not meant to go into depth with apis but is merely an overview so that you understand what the apis are used for in context with google cloud if you’d like to go more in depth with regards to apis and possibly get certified in it the apigee certification with its corresponding lessons would be a great way to get a little bit more understanding but for this demo we’re going to stick with this overview and so in order to search for more apis that need to be enabled or if you’re looking for something specific you can come up here to enable apis and services or you can do a quick search on the search bar at the top of the page but just as a quick glance i’m going to go into enable apis and services and so you will be brought to a new page where you will see the api library on the left you will see a menu where the apis are categorized and all the apis that are available when it comes to google cloud and other google services so as you saw before when i needed to enable the api for bigquery i would simply type in bigquery and i can go to the api and since the api is enabled there’s nothing for me to do but if i needed to enable it i could do that right there and just as a quick note when going to a service that’s available in the console the api automatically gets enabled when you go and use it for the first time and so again this is just a quick overview of apis and the api library with regards to google cloud a short yet important demo to understand the inner workings of the cloud sdk and the console so just remember that when using any service in google cloud you must enable its api in order to start using it
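and just as a rough sketch of how that looks from the command line where the compute engine api is only an example service

# see which apis are already enabled on the current project
gcloud services list --enabled

# enable an api before using its service for example the compute engine api
gcloud services enable compute.googleapis.com

# and disable it again if it is no longer needed
gcloud services disable compute.googleapis.com

the same pattern works for any other service api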
and so that about wraps up this demo for cloud apis so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this demo i’ll be creating and setting up a new gmail user as an admin user for use moving ahead in this course as well as following google’s best practices we need a user that has lesser privileges than the user account that we set up previously and i’ll be going through a full demo to show you how to configure it now in a google cloud setup that uses a g suite or cloud identity account a super administrator account is created to administer the domain this super admin account has irrevocable administrative permissions that should not be used for day-to-day administration this means that no permissions can be taken away from this account and it has the power to grant the organization admin role or any other role for that matter and recover accounts at the domain level which makes this account extremely powerful now since i do not have a domain set up and am not using a g suite or cloud identity account i don’t need to worry about a super admin account in this specific environment as gmail accounts are standalone accounts that are meant to be personal and hold no organization and usually start at the project level and so to explain it in a bit more detail i have a diagram here showing the two different accounts i will be using and the structure behind it now as we discussed before billing accounts have the option of paying for projects in a different organization so when creating new projects using the two different gmail accounts they were created without any organization and so each account is standalone and can create their own projects now what makes them different is that the antony gcloud ace account owns the billing account and is set as a billing account administrator and the tony bowtie ace account is a billing account user that is able to link projects to that billing account but does not hold full access to billing so in the spirit of sticking to the principle of least privilege i will be using the tony bowtie ace account that i had created earlier with lesser privileges on billing it will still give me all the permissions i need to create edit and delete resources without all the powerful permissions needed for billing i will be assigning this new gmail user the billing account user role and it will allow you to achieve everything you need to build for the remainder of the course so just as a review i will be using a new google account that i have created or if you’d like you can use a pre-existing google account and as always i recommend enabling two-step verification on your account as this user will hold some powerful permissions to access a ton of different resources in google cloud so now that we’ve gone over the details of the what and why for setting up this second account let’s head into the demo and get things started so whenever you’re ready join me over in the console and so here i am back in the console and so before switching over to my new user i need to assign the specific roles that i will need for that user which is the billing account user role so to assign this role to my new user i need to head over to billing so i’m going to go back up here to the left-hand corner and click on the navigation menu and go to billing again in the left-hand menu i’m going to move down to account management and click on there and over here under my billing account
you will see that i have permissions assigned to one member of the billing account administrator and as expected i am seeing anthony g cloud ace gmail.com and so i want to add another member to my billing account so i’m going to simply click on add members and here i will enter in my new second user which is tony bowtie ace gmail.com and under select a role i’m going to move down to billing and over to billing account user and as you can see here this role billing account user will allow permissions to associate projects with billing accounts which is exactly what i want to do and so i’m going to simply click on that and simply click on save and so now that i’ve assigned my second user the proper permissions that i needed i am now going to log out and log in as my new user by simply going up to the right hand corner in the icon clicking on the icon and going to add account by adding the account i’ll be able to switch back and forth between the different users and i would only recommend this if you are the sole user of your computer if you are on a computer that has multiple users simply sign out and sign back in again with your different user and here i’m asked for the email which would be tony bowtie ace gmail.com i’m gonna plug in my password and it’s going to ask me for my two-step verification i’m going to click on yes and i should be in and because it’s my first time logging into google cloud with this user i get a prompt asking me to agree to the terms of service i’m going to agree to them and simply click on agree and continue and so now i’m going to move back up to overview and as you can see here i don’t have the permissions to view costs for this billing account and so all the permissions assigned for the billing account administrator which is antony g cloud ace is not applied to tony bowtie ace and therefore things like budgets and alerts even billing exports i do not have access to so moving forward in the course if you need to access anything in billing that you currently don’t have access to like budgets and alerts you can simply switch over to your other account and take care of any necessary changes but what i do have access to is if i go up here to my billing account click on the drop down menu and click on manage billing accounts but as you can see here i do have access to view all the billing accounts along with the projects that are linked to them now because these gmail accounts are standalone accounts this project here that is owned by antony gcloud ace i do not have access to in order to access the project i would have to have permissions assigned to me directly in order for me to actually view the project or possibly creating any resources within that project now if i go back to my home page i can see here that i have no projects available and therefore no resources within my environment and so to kick it off i’m going to create a new project and so under project name i am going to call this project tony and you can name your project whatever you’d like under location i don’t have any organization and so therefore i’m just going to click on create and this may take a minute to create and here we are with my first project named project tony as well as my notification came up saying that my project has been created and so now that this project has been created it should be linked to my billing account so in order to verify this i’m going to go over into billing and under the drop down i’m going to click on manage billing accounts and as you can see here the number of 
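moving pieces is pretty small, assign a role, switch users, create a project and link it to billing, and as a rough hypothetical gcloud sketch where the billing account id emails and project id are placeholders and the billing commands may need a recent sdk or the beta billing component it would look something like this

# as the billing account administrator grant the billing account user role to the second user
gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
  --member=user:tonybowtieace@gmail.com \
  --role=roles/billing.user

# as that second user create a project and link it to the billing account
gcloud projects create project-tony-example-id --name="Project Tony"
gcloud billing projects link project-tony-example-id \
  --billing-account=0X0X0X-0X0X0X-0X0X0X

again that is just a sketch of the equivalent commands not something you need to run right now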
and back in the console you can see that the number of projects has gone from one to two and if i click on the menu up here under my projects you can see that project tony is a project that is linked to my billing account i also have the permissions to either disable billing or change billing for this specific project yet in order to change billing i will have to have another billing account but there are no other billing accounts available and so moving forward i will only have this one billing account and so any projects i decide to create will be linked to this billing account and so this is a great example of trimming down the permissions needed for different users and even though this is not a domain owned account but a personal account it’s always recommended to practice the principle of least privilege whenever you come across assigning permissions to any user now as i said before any billing related tasks that you decide to do moving forward you can simply switch over to your other user and do the necessary changes and so that’s all i have for this lesson so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this short lesson i’m going to be covering an overview of the cloud sdk and the command line interface as it is an essential component of interacting with google cloud for the exam you will need to get familiar with the command line and the commands needed in order to create modify and delete resources this is also an extremely valuable tool for your tool belt in the world of being a cloud engineer as i have found it is a very common and easy way to implement small operations within google cloud as well as automating the complex ones so what exactly is the cloud sdk well the cloud sdk is a set of command line tools that allows you to manage resources through the terminal in google cloud and includes commands such as gcloud gsutil bq and kubectl using these commands allows you to manage resources such as compute engine cloud storage bigquery kubernetes and so many other resources these tools can be run interactively or through automated scripts giving you the power and flexibility that you need to get the job done the cloud sdk is so powerful that you can do everything that the console can do yet has more options than the console you can use it for infrastructure as code autocompletion helps you finish all of your command line statements and for those of you who run windows the cloud sdk has got you covered with availability for powershell now in order to access google cloud platform you will usually have to authorize google cloud sdk tools so to grant authorization to cloud sdk tools you can either use a user account or a service account now a user account is a google account that allows end users to authenticate directly to your application for most common use cases on a single machine using a user account is best practice now going the route of a service account this is a google account that is associated with your gcp project and not a specific user a service account can be used by providing a service account key to your application and is recommended when scripting cloud sdk tools for use on multiple machines now having installed the cloud sdk it comes with some built-in commands that allow you to configure different options using gcloud init this initializes and authorizes access and performs other common cloud sdk setup steps using some optional commands gcloud auth login authorizes your access for gcloud with google user credentials and sets the current account as active
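and as a quick sketch of those authorization commands typed out where the key file path is just a placeholder

# first time setup, pick or create a configuration, authorize an account and choose a project
gcloud init

# on a remote terminal with no browser keep the whole flow in the console
gcloud init --console-only

# authorize an additional user account without redoing the rest of the setup
gcloud auth login

# or authorize with a service account key file instead
gcloud auth activate-service-account --key-file=/path/to/key.json

now speaking of optional commands gcloud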
config is another optional configuration that allows you to configure accounts and projects as well gcloud components allow you to install update and delete optional components of the sdk that give you more flexibility with different resources now after having installed the cloud sdk almost all gcloud commands will follow a specific format shown here is an example of this format and is broken down through component entity operation positional arguments and flags and i’ll be going through some specific examples in the demonstration a little bit later on and so that’s all i wanted to cover in this overview of the cloud sdk and the cli so you can now mark this lesson as complete and you can join me in the next one where i go ahead and demonstrate installing the cloud sdk [Music] back in this demonstration i will show you how to download install and configure the cloud sdk and i will be using the quick start guide that lies in the cloud sdk documentation which holds all the steps for installing the cloud sdk on different operating systems and i will make sure to include it in the lesson text below this demo will show you how to install the cloud sdk on each of the most common operating systems windows mac os and ubuntu linux all you need to do is follow the process on each of the pages and you should be well on your way so with that being said let’s get this demo started and bring the cloud sdk to life by getting it all installed and configured for your specific operating system so as i explained before i’m gonna go ahead and install the cloud sdk on each of the three different operating systems windows mac os and ubuntu linux and i will be installing it with the help of the quick start guide that you see here and as i said before i’ll be including this link in the lesson text and so to kick off this demo i wanted to start by installing the cloud sdk on windows so i’m going to move over to my windows virtual machine and i’m going to open up a browser and i’m going to paste in the link for the quick start guide and you can click on either link for the quick start for windows and each quick start page will give me the instructions of exactly what i need to do for each operating system so now it says that we need to have a project created which i did in the last lesson which is project tony so next i’m going to download the cloud sdk installer so i’m going to click on there and i’ll see a prompt in the bottom left hand corner that the installer has been downloaded i’m going to click on it to open the file and i’m going to be prompted to go through this wizard and so i’m just going to click on next i’m going to agree to the terms of the agreement it’s going to be for just me anthony and my destination folder i’ll keep it as is and here’s all the components that it’s going to install i’m going to keep the beta commands unchecked as i don’t really need them and if i need them later then i can install that component for those who are more experienced or even a bit curious you could click on the beta commands and take it for a test drive but i’m going to keep it off and i’m going to click install and depending on the power of your machine it should take anywhere from two to five minutes to install and the google cloud sdk has been installed and so i’m just going to click on next and as shown here in the documentation you want to make sure that you have all your options checked off is to create a start menu shortcut a desktop shortcut you want to start the google cloud sdk shell and lastly you want to 
run gcloud init in order to initialize and configure the cloud sdk now i’m going to click on finish to exit the setup and i’m going to get a command shell that pops up and i’m just going to zoom in for better viewing and so it says here my current configuration has been set to default so when it comes to configuration this is all about selecting the active account and so my current active account is going to be set as the default account it also needed to do a diagnostic check just to make sure that it can connect to the internet so that it’s able to verify the account and so now the prompt is saying you must log in to continue would you like to log in yes you can just click on y and then enter and it’s going to prompt me with a new browser window where i need to log in using my current account so that i can authorize the cloud sdk so i’m going to log in with my tony bowtie ace account click on next type in my password again it’s going to ask me for my two-step verification and i’m going to get a prompt saying that the google sdk wants to access my google account i’m going to click on allow and success you are now authenticated with the google cloud sdk and if i go back to my terminal i am prompted to enter some values so that i can properly configure the google cloud sdk so i’m going to pick a cloud project to use and i’m going to use project tony that i created earlier so i’m going to enter 1 and hit enter and again whatever project that you’ve created use that one for your default configuration and it states here that my current project has been set to project tony and again this configuration is called default so if i have a second configuration that i wanted to use i can call it a different configuration but other than that my google cloud sdk is configured and ready to use so just to make sure that it’s working i’m going to run a couple commands i’m going to run the gcloud help command and as you can see it’s given me a list of a bunch of different commands that i can run and to exit you can just hit ctrl c i’m going to run gcloud config list and this will give me my properties in my active configuration so my account is tony bowtie ace gmail.com i’ve disabled usage reporting and my project is project tony and my active configuration is set as default now don’t worry i’m going to be covering all these commands in the next lesson and i’m going to be going into detail on how you can configure and add other users within your cloud sdk configuration so as we go deeper into the course i’m going to be using a lot more command line just so you can get familiar with the syntax and become a bit more comfortable with it so now that i’ve installed the cloud sdk on windows the process will be a little bit different when it comes to installation on the other operating systems but will be very similar when it comes to the configuration so now let’s head over to mac os and install the cloud sdk there and so here we are in mac os and so the first thing i want to do is i want to open up a web browser and i want to go to the cloud sdk quick start page so i’m just going to paste in the url here and we’re looking for the quick start for mac os and so you can either click on the menu from the left hand side or the menu here on the main page and so like i said before this installation is going to be a little bit different than what it was in windows and so there’s a few steps here to follow and so the first step asks us if we have a project already created which we’ve already done and is project tony and 
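before getting into the mac specific steps here’s a quick reminder of the general gcloud command anatomy from the overview lesson written out as an annotated sketch where the instance name and zone are only examples

# gcloud <component> <entity> <operation> <positional args> <flags>
gcloud compute instances create my-instance --zone=us-central1-a
# component: compute | entity: instances | operation: create
# positional argument: my-instance | flag: --zone=us-central1-a

keep that shape in mind as the commands get longer later in the course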
so the next step tells us that the cloud sdk requires python and so we want to check our system to see if we have a supported version so in order to check our version we’re going to use this command here python minus capital v and i’m going to copy that to my clipboard and then open up a terminal and i’m going to zoom in for better viewing and so i’m going to paste the command in here and simply click on enter and as you can see here i’m running python 2.7 but the starred note here says that the cloud sdk will soon move to python 3 and so in order to avoid having to upgrade later you’d want to check your version for python 3 and so you can use a similar command by typing in python 3 space minus capital v and as you can see i’m running version 3.7.3 and so moving back to the guide i can see here that it is a supported version if you do not have a supported version i will include a link on how to upgrade your version in the lesson text below and so now that i’ve finished off this step let’s move on to the next one where i can download the archive file for the google cloud sdk again most machines will run the 64-bit package so if you do have the latest operating system for mac os you should be good to go so i’m going to click on this package and it’ll start downloading for me and once it’s finished you can click on downloads and click on the file itself and it should extract itself in the same folder with all the files and folders within it and so just as another quick note google prefers that you keep the google cloud sdk in your home directory and so following the guide i’m going to do exactly that and so the easiest way to move the folder into your home directory is to simply drag and drop it into the home folder on the left hand menu it should be marked with a little house icon and nested under favorites i can now move into my home folder and confirm that it is indeed in here and so now moving to the last step which shows as optional the guide asks us to install a script to add cloud sdk tools to our path now i highly recommend that you install this script so that you can add the tools for command completion and i will get into command completion a little bit later on in the next couple of lessons and so here is the command that i need to run so i’m going to copy that to my clipboard again and i’m going to move back over to my terminal i’m going to clear my screen and so to make sure i’m in my home directory where the cloud sdk folder is i’m going to simply type ls and so for those who don’t know ls is a linux command that will list all your files and folders in your current path and as you can see here the google cloud sdk folder is here in my home directory and therefore i can run that script so i’m going to paste it in here and i’m going to hit enter and so a prompt comes up asking me whether or not i want to disable usage reporting and because i want to help improve the google cloud sdk i’m going to type in y for yes and hit enter and so as i was explaining before the cloud sdk tools will be installed in my path and so this is the step that takes care of it and so i’m going to type y and enter for yes to continue and usually the path that comes up is the right one unless you’ve changed it otherwise so i’m going to leave this blank and just hit enter and that’s it i’ve installed the tools so now in order for me to run gcloud init i have to start a new shell as it says here for the changes to take effect so i’m going to go up here to the top left hand menu click on terminal and quit terminal
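and just so you have the whole mac os flow in one place here’s a rough sketch of those same steps typed out where the archive file name depends on the sdk version and your cpu and the commands assume you run them from the folder you downloaded the archive into

# confirm a supported python 3 is available
python3 -V

# extract the downloaded archive into the home directory
tar -xf google-cloud-sdk-*-darwin-x86_64.tar.gz -C "$HOME"

# run the bundled install script to add the tools to your path and enable command completion
"$HOME/google-cloud-sdk/install.sh"

treat that as a sketch rather than something to paste blindly and so now i can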
restart the terminal again i’m going to zoom in for better viewing and now i’m able to run gcloud init in order to initialize the installation again the prompt to do the diagnostic tests and i can see i have no network issues but it shows me that i have to login to continue i would like to log in so i’m going to type y for yes and hit enter and so a new browser has popped open prompting me to enter my email and password and so i’m going to do that now i’m going to authorize my account with two-step verification i’m not going to save this password and yes i want to allow the google cloud sdk to access my google account so i’m going to click on allow and it shows that i’ve been authenticated so now i’m going to move back to my terminal and so just as a note before we move forward in case you don’t get a browser pop-up for you to log into your google account you can simply highlight this url copy it into your browser and it should prompt you just the same so moving right ahead it shows that i’m logged in as tonybowtieace gmail.com which is exactly what i wanted and it’s asking me to pick a cloud project to use now i want to use project tony so i’m going to type in 1 and enter and that’s it the cloud sdk has been configured and just to double check i’m going to run the gcloud config list command to show me my configuration and as you can see here my account is tonybowties gmail.com my disable usage reporting is equal to false and my project is project tony and again my active configuration is set as default and so that about covers the cloud sdk install for mac os and so finally i’m going to move over to ubuntu linux and configure the cloud sdk there and so here we are in ubuntu and like i did in the other operating systems i’m going to open up the browser and i’m going to paste in the url for the quick start guide and so we want to click on the quick start for debian and ubuntu and so again you have your choice from either clicking on the link on the left hand menu or the one here in the main menu and so following the guide it is telling us that when it comes to an ubuntu release it is recommended that the sdk should be installed on an ubuntu release that has not reached end of life the guide also asks to create a project if we don’t have one already which we have already done and so now we can continue on with the steps and so since we are not installing it inside a docker image we’re gonna go ahead and use the commands right here now you can copy all the commands at once by copying this to the clipboard but my recommendation is to install each one one by one so i’m going to copy this and i’m going to open up my terminal i’m going to zoom in for better viewing and i’m going to paste that command in and click on enter it’s going to prompt me for my password and it didn’t come up with any errors so that means it was successfully executed and so i’m going to move on to the next command i’m going to copy this go back over to my terminal and paste it in now for those of you who do not have curl installed you will be prompted to install it and given the command to run it so i’m going to copy and paste this command and click on enter i’m going to type in y for yes to continue and it’s going to install it after a couple of minutes okay now that curl has been installed i’m able to run that command again i’m going to clear the screen first and that executed with no errors as well and so now moving on to the last command this command will download and install the google cloud sdk i am prompted to 
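confirm a couple of things along the way and for reference the debian and ubuntu steps from the guide boil down to something like this rough sketch where the repository line and key url are the ones from the quick start at the time of this recording and can change so always prefer the current guide

# add the cloud sdk distribution as a package source
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

# make sure apt can talk over https and import google's public key
sudo apt-get install -y apt-transport-https ca-certificates gnupg curl
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -

# update the package list and install the sdk
sudo apt-get update && sudo apt-get install -y google-cloud-sdk

back in my own terminal that last apt-get command is now asking whether it can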
install some packages and so i’m going to type y for yes to continue so now it’s going to download and install the necessary packages needed for the google cloud sdk and depending on the speed of your internet and the speed of your machine this could take anywhere from two to five minutes okay and the google cloud sdk has been installed and so now that the cloud sdk has been installed we can now initialize the configuration so i’m going to type in gcloud init again the prompt with the network diagnostics i’m going to type y for yes to log in and i’m going to get the prompt for my email and password i’m going to take care of my two-step verification and i’m going to allow the google cloud sdk to access my google account and success i am now authenticated and moving back to the terminal just to verify it and again i’m going to pick project tony as the cloud project to use and the cloud sdk has been configured as always i’m going to do a double check by running a gcloud config list and as expected the same details has come up and so this is a quick run through on all three operating systems windows mac os and ubuntu linux on how to install the google cloud sdk and this will help you get started with becoming more familiar and more comfortable using the command line interface and so that about wraps up for this lesson so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in the last demo we went through a complete install of the cloud sdk and configured our admin account to be used within it in this demonstration i will be walking through how to manage the cloud sdk and this will involve how to utilize it and how to customize it to your environment as well as configuring our other user account so that we are able to apply switching configurations from one user to another and so i will be going through initializing and authorization configurations and properties installing and removing components as well as a full run through of the gcloud interactive shell so let’s kick off this demo by diving into a pre-configured terminal with the sdk installed and configured with my second user tony bowtie ace gmail.com and so here i am in the mac os terminal and just be aware that it doesn’t matter which operating system you’re running as long as the sdk is installed and you have your user configured and so as you saw in the last lesson after you install the cloud sdk the next step is typically to initialize the cloud sdk by running the gcloud init command and this is to perform the initial setup tasks as well as authorizing the cloud sdk to use your user account credentials so that it can access google cloud and so in short it sets up a cloud sdk configuration and sets a base set of properties and this usually covers the active account the current project and if the api is enabled the default google compute engine region and zone now as a note if you’re in a remote terminal session with no access to a browser you can still run the gcloud init command but adding a flag of dash dash console dash only and this will prevent the command from launching a browser-based authorization like you saw when setting up your last user so now even though i have a user already set up i can still run gcloud init and it will give me a couple different options to choose from so i can re-initialize this configuration with some new settings or i can create a new configuration now for this demo since we already have two users and to demonstrate how to switch between different users i 
want to create a new configuration with my very first user so i’m going to type in 2 and hit enter and it’s going to ask me for a configuration name now it asks me for a configuration name because when setting up your first configuration it’s set as default and because i know that this user account has full access to billing as well as administration privileges i’m going to call this configuration master and i’m going to hit enter it did the necessary network checks and now it’s asking me for which account i want to use this configuration for now if tony bowtie ace had access to two different google cloud accounts i would be able to add a different configuration here and so because i’m going to log in with a new account i’m going to put in two and hit enter and so again it brought me to my browser window and i’m going to log in using another account and so here you can type in the first account that you created and for me it was antony gcloud ace gmail.com i hit next and i’m going to enter my password it’s going to ask me for my two-step verification and i don’t want to save this password and i’m going to allow the google cloud sdk to access my google account and i am now authenticated so moving back to the console you can see here that i am currently logged in and it’s asking me to pick a cloud project to use now since i only have one project in that google cloud account which is subtle poet i’m going to choose one and since i have the compute engine api enabled i am now able to configure a default compute region and zone and so i’m going to hit y for yes to configure it and as you can see there are 74 different options to choose from and if you scroll up a little bit you should be able to find the zone that you’re looking for and so for this course we are going to be using us central one dash a and so this is number eight so i’m going to scroll back down and type in eight and so now my master configuration has been configured with my antony g cloud ace account using us central 1a as the compute engine zone now touching back on authorization if i didn’t want to set up a whole configuration i can simply type in gcloud auth login and this will allow me to authorize just the user account only so gcloud init would authorize access and perform the cloud sdk setup steps and gcloud auth login will authorize the access only now as i mentioned in a previous lesson you can use a service account for authorization to the cloud sdk tools and this would be great for a compute instance or an application but would need a service account key file in order to authorize it and so moving back to our user accounts when running the cloud sdk you can only have one active account at any given time and so to check my active account i can type in the command gcloud auth list and this will give me a list of all the accounts that have been authorized and so whenever you run a gcloud init it will use that account as the active account and as you can see here the antony gcloud ace gmail.com has a star beside it and this is marked as the active account and so in essence the account with the star beside it is the active account and so i’m looking to change my active account back to tony bowtie ace and in order for me to do that the command is conveniently shown here and so i’m going to go ahead and run that and the account would be the user shown above and so when i do a gcloud auth list i can see that my active account is now back to tony bowtie bowtieace gmail.com now if you wanted to switch the account on a per 
command basis you can always do that using the flag dash dash account after the command and put in the user account that you want to use and so let’s say i wanted to revoke credentials from an account that i don’t need anymore i can simply use the command gcloud auth revoke followed by the username and it will revoke the credentials for that account and so doing this would remove your credentials and any access tokens for any specific account that you choose that’s currently on your computer and so if we’re looking for that specific account we can always use the gcloud info command and it will give us the path for the user config directory and it is this directory that holds your encrypted credentials and access tokens alongside with your active configurations and any other configurations as well now as you can see here running the gcloud info command will also give you some other information everything from the account the project the current properties and where the logs can be found so now moving on to configurations a configuration is a named set of gcloud cli properties and it works kind of like a profile and so earlier on i demonstrated how to set up another configuration through gcloud init so now if i run a gcloud config list command it would give me all the information of the active configuration so as you can see here my user has changed but my configuration has stayed the same now as seen previously in a different lesson tony bow tie ace does not have access to the project subtle poet this project belongs to antony g cloud ace and the configuration was set for that account now if tony bowtie ace did have access to the subtle poet project then i could use this configuration but it doesn’t and so i want to switch back to my other configuration and how i would do this is type in the command gcloud config configurations activate and the configuration that i set up for tony bowtie ace is the default configuration and so now that it has been activated i can now run a gcloud config list and as you can see here the configuration is back to default setup during the initialization process for tony bowtie ace now if i wanted to create multiple configurations for the same user account i can simply type in the command gcloud config configurations create but if i wanted to just view the configuration properties i can always type in the command gcloud config configurations describe and as you can see after the describe i needed the configuration name to complete the command and so i’m going to do that now and i’ve been given all the properties for this configuration now another thing that i wanted to
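do before moving on is pull the account and configuration commands from this demo together into one quick reference sketch the emails and configuration names are the ones from this demo so swap in your own

# list authorized accounts, the starred one is active
gcloud auth list

# switch the active account without touching the rest of the configuration
gcloud config set account tonybowtieace@gmail.com

# list activate and inspect named configurations
gcloud config configurations list
gcloud config configurations activate default
gcloud config configurations describe master

# revoke stored credentials for an account you no longer need
gcloud auth revoke antonygcloudace@gmail.com

# show where configurations credentials and logs live on disk
gcloud info

none of these commands change any resources they only affect which credentials and defaults the sdk uses locally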
now something else to share when it comes to properties is that you can change the project or the compute region and zone by simply typing in the command gcloud config set now if i wanted to change the project i can simply type in project and the project name if it was for the compute zone i can simply type in compute forward slash zone and the specific zone and just as a note properties that are not in the core section need to be prefixed with their section name just like compute forward slash zone as well when you are setting the properties this only applies to the active configuration if you want to change the configuration of one that is not active then you’d have to switch to it and run the gcloud config set command and so moving on i wanted to touch on components which are the installable parts of the sdk and when you install the sdk the components gcloud bq gsutil and the core libraries are installed by default now you probably saw a list of components when you ran the gcloud init command and so to see all the components again you can simply type in the gcloud components list command and if you scroll up you’re able to see all the components that are available that you can install at your convenience and so if i wanted to install the kubectl component i can type in the command gcloud components install kubectl and a prompt will come up asking me if i want to continue with this i want to say yes and now it will go through the process of installing these components and so just to verify if i run the command gcloud components list you can see here that i have the kubectl component installed now if i wanted to remove that component i can simply type in gcloud components remove and then the component that i want to remove which is kubectl i’m going to be prompted if i want to do this i’m going to say yes and it’s going to go through the stages of removing this component and it’s been successfully uninstalled and so if you’re working with a resource that you need a component for you can simply install or uninstall it using the gcloud components command and so one last thing about components before we move on is that you can update your components to make sure you have the latest version and so in order to update all of your installed components you would simply run the command gcloud components update and so before i go ahead and finish off this demonstration i wanted to touch on the gcloud interactive shell the gcloud interactive shell provides a richer shell experience simplifying commands and documentation discovery with as you type autocompletion and help text snippets below it produces suggestions and autocompletion for gcloud bq gsutil and kubectl command line tools as well as any command that has a man page sub commands and flags can be completed along with online help as you type the command and because this is part of the beta component i need to install it and so i’m going to run the command gcloud components install beta and i want to hit yes to continue and this will go ahead and kick off the installation of the gcloud beta commands and so now that it’s installed i’m going to simply clear the screen and so now in order to run the gcloud interactive shell i need to run the command gcloud beta interactive and so now for every command that i type i will get auto suggestions that will help me with my commands and so to see it in all of its glory i’m going to start typing and as you can see it’s giving me the option between gcloud or gsutil and i can use the arrow to choose either one and below it it’ll also show me the different flags that i can use for these specific commands and how to structure them
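and before going any further with the interactive shell here’s a compact recap of the property and component commands from a moment ago as a sketch where the project id and zone are just examples

# set properties on the active configuration
gcloud config set project project-tony-example-id
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

# list install remove and update optional sdk components
gcloud components list
gcloud components install kubectl
gcloud components remove kubectl
gcloud components update

alright back to the interactive shell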
and so for now i’m going to run gsutil version minus l and as you can see here it’s giving me all the information about this command and what it can do and so i’m going to hit enter and as you can see my gsutil version is 4.52 and along with the version number i’m also given all the specific information with regards to this gsutil version and this can be used with absolutely any command used on the google cloud platform and so i’m going to go ahead and do that again but running a different command so i’m just going to first clear the screen and i’m going to type gcloud compute instances and as you can see the snippet on the bottom of the screen is showing me not only the command and how it’s structured but also the url for the documentation so continuing on gcloud compute instances i’m going to do a list and i’m going to filter it by using the flag dash dash filter and i’m going to filter the us east one a zone and i’m going to hit enter and as expected there are no instances in us east 1a and as you’ve just experienced this is a great tool and i highly recommend that you use it whenever you can now i know this is a lot to take in and a lot of these commands will not show up on the exam but again getting comfortable with the command line and the sdk will help you on your path to becoming a cloud engineer as well it will help you get really comfortable with the command line and before you know it you’ll be running commands in the command line and prefer it over using the console and so that’s all i have for this demo on managing the cloud sdk so you can now mark this lesson as complete and let’s move on to the next one welcome back in this demonstration i’m going to be talking about the always available browser-based shell called cloud shell cloud shell is a virtual machine that is loaded with development tools and offers a persistent five gigabyte home directory that runs on google cloud cloud shell is what provides you command line access to your google cloud resources within the console cloud shell also comes with a built-in code editor that i will be diving into and allows you to browse file directories as well as view and edit files while still accessing the cloud shell the code editor is available by default with every cloud shell instance and is based on the open source editor theia now cloud shell is available from anywhere in the console by merely clicking on the icon shown here in the picture and is positioned in the top right hand corner of the console in the blue toolbar so let’s get started with the cloud shell by getting our hands dirty and jumping right into it and so here we are back in the console and i am logged in as tony bowtie ace gmail.com and as you can see up here in the right hand corner as mentioned earlier you will find the cloud shell logo and so to open it up you simply click on it and it’ll activate the cloud shell here at the bottom and because it’s my first time using cloud shell i’ll get this prompt quickly explaining an overview of what cloud shell is and i’m going to simply hit continue and i’m going to make the terminal a little bit bigger by dragging this line up to the middle of the screen and so when you start cloud shell it provisions an e2 small google compute engine instance running a debian-based linux operating system now this is an ephemeral pre-configured vm and the environment you work with is a docker container running on that vm cloud shell instances are provisioned on
a per user per session basis the instance persists while your cloud shell session is active and after an hour of inactivity your session terminates and the vm is discarded you can also customize your environment automatically on boot time and it will allow you to have your preferred tools when cloud shell boots up so when your cloud shell instance is provision it’s provisioned with 5 gigabytes of free persistent disk storage and it’s mounted at your home directory on the virtual machine instance and you can check your disk storage by simply typing in the command df minus h and here where it shows dev disk by id google home part one it shows here the size as 4.8 gigabytes and this would be the persistent disk storage that’s mounted on your home directory now if you’ve noticed it shows here that i’m logged in as tony bowtie ace at cloud shell and that my project id is set at project tony so the great thing about cloud shell is that you’re automatically authenticated as the google account you’re logged in with so here you can see i’m logged in as tony bowtie ace and so picture it like running gcloud auth login and specifying your google account but without having to actually do it now when the cloud shell is started the active project in the console is propagated to your gcloud configuration inside cloud shell so as you can see here my project is set at project tony now if i wanted to change it to a different project i could simply use the command stated up here gcloud config set project along with the project id and this will change me to a different project now behind the scenes cloud shell is globally distributed across multiple regions so when you first connect to cloud shell you’ll be automatically assigned to the closest available region and thus avoiding any unnecessary latency you do not have the option to choose your own region and so cloud shell does that for you by optimizing it to migrate to a closer region whenever it can so if you’re ever curious where your cloud shell session is currently active you can simply type in this command curl metadata slash compute metadata slash version one slash instance slash zone and this will give me the zone where my instance is located and as shown here it is in us east 1b now as you’ve probably been seeing every time i highlight something that there is a picture of scissors coming up the cloud shell has some automated and available tools that are built in and so one of those available tools is that whenever i highlight something it will automatically copy it to the clipboard for me cloud shell also has a bunch of very powerful pre-installed tools that come with it such as the cloud sdk bash vim helm git docker and more as well cloud shell has support for a lot of major different programming languages like java go python node.js ruby and net core for those who run windows now if you’re looking for an available tool that is not pre-installed you can actually customize your environment when your instance boots up and automatically run a script that will install the tool of your choice and the script runs as root and you can install any package that you please and so in order for this environment customization to work there needs to be a file labeled as dot customize underscore environment now if we do an ls here you can see that all we have is the readme dash cloud shell text file if we do ls space minus al to show all the hidden files as well you can see that the dot customize underscore environment file does not exist and this is because we 
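need to create it ourselves and before doing that here’s what the commands i’ve been narrating in this cloud shell section look like when you actually type them out as a quick sketch the project id is a placeholder and when you call the metadata server yourself you need the metadata-flavor header

# check the five gigabyte persistent disk mounted at the home directory
df -h "$HOME"

# point the cloud shell gcloud configuration at a different project if needed
gcloud config set project my-other-project-id

# ask the metadata server which zone this cloud shell session landed in
curl -H "Metadata-Flavor: Google" metadata.google.internal/computeMetadata/v1/instance/zone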
and so for this example i want terraform installed as an available tool when my instance boots up and so i have to create this file so i’m going to do so by using the touch command and then the name of the file dot customize underscore environment hit enter and if i clear the screen and do another ls space minus al i can see that my dot customize underscore environment file has been created and so now i’m going to need the script to install terraform which means i would have to edit it and so another great feature of cloud shell is that it comes with a code editor and i can do it one of two ways i can either come up here and click on the open editor button which will open up a new tab or i can simply use the edit command with the file name and i’m going to do just that so edit dot customize underscore environment and i’m just going to hit enter and as you can see i got a prompt saying that it’s unable to load the code editor and this is because when using the code editor you need cookies enabled on your browser and because i am using a private browser session cookies are disabled and because my cloud shell environment persists i’m going to open up a regular browser window and i’m going to continue where i left off and so here i am back with a new browser window again logged in as tony bowtie ace and so just to show you the persistence that happens in cloud shell i’m going to run the command ls space minus al and as you can see here the customize environment file is still here and so again i wanted to install terraform as an extra tool to have in my environment and so i’m going to open up the editor by typing in edit dot customize underscore environment and i’m going to hit enter and here is the editor that popped up as you can see here it’s built with eclipse theia which is an open source code editor that you can download from eclipse and this is what the editor is built on now this menu here on the left i can make it a little bit bigger and because the only viewable file on my persistent disk is the readme cloud shell dot text file i’m not able to see my dot customize underscore environment so in order to open it and edit it i’m going to go to the menu at the top of the editor and click on file open and here i’ll be able to select the file that i need so i’m going to select customize environment and click on open and so now i’m going to paste in my script to install terraform from my clipboard and i’ll be including the script in the github repo for those of you who use terraform and i’m going to move over to the menu on the left click on file and then hit save and so now in order for me to allow this to work the customize environment needs to be loaded into my cloud shell so i’m going to have to restart it and so in order to accomplish this i’m going to move over to the menu on the right i’m going to click on the icon with the three dots and click on restart and you’ll be presented with a prompt saying that it will immediately terminate my session and then a new vm will be provisioned for me and you’ll also be presented with an optional prompt from google asking why you’re restarting the vm and this is merely for statistical purposes so i’m going to click on restart and i’m going to wait till a new cloud shell is provisioned and my new cloud shell is provisioned and up and running and so i want to double check to see if terraform has been installed
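and just so you have a concrete picture of what a script like that could contain here is one possible purely illustrative version it is not necessarily the exact script from the course repo and the terraform version number and download url simply follow hashicorp’s public release naming so adjust them to whatever release you actually want which is exactly what i end up doing in a moment

#!/bin/sh
# .customize_environment runs as root at boot so no sudo is needed
apt-get update
apt-get install -y unzip
curl -sSL -o /tmp/terraform.zip \
  https://releases.hashicorp.com/terraform/0.12.29/terraform_0.12.29_linux_amd64.zip
unzip -o /tmp/terraform.zip -d /usr/local/bin

so to run that check i’m going to go over here to the open terminal button on the right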
hand side toolbar and i’m going to move back to my terminal and i’m going to simply run the command terraform dash dash version and so it looks like terraform has been installed and as you can see i’m running version.12 but it says my terraform version is out of date and that the latest version is dot 13. and so because i really want to be up to date with terraform i want to be able to go into my customize environment file and edit my version of terraform so that when my cloud shell is initiated terraform.13 can be installed and so i’m going to simply type in the command edit dot customize underscore environment and i’m back to my editor and i’m going to change the terraform version from dot 12 to dot 13 and then go over here to the left-hand menu click on file and then save and now i’m going to restart my machine again and come back when it’s fully provisioned and i’m back again my machine has been provisioned and i’m going to go back to my terminal by clicking on the open terminal button and so i’m going to type in the command terraform dash dash version and as you can see i’m at version dot 13 and i’m going to run a simple terraform command to see if it’s working and as you can see i am successful in running terraform on cloud shell now customizing the environment is not on the exam but it is such an amazing feature that i wanted to highlight it for you with a real world example like terraform in case you’re away from your computer and you’re logged into a browser and you need some special tools to use in cloud shell this is the best way to do it now as i mentioned before the cloud sdk is pre-installed on this and so everything that i’ve showed you in the last lesson with regards to cloud sdk can be done in the cloud shell as well so if i run the command gcloud beta interactive i’d be able to bring up the interactive cloud shell and i’ll be able to run the same commands so now if i go ahead and run the command gcloud components list i’ll be able to see all the components installed and as you can see with the cloud shell there are more components installed than what’s installed on the default installation of the sdk i can also run the gcloud config list command to see all the properties in my active configuration and so this goes to show you that the sdk installation that’s on cloud shell is just as capable as the one that you’ve installed on your computer the only difference here is that the sdk along with all the other tools that come installed in cloud shell is updated every week and so you can always depend that they’re up to date and so moving on to a few more features of cloud shell i wanted to point out the obvious ones up here in the cloud shell toolbar right beside the open terminal i can open brand new tabs opening up different projects or even the same project but just a different terminal and moving over to the right hand menu of cloud shell this keyboard icon can send key combinations that you would normally not have access to moving on to the gear icon with this you’re able to change your preferences and looking at the first item on the list when it comes to color themes you can go from a dark theme to a light theme or if you prefer a different color in my case i prefer the dark theme as well you have the options of changing your text size we can go to largest but i think we’ll just keep things back down to medium and as well we have the different fonts the copy settings from which i showed you earlier as well as keyboard preferences you also have the option of showing your 
scroll bar now moving on to this icon right beside the gear is the web preview button and so the web preview button is designed so that you can run any web application that listens to http requests on the cloud shell and be able to view it in a new web browser tab when running these web applications web preview also supports applications run in app engine now mind you these ports are only available to the secure cloud shell proxy service which restricts access over https to your user account only and so to demonstrate this feature i am going to run a simple http server running a hello world page so first i’m going to clear my screen and then i’m going to exit the interactive shell and again i’m going to paste in for my clipboard a simple script that will run my simple http server and as you can see it’s running on port 8080 and now i’m able to click on the web preview button and i’m able to preview it on port 8080 and a new web browser tab will open up and here i’ll see my hello world page now this is just a simple example and so i’m sure that many of you can find great use for this and so i’m going to stop this http server now by hitting ctrl c and just as a quick note web preview can also run on a different port anywhere from port 2000 all the way up to 65 000. now moving on to the rest of the features hitting on the more button here with the three dots starting from the top we covered restart earlier when we had to restart our cloud shell you’re able to both upload and download a file within cloud shell when the demands are needed as well if i have a misconfigured configuration i can boot into safe mode and fix the issue instead of having to start from scratch again moving on to boost cloud shell also known as boost mode is a feature that increases your cloud shell vm from the default e2 small to an e2 medium so in essence a memory bump from 2 gigabytes to 4 gigabytes and once it’s activated all your sessions will be boosted for the next 24 hours and just as a quick note enabling boost mode restarts your cloud shell and immediately terminates your session but don’t worry the data in your home directory will persist but any of the processes that you are running will be lost now when it comes to usage quota cloud shell has a 50 hour weekly usage limit so if you reach your usage limit you’ll need to wait until your quota is reset before you can use cloud shell again so it’s always good to keep your eyes on this in case you’re a heavy user of cloud shell and moving back to the menu again you have your usage statistics which collects statistics on commands that come pre-installed in the vm and you can turn them on or off and as well help for cloud shell is available here as well if you wanted to give feedback to the google cloud team with regards to cloud shell this is the place to do it and so one last thing about cloud shell before we end this demo is that if you do not access cloud shell for 120 days your home disk will be deleted now don’t worry you’ll receive an email notification before its deletion and if you just log in and start up a session you’ll prevent it being removed now moving ahead in this course i will be using cloud shell quite a bit and so feel free to use either cloud shell or the cloud sdk installed on your computer or feel free to follow along with me in the cloud shell within your google cloud environment and so if you are following along please make sure that you keep an eye on your quota and so i hope this demonstration has given you some really good insight as to 
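if you want to try the web preview demo yourself, the simple http server can be as small as the sketch below; this is an assumed stand-in for the script pasted in the video, using python 3 which comes pre-installed in cloud shell

# serve a placeholder hello world page on port 8080, then click web preview > preview on port 8080
mkdir -p ~/hello && cd ~/hello
echo "<h1>Hello World</h1>" > index.html
python3 -m http.server 8080
# press ctrl+c to stop the server when you are done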
what you can do with cloud shell and its limitations and so that’s pretty much all i wanted to cover in this demonstration of cloud shell so you can now mark this as complete and let’s move on to the next one [Music] welcome back in this lesson and demonstration i am going to go over limits and quotas and how they affect your cloud usage within google cloud i’m going to quickly go over some theory followed by a demonstration on where to find the quotas and how to edit them accordingly so google cloud enforces quotas on resource usage for project owners setting a hard limit on how much of a particular google cloud resource your project can use and so there are two types of resource usage that google limits with quota the first one is rate quota such as api requests per day this quota resets after a specified time such as a minute or a day the second one is allocation quota an example is the number of virtual machines or load balancers used by your project and this quota does not reset over time but must be explicitly released when you no longer want to use the resource for example by deleting a gke cluster now quotas are enforced for a variety of reasons for example they protect other google cloud users by preventing unforeseen usage spikes quotas also help with resource management so you can set your own limits on service usage within your quota while developing and testing your applications each quota limit is expressed in terms of a particular countable resource from requests per day to an api to the number of load balancers used by your application not all projects have the same quotas for the same services and so using this free trial account you may have very limited quota compared to a higher quota on a regular account as well with your use of google cloud over time your quotas may increase accordingly and so you can also request more quota if you need it and set up monitoring and alerts and cloud monitoring to warn you about unusual quota usage behavior or when you’re actually running out of quota now in addition to viewing basic quota information in the console google cloud lets you monitor quota usage limits and errors in greater depth using the cloud monitoring api and ui along with quota metrics appearing in the metrics explorer you can then use these metrics to create custom dashboards and alerts letting you monitor quota usage over time and receive alerts when for example you’re near a quota limit only your services that support quota metrics are displayed and so popular supported services include compute engine data flow cloud spanner cloud monitoring and cloud logging common services that are not supported include app engine cloud storage and cloud sql now as a note be aware that quota limits are updated once a day and hence new limits may take up to 24 hours to be reflected in the google cloud console if your project exceeds a particular quota while using a service the platform will return an error in general google cloud will return an http 429 error code if you’re using http or rest to access the service or resource exhausted if you’re using grpc if you’re using cloud monitoring you can use it to identify the quota associated with the error and then create custom alerts upon getting a quota error and we will be going into greater depth with regards to monitoring later on in the course now there are two ways to view your current quota limits in the google cloud console the first is using the quotas page which gives you a list of all of your project’s quota usage and limits 
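as a side note, you can also eyeball quota usage from the command line; the sketch below is one way to do it for compute engine quotas, with the project id being a placeholder

# project-wide compute engine quotas with their limits and current usage
gcloud compute project-info describe --project=my-project-id \
  --flatten="quotas[]" \
  --format="table(quotas.metric, quotas.limit, quotas.usage)"
# regional quotas can be listed the same way
gcloud compute regions describe us-central1 --project=my-project-id \
  --flatten="quotas[]" \
  --format="table(quotas.metric, quotas.limit, quotas.usage)"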
the second is using the api dashboard which gives you the quota information for a particular api including resource usage over time quota limits are also accessible programmatically through the service usage api and so let’s head into the console where i will provide a demonstration on where to look for quotas and how to increase them when you need to and so here we are back in the console and so as i explained before there are two main ways to view your current quota limits in the console and so the first one is using the quotas page and so in order to get to the quotas page i need to go to iam so i’m going to do that now by going up to the navigation menu in the top left hand corner i’m going to go to i am and admin and over to quotas and so here i am shown all the quotas of the current apis that i have enabled as you can see here it shows me the service the limit name the quota status and the details in this panel here on the right hand side shows me a little bit more information with regards to the service and the quota itself and so let’s say i wanted to increase my quota on the compute engine api within networks so i’m going to select this service and over here on the right hand panel i’m going to tick the box that says global and i’m going to go back over here to the top left and click on the edit quotas button and a panel will pop up and i am prompted to enter a new quota limit along with a description explaining to google why i need this quota limit increase and so once i’ve completed my request i can click on done and then submit request and like i said before once the request has been submitted it will go to somebody at google to evaluate the requests for approval and don’t worry these quota limit increases are usually approved within two business days and can often times be sooner than that also a great way to enter multiple quota changes is to click on the selected apis let’s do bigquery api and cloud data store api and so i’ve clicked off three and now i can go back up to the top and click on the edit quotas button and as you can see in the panel i have all three apis that i want to increase my quotas on so i can enter all my new limit requests for each api and then i can submit it as a bulk request with all my new quota limit changes and so doing it this way would increase the efficiency instead of increasing the quotas for each service one by one and because i’m not going to submit any quota changes i’m going to close this panel and so again using the quotas page will give you a list of all your project quota usage and its limits and allow you to request changes accordingly and so now moving on to the second way which you can view your current quota limits i’m going to go to the api dashboard which will give me a more granular view including the resource usage over time so to get there i’m going to go back up to the left hand side to the navigation menu i’m going to go to apis and services and click on dashboard and here i will see all the names of the apis and i’m going to click on compute engine api for this demonstration and over here on the left hand menu you will see quotas and in here as i said before you can get some really granular data with regards to queries read requests list requests and a whole bunch of other requests i’m going to drill down into queries here and i can see my queries per day per 100 seconds per user and per 100 seconds and i can see here that my queries per 100 seconds is at a limit of 2 000 so if i wanted to increase that limit i can simply 
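and since the service usage api was just mentioned, here is a hedged sketch of the command line and rest equivalents; the project id is a placeholder and the curl call assumes you are already authenticated with gcloud

# the cli counterpart of the api dashboard: which services are enabled on the project
gcloud services list --enabled --project=my-project-id
# the same information over the service usage api
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://serviceusage.googleapis.com/v1/projects/my-project-id/services?filter=state:ENABLED"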
click on the pencil icon and a panel on the right hand side will prompt me to enter a new quota limit but i currently see that my quota limit is at its maximum and that i need to apply for a higher quota so when i click on the link it will bring me back to my iam page where my services are filtered and i can easily find the service that i was looking at to raise my quota limit and i can increase the quota by checking off this box and clicking on the edit quotas button at the top of the page and so as you can see the quotas page as well as the api dashboard work in tandem so that you can get all the information you need with regards to quotas and limits and to edit them accordingly and so i hope this gave you a good idea and some great insight on how you can view and edit your quotas and quota limits according to the resources you use and so that about wraps up this brief yet important demo on limits and quotas so you can now mark this as complete and let’s move on to the next section [Music] welcome back and in this section we’re going to be going through in my opinion one of the most important services in google cloud identity and access management also known as iam for short and i’ll be diving into identities roles and the architecture of policies that will give you a very good understanding of how permissions are granted and how policies are inherited so before i jump into i am i wanted to touch on the principle of least privilege just for a second now the principle of least privilege states that a user program or process should have access to the bare minimum privileges necessary or the exact resources it needs in order to perform its function so for example if lisa is performing a create function to a cloud storage bucket lisa should be restricted to create permissions only on exactly one cloud storage bucket she doesn’t need read edit or even delete permissions on a cloud storage bucket to perform her job and so this is a great illustration of how this principle works and this is something that happens in not only google cloud but in every cloud environment as well as any on-premises environment so note that the principle of least privilege is something that i have previously and will continue to be talking about a lot in this course and this is a key term that comes up quite a bit in any major exam and is a rule that most apply in their working environment to avoid any unnecessary granted permissions a well-known and unsaid rule when it comes to security hence me wanting to touch on this for a brief moment so now with that out of the way i’d like to move on to identity and access management or i am for short so what is it really well with iam you manage access control by defining who the identity has what access which is the role for which resource and this also includes organizations folders and projects in iam permission to access a resource isn’t granted directly to the end user instead permissions are grouped into roles and roles are then granted to authenticated members an iam policy defines and enforces what roles are granted to which members and this policy is attached to a resource so when an authenticated member attempts to access a resource iam checks the resources policy to determine whether the action is permitted and so with that being said i want to dive into the policy architecture breaking it down by means of components in this policy architecture will give you a better understanding of how policies are put together so now what is a policy a policy is a collection of 
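to make the lisa example concrete, here is a minimal sketch of least privilege in practice; the email address and bucket name are placeholders, and roles/storage.objectCreator is a predefined role that only allows creating objects

# grant create-only access on exactly one bucket instead of a broad project-wide role
gsutil iam ch user:lisa@example.com:roles/storage.objectCreator gs://example-designs-bucket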
bindings audit configuration and metadata now the binding specifies how access should be granted on resources and it binds one or more members with a single role and any contact specific conditions that change how and when the role is granted now the metadata includes additional information about the policy such as an etag and version to facilitate policy management and finally the audit config field specifies the configuration data of how access attempts should be audited and so now i wanted to take a moment to dive deeper into each component starting with member now when it comes to members this is an identity that can access a resource so the identity of a member is an email address associated with a user service account or google group or even a domain name associated with a g suite or cloud identity domains now when it comes to a google account this represents any person who interacts with google cloud any email address that is associated with a google account can be an identity including gmail.com or other domains now a service account is an account that belongs to your application instead of an individual end user so when you run your code that is hosted on gcp this is the identity you would specify to run your code a google group is a named collection of google accounts and can also include service accounts now the advantages of using google groups is that you can grant and change permissions for the collection of accounts all at once instead of changing access one by one google groups can help you manage users at scale and each member of a google group inherits the iam roles granted to that group the inheritance means that you can use a group’s membership to manage users roles instead of granting iam roles to individual users moving on to g suite domains this represents your organization’s internet domain name such as antonyt.com and when you add a user to your g suite domain a new google account is created for the user inside this virtual group such as antony antonyt.com a g suite domain in actuality represents a virtual group of all of the google accounts that have been created like google groups g suite domains cannot be used to establish identity but they simply enable permission management now a cloud identity domain is like a g suite domain but the difference is that domain users don’t have access to g suite applications and features so a couple more members that i wanted to address is the all authenticated users and the all users members the all authenticated users is a special identifier that represents anyone who is authenticated with a google account or a service account users who are not authenticated such as anonymous visitors are not included and finally the all users member is a special identifier that represents anyone and everyone so any user who is on the internet including authenticated and unauthenticated users and this covers the slew of the different types of members now touching on the next component of policies is roles now diving into roles this is a named collection of permissions that grant access to perform actions on google cloud resources so at the heart of it permissions are what determines what operations are allowed on a resource they usually but not always correspond one-to-one with rest methods that is each google cloud service has an associated permission for each rest method that it has so to call a method the caller needs that permission now these permissions are not granted to the users directly but grouped together within the role you would 
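as a quick reference for the member types just described, this is roughly how each one is written when you bind a role from the command line; the project id, addresses and domain are placeholders

# the same role bound to each kind of member, note the member prefixes
gcloud projects add-iam-policy-binding my-project-id --member="user:lark@example.com" --role="roles/storage.objectViewer"
gcloud projects add-iam-policy-binding my-project-id --member="serviceAccount:my-app@my-project-id.iam.gserviceaccount.com" --role="roles/storage.objectViewer"
gcloud projects add-iam-policy-binding my-project-id --member="group:designers@example.com" --role="roles/storage.objectViewer"
gcloud projects add-iam-policy-binding my-project-id --member="domain:example.com" --role="roles/storage.objectViewer"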
then grant roles which contain one or more permissions you can also create a custom role by combining one or more of the available iam permissions and again permissions allow users to perform specific actions on google cloud resources so you will typically see a permission such as the one you see here compute.instances.list and within google cloud iam permissions are represented in this form service.resource.verb so just as a recap on roles this is a collection of permissions and you cannot grant a permission directly to the user but you grant a role to a user and all the permissions that the role contains so an example is shown here where the compute instances permissions are grouped together in a role now you can grant permissions by granting roles to a user a group or a service account so moving up into a more broader level there are three types of roles in iam there are the primitive roles the predefined roles and the custom roles with the primitive roles these are roles that existed prior to the introduction of iam and they consist of three specific roles owner editor and viewer and these roles are concentric which means that the owner role includes the permissions in the editor role and the editor role includes the permissions in the viewer role and you can apply primitive roles at the project or service resource levels by using the console the api and the gcloud tool just as a note you cannot grant the owner role to a member for a project using the iam api or the gcloud command line tool you can only add owners to a project using the cloud console as well google recommends avoiding these roles if possible due to the nature of how much access the permissions are given in these specific roles google recommends that you use pre-defined roles over primitive roles and so moving into predefined roles these are roles that give granular and finer-grained access control than the primitive roles to specific google cloud resources and prevent any unwanted access to other resources predefined roles are created and maintained by google their permissions are automatically updated as necessary when new features or services are added to google cloud now when it comes to custom roles these are user defined and allow you to bundle one or more supported permissions to meet your specific needs unlike predefined roles custom roles are not maintained by google so when new permissions features or services are added to google cloud your custom roles will not be updated automatically when you create a custom role you must choose an organization or project to create it in you can then grant the custom role on the organization or project as well as any resources within that organization or project and just as a note you cannot create custom roles at the folder level if you need to use a custom role within a folder define the custom role on the parent of that folder as well the custom roles user interface is only available to users who have permissions to create or manage custom roles by default only project owners can create new roles now there is one limitation that i wanted to point out and that is that some predefined roles contain permissions that are not permitted in custom roles so i highly recommend that you check whether you can use a specific permission when making a custom role custom roles also have a really cool feature that includes a launch stage which is stored in the stage property for the role the stage is informational and helps you keep track of how close each role is to being generally 
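for reference, a custom role like the ones described here can also be created from the command line; the role id, project and chosen permissions below are illustrative assumptions

# bundle a small set of supported permissions into a user-defined role at the project level
gcloud iam roles create customComputeViewer \
  --project=my-project-id \
  --title="Custom Compute Viewer" \
  --description="List and read compute instances only" \
  --permissions=compute.instances.list,compute.instances.get \
  --stage=GA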
available and these launch stages are shown here alpha which is in testing beta which is tested and awaiting approval and of course ga which is generally available and i'll be getting hands-on later with these roles in an upcoming demonstration so now moving on to the next component is conditions and so a condition is a logical expression and is used to define and enforce conditional attribute-based access control for google cloud resources conditions allow you to choose to grant resource access to identities also known as members only if configured conditions are met for example this could be done to configure temporary access for users that are contractors and have been given specific access for a certain amount of time a condition could be put in place to remove the access they needed once the contract has ended conditions are specified in the role bindings of a resource's iam policy so when a condition exists the access request is only granted if the condition expression evaluates to true so now moving on to metadata this component carries both etags and version so first touching on etags when multiple systems try to write to the same iam policy at the same time there is a risk that those systems might overwrite each other's changes and the risk exists because updating an iam policy involves multiple operations so in order to help prevent this issue iam supports concurrency control through the use of an etag field in the policy the value of this field changes each time a policy is updated now when it comes to the version this is a version number that indicates which schema features such as conditions the policy uses and for future releases of new features it is also used to avoid breaking your existing integrations that rely on consistency in the policy structure when new policy schema versions are introduced and lastly we have the auditconfig component and this is used in order to configure audit logging for the policy it determines which permission types are logged and what identities if any are exempted from logging and so to sum it up this is a policy in all its entirety each component as you can see plays a different part and i will be going through policies and how they are assembled in statements in a later lesson and so there is one more thing that i wanted to touch on before ending this lesson and that is policy inheritance when it comes to the resource hierarchy and so as explained in an earlier lesson you can set an iam policy at any level in the resource hierarchy the organization level the folder level the project level or the resource level and resources inherit the policies of all their parent resources the effective policy for a resource is the union of the policy set on that resource and the policies inherited from higher up in the hierarchy and so again i wanted to reiterate that this policy inheritance is transitive in other words resources inherit policies from the project which inherits policies from folders which inherit policies from the organization therefore the organization level policies also apply at the resource level and so just a quick example if i apply a policy on project x then for any resources within that project the effective policy is going to be a union of these policies as the resources will inherit the policy that is granted to project x so i hope this gave you a better understanding of how policies are granted as well as their core structure and so that's all i have for this lesson so you can now mark this lesson as complete and let's
move on to the next one [Music] welcome back and in this lesson i wanted to build on the last lesson where we went through iam and policy architecture and dive deeper into policies and conditions when it comes to putting them together in policy statements as cloud engineers you should be able to read and decipher policy statements and understand how they’re put together by using all the components that we discussed earlier so just as a refresher i wanted to go over the policy architecture again now as i discussed previously a policy is a collection of statements that define who has what type of access it is attached to a resource and is used to enforce access control whenever that resource is accessed now the binding within that policy binds one or more members with a single role and any context specific conditions so in other words the member roles and conditions are bound together using a binding combined with the metadata and audit config we have a policy so now taking all of this and putting it together in a policy statement shown here you can see the bindings which have the role the members and conditions the first member being tony beauties gmail.com holding the role of storage admin and the second member as larkfetterlogin at gmail.com holding the role of storage object viewer now because lark only needs to view the files for this project in cloud storage till the new year a condition has been applied that does not grant access for lark to view these files after january the 1st an e tag has been put in and the version is numbered 3 due to the condition which i will get into a little bit later this policy statement has been structured in json format and is a common format used in policy statements moving on we have the exact same policy statement but has been formatted in yaml as you can see the members roles and conditions in the bindings are exactly the same as well as the etag and version but due to the formatting it is much more condensed so as you can see policy statements can be written in both json or yaml depending on your preference my personal preference is to write my policy statements in yaml due to the shorter and cleaner format so i will be moving ahead in this course with more statements written in yaml when you are looking to query your projects for its granted policies an easy way to do this would be to query it from the command line as shown here here i’ve taken a screenshot from tony bowtie ace in the cloud shell and have used the command gcloud projects get dash iam policy with the project id and this brought up all the members and roles within the bindings as well as the etag and version for the policy that has been attached to this project and as you can see here i have no conditions in place for any of my bindings and so again using the command gcloud projects get dash iam dash policy along with the project id will bring up any policies that are attached to this resource and the resource being the project id if the resource were to be the folder id then you could use the command gcloud resource dash manager folders get dash iam-policy with the folder id and for organizations the command would be gcloud organizations get dash iam-policy along with the organization id now because we don’t have any folders or organizations in our environment typing these commands in wouldn’t bring up anything and just as a note using these commands in the cloud shell or in the sdk will bring up the policy statement formatted in yaml so now i wanted to just take a second to dive 
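written out, those query commands look like the sketch below; the ids are placeholders, and the output is yaml by default unless you ask for json

gcloud projects get-iam-policy my-project-id
gcloud resource-manager folders get-iam-policy 123456789012
gcloud organizations get-iam-policy 123456789012
# append --format=json if you prefer the json layout shown earlier
gcloud projects get-iam-policy my-project-id --format=json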
into policy versions now as i haven’t covered versions in detail i wanted to quickly go over it and the reasons for each numbered version now version one of the i am syntax schema for policies supports binding one role to one or more members it does not support conditional role bindings and so usually with version 1 you will not see any conditions version 2 is used for google’s internal use and so querying policies usually you will not see a version 2. and finally with version 3 this introduces the condition field in the role binding which constrains the role binding via contact space and attributes based rules so just as a note if your request does not specify a policy version iam will assume that you want a version 1 policy and again if the policy does not contain any conditions then iam always returns a version one policy regardless of the version number in the request so moving on to some policy limitations each resource can only have one policy and this includes organizations folders and projects another limitation is that each iam policy can contain up to 1500 members and up to 250 of these members can be google groups now when making policy changes it will take up to seven minutes to fully propagate across the google cloud platform this does not happen instantaneously as iam is global as well there is a limit of 100 conditional role bindings per policy now getting a little bit deeper into conditions these are attributes that are either based on resource or based on details about the request and this could vary from time stamp to originating or destination ip address now as you probably heard me use the term earlier conditional role bindings are another name for a policy that holds a condition within the binding conditional role bindings can be added to new or existing iam policies to further control access to google cloud resources so when it comes to resource attributes this would enable you to create conditions that evaluate the resource in the access request including the resource type the resource name and the google cloud service being used request attributes allow you to manage access based on days or hours of the week a conditional role binding can be used to grant time bounded access to a resource ensuring that a user can no longer access that resource after the specified expiry date and time and this sets temporary access to google cloud resources using conditional role bindings in iam policies by using the date time attributes shown here you can enforce time-based controls when accessing a given resource now showing another example of a time-based condition it is possible to get even more granular and scope the geographic region along with the day and time for access in this policy lark only has access during business hours to view any objects within cloud storage lark can only access these objects from monday to friday nine to five this policy can also be used as a great example for contractors coming into your business yet only needing access during business hours now an example of a resource-based condition shown here a group member has a condition tied to it where dev only access has been implemented any developers that are part of this group will only have access to vm resources within project cat bowties and tied to any resources that’s name starts with the word development now some limitations when it comes to conditions is that conditions are limited to specific services primitive roles are unsupported and members cannot be of the all users or all authenticated 
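here is a hedged sketch of what these conditional role bindings look like on the command line; the member, project, dates, timezone and resource name are placeholders, and the expressions follow the documented request.time and resource.name condition attributes

# time-bound access: the binding stops granting access after january 1st
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:lark@example.com" \
  --role="roles/storage.objectViewer" \
  --condition='expression=request.time < timestamp("2022-01-01T00:00:00Z"),title=expires-jan-1'
# a business-hours style expression, monday to friday nine to five, for use in a condition:
#   request.time.getDayOfWeek("America/Toronto") >= 1 && request.time.getDayOfWeek("America/Toronto") <= 5 &&
#   request.time.getHours("America/Toronto") >= 9 && request.time.getHours("America/Toronto") < 17
# a resource-based expression that only matches resources whose name starts with development:
#   resource.name.startsWith("projects/my-project-id/zones/us-central1-a/instances/development")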
users members conditions also hold a limit of 100 conditional role bindings per policy as well as 20 role bindings for the same role and same member and so for the last part of the policy statements i wanted to touch on audit config logs and this specifies the audit configuration for a service the configuration determines which permission types are logged and what identities if any are exempted from logging and when specifying audit configs they must have one or more audit log configs now as shown here this policy enables data read data write and admin read logging on all services while exempting tony bowtie ace gmail.com from admin read logging on cloud storage and so that’s pretty much all i wanted to cover in this lesson on policies policy statements and conditions and so i highly recommend as you come across more policy statements take the time to read through it and get to know exactly what the statement is referring to and what type of permissions that are given and this will help you not only in the exam but will also help you in reading and writing policy statements in future and so that’s all i have for this lesson so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this demonstration i’m going to do a hands-on tour working with iam here in the google cloud console we’re going to go through the available services in the iam console as well as touching on the command line in the cloud shell to show how policies can be both added and edited we’re also going to be bringing in another new user to really bring this demo to life and to show you how to edit existing policies so with that being said let’s dive in so if i go over here to my user icon in the top right hand corner i can see that i am logged in as tony bowtie ace gmail.com and as you can see at the top i’m here in project tony so now to get to iam i’m going to go over to the navigation menu and i’m going to go to i am in admin and over to iam now moving over here to the menu on the left i wanted to go through the different options that we have in iam so under iam itself this is where you would add or edit permissions with regards to members and roles for the policy added to your given project which in my case is project tony and i’ll be coming back in just a bit to go greater in depth with regards to adding and editing the policy permissions moving on to identity and organization now although we haven’t touched on cloud identity yet i will be covering this in high level detail in a different lesson but for now know that cloud identity is google cloud’s identity as a service solution and it allows you to create and manage users and groups within google cloud now if i was signed into cloud identity i would have a whole bunch of options here but since this is a personal account i cannot create or manage any users as well i do not have a domain tied to any cloud identity account as well as any g suite account so just know that if you had cloud identity or g suite set up you would have a bunch of different options to choose from in order to help you manage your users and groups and here under organization policies i’m able to manage organization policies but since i am not an organization policy administrator and i don’t have an organization there’s not much that i can do here just know that when you have an organization set up you are able to come here in order to manage and edit your organization policies now moving under quotas we went over this in a little bit of detail in 
a previous lesson and again this is to edit any quotas for any of your services in case you need a limit increase moving on to service accounts i will be covering this topic in great depth in a later lesson and we’ll be going through a hands-on demonstration as well now i know i haven’t touched much on labels as of yet but know that labels are a key value pair that helps you organize and then filter your resources based on their labels these same labels are also forwarded to your billing system so you can then break down your billing charges by label and you can also use labels based on teams cost centers components and even environments so for example if i wanted to label my virtual machines by environment i can simply use environment as the key and as the value i can use anything from development to qa to testing to production and i could simply add this label and add all the different environments and later i’d be able to query based on these specific labels now a good rule of thumb is to label all of your resources so that this way you’re able to find them a lot easier and you’re able to query them a lot easier so moving forward with any of your resources that you are creating be sure to add some labels to give you maximum flexibility so i’m going to discard these changes and we’re going to move on to settings and we touched on settings in an earlier lesson with regards to projects and so here i could change the project name it’ll give me the project id the project number and i’m able to migrate or shut down the project now when it comes to access transparency this provides you with logs that capture the actions that google personnel take when they’re accessing your content for troubleshooting so they’re like cloud audit logs but for google support now in order to enable access transparency for your google cloud organization your google cloud account must have a premium support plan or a minimum level of a 400 a month support plan and because i don’t have this i wouldn’t be able to enable access transparency now although access transparency is not on the exam this is a great feature to know about in case you are working in any bigger environments that have these support plans and compliance is of the utmost importance now moving into privacy and security this is where google supplies all of their clients of google cloud the compliance that they need in order to meet regulations across the world and across various industries such as health care and education and because google has a broad base in europe google provides capabilities and contractual commitments created to meet data protection recommendations which is why you can see here eu model contract clauses and eu representative contacts as well under transparency and control i’m able to disable the usage data that google collects in order to provide better data insights and recommendations and this is done at the project level and as well i have the option of going over to my billing account and i could select a different billing account that’s linked to some other projects that you can get recommendations on and so continuing forward identity aware proxy is something that i will be covering in a later lesson and so i won’t be getting into any detail about that right now and so what i really wanted to dig into is roles now this may look familiar as i touched on this very briefly in a previous lesson and here’s where i can create roles i can create some custom roles from different selections and here i have access to all the 
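picking up on the labels idea, this is roughly how you attach labels and then filter by them from the command line; the instance name, zone and label values are placeholders

# attach labels when creating an instance
gcloud compute instances create bowtie-web-1 --zone=us-central1-a --labels=environment=development,team=design
# add or change labels on an existing instance
gcloud compute instances add-labels bowtie-web-1 --zone=us-central1-a --labels=environment=qa
# query resources by label later on
gcloud compute instances list --filter="labels.environment=development"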
permissions and if i wanted to i can filter down from the different types the names the permissions even the status so let's say i was looking for a specific permission and i'm looking at all the permissions for projects this could help me find exactly what it is that i'm looking for and these filters allow me to get really granular so i can find the exact permission and so you can get really granular with regards to your permissions and create roles that are custom to your environment now moving on to audit logs here i can enable the audit logs without having to use a specific policy by simply clicking on default audit config and here i can turn on and off all the selected logging as well as add any exempted users now i don't recommend that you turn these on as audit logging can create an extremely large amount of data and can quickly blow through all of your 300 dollar credit so i'm going to keep that off move back to the main screen of the audit logs and as well here i'm able to get really granular about what i want to log now quickly touching on audit logs in the command line i wanted to quickly open up cloud shell and show you an example of how i can edit the policy in order to enable audit logging i'm just going to make this a little bit bigger and i'm going to paste in my command gcloud projects get dash iam dash policy with the project id which is project tony 286016 and i'm gonna just hit enter and as you can see here this is my current policy and as expected audit logs are not enabled due to the fact that the audit config field is not present so in order for me to enable the audit config logs i'm going to have to edit the policy and so the easiest way for me to do that is to run the same command and output it to a file where i can edit it and i'm going to call this new dash policy dot yaml and so now that my policy has been outputted to this file i'm going to now go into the editor and as you can see my new policy.yaml is right here and so for me to enable the audit config logs i'm going to simply append them to the file and then i'm going to go over here to the top menu and click on file and save and so now for me to apply this new policy i'm going to go back over to the terminal and now i'm going to paste in the command gcloud projects set dash iam-policy with the project id and the file name new dash policy dot yaml and i'm just going to hit enter and as you can see the audit log configs have been enabled for all services and because this may take some time to reflect in the console it will not show up right away but either way audit logs usually take up a lot of data and i don't want to blow through my 300 dollar credit and so i'm going to disable them now the easiest way for me to do this is to output this policy to another file edit it and set it again and so i'm going to go ahead and do that i'm going to first clear the screen and then i'm going to paste in my command while outputting it to a new file called updated dash policy dot yaml and i'm gonna hit enter and now i'm gonna go into the editor so i can edit the file now the one thing i wanted to point out is that i could have overwritten the file new dash policy but if you look here in the updated policy the etag is different from the etag in the old policy and so this allows me to highlight etags when it comes to editing and creating new policies and so when editing policies make sure that the etag is correct otherwise you will receive an error and not be able to set the new policy so going back to the updated policy file i'm
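for reference, the block appended to new-policy.yaml in this demo looks roughly like the sketch below; the exact file in the video may differ slightly and the project id is a placeholder

# append an audit config that enables all three log types for all services
cat >> new-policy.yaml <<'EOF'
auditConfigs:
- auditLogConfigs:
  - logType: ADMIN_READ
  - logType: DATA_READ
  - logType: DATA_WRITE
  service: allServices
EOF
# apply it, keeping the etag that came with the exported policy
gcloud projects set-iam-policy my-project-id new-policy.yaml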
going to take out the audit log configs and i’m going to leave the auto configs field there and i’m going to go to the menu click on file and then save now i’m going to go back to the terminal and i’m going to paste in the new command and this will update my policy and as you can see the audit config logs have been disabled and the policy has been updated now this is the same process that you can use when you want to update any parts of the policy when it comes to your members or roles and even adding any conditions so now moving on to the last item on the menu is groups and as you can see here because i do not have an organization i’m not able to view any groups and so if i did have an organization i could manage my groups right here in this page now moving back over to iam i wanted to dig into policies in a little bit of further detail now what we see here are the permissions and roles that have been granted to selected members in this specific project which is project tony now remember an im policy is a total collection of members that have roles granted to them in what’s known as a binding and then the binding is applied to that layer and all other layers underneath it and since i’m at the project layer this policy is inherited by all the resources underneath it and so just to verify through the command line i’m going to open up cloud shell and i’m going to paste in the command gcloud projects get dash iam-policy with my project id and i’m going to hit enter and as you can see here the policy is a reflection of exactly what you see here in the console so as you can see here here’s the service agent which you will find here and the other two service accounts which you will find above as well as tony bowtie ace gmail.com and all the other roles that accompany those members so as i mentioned earlier i’ve gone ahead and created a new user and so for those who are following along you can go ahead and feel free to create a new gmail user now going ahead with this demonstration the user i created is named laura delightful now tony needed an extra hand and decided to bring her onto the team from another department now unfortunately in order for laura to help tony on the project she needs access to this project and as you can see she doesn’t have any access and so we’re going to go ahead and change that and give her access to this project so i’m going to go back over to my open tab for tony bowtie ace and we’re gonna go
ahead and give laura permissions and so i’m gonna go ahead and click on this add button at the top of the page and the prompt will ask me to add a new member so i’m gonna add laura in here now and here she is and i’m going to select the role as project viewer i’m not going to add any conditions and i’m simply going to click on save and the policy has been updated and as you can see here laura has been granted the role of project viewer so i’m going to move over to the other open tab where laura’s console is open and i’m going to simply do a refresh and now laura has access to view all the resources within project tony now laura is able to view everything in the project but laura isn’t actually able to do anything and so in order for laura to get things done a big part of her job is going to be creating files with new ideas for the fall winter line of bow ties in 2021 and so because laura holds the project viewer role she is able to see everything in cloud storage but she is unable to create buckets to upload edit or delete any files or even folders and as you can see here there is a folder marked bowtie inc fallwinter 2021 ideas but laura cannot create any new buckets because she doesn’t have the required permissions as well drilling down into this bucket laura is unable to create any folders as explained earlier and the same stands for uploading any files and so i’m going to cancel out of this and so in order to give laura the proper permissions for her to do her job we’re going to give laura the storage admin role and so moving back over to the open console for tony bowtie i’m going to give laura access by using the command line so i’m going to go up to the top right and open up cloud shell and so the command i need to run to give laura the role of storage admin would be the following gcloud projects add dash iam dash policy dash binding with the project id dash dash member user followed by colon and then the user name which is laura delightful gmail.com dash dash role and the role which is storage admin and i’m going to go ahead and hit enter and as you can see it has been executed successfully so if i do a refresh of the web page here i’m going to be able to see the changes reflected in the console and after a refresh you can see here storage admin has been added to the role for laura delightful gmail.com and so if i go over to the open tab where laura has her console open i can simply do a refresh and if i go back to the home page for cloud storage you can see here that laura now has the permissions to create a bucket laura also now has permissions to create new folders create edit and delete new files on top of being able to create new storage buckets and so that about wraps up this demonstration on getting hands-on with iam in both the console and the command line and i also hope that this demo has given you a bit more confidence on working in the shell running the commands needed in order to create new bindings along with editing existing policies and this will get you comfortable for when you need to assign roles to new and existing users that are added to your gcp environment and so you can now mark this lesson as complete and let’s move on to the next one welcome back in this lesson i’m going to take a deep dive into service accounts now service accounts play a powerful part in google cloud and can allow a different approach for application interaction with the resources in google cloud now service accounts being both an identity and a resource can cause some confusion for some 
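written out, the binding command dictated in that demo looks like the sketch below; laura's exact address and the project id are placeholders here

gcloud projects add-iam-policy-binding my-project-id \
  --member="user:laura.delightful@gmail.com" \
  --role="roles/storage.admin"
# confirm the new binding shows up in the policy
gcloud projects get-iam-policy my-project-id \
  --flatten="bindings[].members" \
  --filter="bindings.members:laura" \
  --format="table(bindings.role)"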
and so i really wanted to spend some time breaking it down for better understanding and so i'm first going to start off by explaining what exactly a service account is and so a service account is a special kind of account that is used by an application or a virtual machine instance and not a person an application uses the service account to authenticate between the application and gcp services so that the users aren't directly involved in short it is a special type of google account intended to represent a non-human user that needs to authenticate and be authorized to access data in google apis this way the service account is the identity of the service and the service account's permissions control which resources the service can access and as a note a service account is identified by its email address which is unique to the account now the different service account types come in three different flavors user managed default and google managed service accounts when it comes to the user managed service accounts these are service accounts that you create you're responsible for managing and securing these accounts and by default you can create up to 100 user managed service accounts in a project or you can also request a quota increase in case you need more now when you create a user managed service account in your project it is you that chooses a name for the service account this name appears in the email address that identifies the service account which uses the following format seen here the service account name at the project id dot iam.gserviceaccount.com now moving on to the default service accounts when you use some google cloud services they create user managed service accounts that enable the service to deploy jobs that access other google cloud resources these accounts are known as default service accounts so when it comes to production workloads google strongly recommends that you create your own user managed service accounts and grant the appropriate roles to each service account when a default service account is created it is automatically granted the editor role on your project now following the principle of least privilege google strongly recommends that you disable the automatic role grant by adding a constraint to your organization policy or by revoking the editor role manually the default service account will be assigned an email address following the format you see here project id at appspot.gserviceaccount.com for any service accounts created by app engine and project number dash compute at developer.gserviceaccount.com for compute engine and so lastly when it comes to google managed service accounts these are created and managed by google and they are used by google services the email address of most google managed service accounts ends with a gserviceaccount.com domain now some of these service accounts are visible but others are hidden so for example the google apis service agent is a service account with an email address that uses the following format project number at cloudservices.gserviceaccount.com and this runs internal google processes on your behalf and this is just one example of the many google managed service accounts that run in your environment and just as a warning it is not recommended to change or revoke the roles that are granted to the google apis service agent or to any other google managed service accounts for that matter if you change or revoke these roles some google cloud services will no longer work now when it comes to authentication for service accounts they
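to tie the naming format above to something concrete, here is a sketch of creating a user-managed service account and granting it a narrow role; the names and project id are placeholders

# create a user-managed service account; its email will be
# bowtie-app-sa@my-project-id.iam.gserviceaccount.com
gcloud iam service-accounts create bowtie-app-sa \
  --project=my-project-id \
  --display-name="Bowtie application service account"
# grant it only what it needs, following least privilege
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:bowtie-app-sa@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"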
authenticate using service account keys so each service account is associated with two sets of public and private rsa key pairs that are used to authenticate to google they are the google manage keys and the user manage keys with the google manage keys google stores both the public and private portion of the key rotates them regularly and the private key is always held in escrow and is never directly accessible iam provides apis to use these keys to sign on behalf of the service account now when using user managed key pairs this implies that you own both the public and private portions of a key pair you can create one or more user managed key pairs also known as external keys that can be used from outside of google cloud google only stores the public portion of a user managed key so you are responsible for the security of the private key as well as the key rotation private keys cannot be retrieved by google so if you’re using a user manage key please be aware that if you lose your key your service account will effectively stop working google recommends storing these keys in cloud kms for better security and better management user managed keys are extremely powerful credentials and they can represent a security risk if they are not managed correctly and as you can see here a user managed key has many different areas that need to be addressed when it comes to key management now when it comes to service account permissions in addition to being an identity a service account is a resource which has im policies attached to it and these policies determine who can use the service account so for instance lark can have the editor role on a service account and laura can have a viewer role on a service account so this is just like granting roles for any other google cloud resource just as a note the default compute engine and app engine service accounts are granted editor roles on the project when they are created so that the code executing in your app or vm instance has the necessary permissions now you can grant the service account user role at both the project level for all service accounts in the project or at the service account level now granting the service account user role to a user for a project gives the user access to all service accounts in the project including service accounts that may be created in the future granting the service account user role to a user for a specific service account gives a user access to only that service account so please be aware when granting the service account user role to any member now users who are granted the service account user role on a service account can use it to indirectly access all the resources to which the service account has access when this happens the user impersonates the service account to perform any tasks using its granted roles and permissions and is known as service account impersonation now when it comes to service account permissions there is also another method use called access scopes service account scopes are the legacy method of specifying permissions for your instance and they are used in substitution of iam roles these are used specifically for default or automatically created service accounts based on enabled apis now before the existence of iam roles access scopes were the only way for granting permissions to service accounts and although they are not the primary way of granting permissions now you must still set service account scopes when configuring an instance to run as a service account however when you are using a 
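for the user-managed key discussion, these are the kinds of commands involved; the service account email is a placeholder, and remember that the downloaded private key is yours to secure and rotate

# create a user-managed external key, google keeps only the public half
gcloud iam service-accounts keys create ./bowtie-app-sa-key.json \
  --iam-account=bowtie-app-sa@my-project-id.iam.gserviceaccount.com
# audit existing keys, both google-managed and user-managed
gcloud iam service-accounts keys list \
  --iam-account=bowtie-app-sa@my-project-id.iam.gserviceaccount.com
# delete a key you no longer need, the key id comes from the list output
gcloud iam service-accounts keys delete KEY_ID \
  --iam-account=bowtie-app-sa@my-project-id.iam.gserviceaccount.com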
custom service account you will not be using scopes rather you will be using iam roles so when you are using a default service account for your compute instance it will default to using scopes instead of iam roles and so i wanted to quickly touch on how service accounts are used now one way of using a service account is to attach this service account to a resource so if you want to start a long-running job that authenticates as a service account you need to attach a service account to the resource that will run the job and this will bind the service account to the resource now the other way of using a service account is directly impersonating a service account which i had explained a little bit earlier so once granted they require permissions a user or a service can directly impersonate the identity of a service account in a few common scenarios you can impersonate the service account without requiring the use of a downloaded external service account key as well a user may get artifacts signed by the google managed private key of the service account without ever actually retrieving a credential for the service account and this is an advanced use case and is only supported for programmatic access now although i’m going to be covering best practices at the end of this section i wanted to go over some best practices for service accounts specifically so you should always look at auditing the service accounts and their keys using either the service account dot keys dot list method or the logs viewer page in the console now if your service accounts don’t need external keys you should definitely delete them you should always grant the service account only the minimum set of permissions required to achieve the goal service accounts should also be created for each specific service with only the permissions required for that service and finally when it comes to implementing key rotation you should take advantage of the iam service account api to get the job done and so that’s all i have for this lesson on service accounts so you can now mark this lesson as complete and please join me in the next one where we go hands-on in the console [Music] welcome back so in this demonstration i’m going to take a hands-on tour diving through various aspects of working with both default and custom-made service accounts we’re going to start off fresh observing a new service account being automatically created along with viewing scopes observing how to edit them and creating custom service accounts that get a little bit more granular with the permissions assigned so with that being said let’s dive in so as you can see here from the top right hand corner that i am logged in under tony bowtie ace gmail.com and looking over here from the top drop down menu you can see that i am in the project of cat bow ties fall 2021 and this is a brand new project that i had created specifically for this demo and so i currently have no resources created along with no apis enabled so now i want to navigate over to iam so i’m going to go up to the left hand corner to the navigation menu and i’m going to go to i am an admin and over to iam and as expected i have no members here other than myself tony bowtie ace gmail.com with no other members and if i go over here to the left hand menu under service accounts you can see that i have no service accounts created so now in order to demonstrate a default service account i’m going to go over to the navigation menu and go into compute engine and as you can see the compute engine api is 
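a quick sketch of the two usage patterns just described, attaching the account to a resource versus impersonating it directly; the instance name, zone, scopes and service account email are placeholders, and impersonation assumes you hold the service account token creator role on that account

# attach the service account to a vm so code on the instance runs as that identity
gcloud compute instances create bowtie-worker-1 \
  --zone=us-central1-a \
  --machine-type=e2-micro \
  --service-account=bowtie-app-sa@my-project-id.iam.gserviceaccount.com \
  --scopes=cloud-platform
# with the compute engine default service account, legacy access scopes such as --scopes=storage-ro limit access instead
# or impersonate the service account directly for a single command
gcloud compute instances list --impersonate-service-account=bowtie-app-sa@my-project-id.iam.gserviceaccount.com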
starting up and so this may take a couple minutes to get ready okay and the compute engine api has been enabled so now if i go back over to iam to take a look at my service accounts as expected i have my compute engine default service account now again i did not create this manually this service account was automatically created when i had enabled the compute engine api along with the api’s service agent and the compute engine service agent and the same would happen to other various apis that are enabled as well and so now that i have my default service account i want to go back over to compute engine and i’m going to go ahead and create a vm instance so i’m going to just click on create i’m going to keep everything as the default except i’m going to change the machine type from an e2 medium to an e2 micro and so now i’m going to scroll down to where it says identity and api access now here under service account you can see that the compute engine default service account has been highlighted and this is because i don’t have any other service accounts that i am able to select from now when a default service account is the only service account you have access to access scopes are the only permissions that will be available for you to select from now remember access scopes are the legacy method of specifying permissions in google cloud now under access scopes i can select from the allow default access allow full access to all cloud apis and set access for each api and so i want to click on set access for each api for just a second and so as you can see here i have access to set permissions for each api the difference being is that i only have access to primitive roles and so now that i’m looking to grant access to my service account i’m going to grant access to cloud storage on a read-only capacity and so now that i have granted permissions for my service account i’m going to now create my instance by simply clicking on the create button and so now that my instance is created i want to head over to cloud storage to see exactly what my service account will have access to so i’m going to go over to my navigation menu and scroll down and click on storage and as you can see here i have created a bucket in advance called bow tie ink fall winter 2012 designs and this is due to bow tie ink bringing back some old designs from 2012 and making them relevant for today and within that bucket there are a few files of different design ideas that were best sellers back in 2012 that tony bowtie wanted to re-release for the fall winter 2012 collection and so with the new granted access to my default service account i should have access to view these files so in order to test this i’m going to go back over to the navigation menu and go back to compute engine and i’m going to ssh into my instance and so now that i’ve sshed into my virtual machine i wanted to first check to see who is it that’s running the commands is it my user account or is it my service account and so i’ll be able to do this very easily by checking the configuration and i can do this by running the command gcloud config list and as you can see my current configuration is showing that my service account is the member that is being used to run this command in the project of cat bow ties fall 2021 now if i wanted to run any commands using my tony bowtie ace gmail.com user account i can simply run the command gcloud auth login and it will bring me through the login process that we’ve seen earlier on in the course for my tony bowtie ace gmail.com 
account but now since i'm running all my commands using my service account from this compute engine instance i'm using the permissions granted to that service account that we saw earlier and so since i set the storage scope for the service account to read only we should be able to see the cloud storage bucket and all the files within it by simply running the gsutil command so to list the contents of the bucket i'm going to type in the command gsutil ls for list and the name of the bucket and the syntax for that would be gs colon forward slash forward slash followed by the name of the bucket which would be bowtie inc fw2012 designs and as you can see we're able to view all the files that are in the bucket and so it is working as expected and so now because i've only granted viewing permissions for this service account i cannot create any files due to the lack of permissions so for instance if i was to create a file using the command touch file1 i have now created that file here on the instance so now i want to copy this file to my bucket and so i'm going to run the gsutil command cp for copy file1 which is the name of my file and gs colon forward slash forward slash along with the name of the bucket which is bowtie inc fw2012 designs and as expected i am getting an access denied exception with a prompt telling me that i have insufficient permissions and so now that i've shown you how to use a default service account and give it permissions using access scopes let's now create a custom service account and assign it proper permissions to not only read files from cloud storage but be able to write files to cloud storage as well so i'm going to now close down this tab and i'm going to go back over to the navigation menu and go back to iam where we can go in and create our new service account under service accounts and so as you can see here this is the default service account and since we want to create a custom one i'm going to go ahead and go up to the top here and click on the button that says create service account and so now i'm prompted to enter some information with regards to details of this service account including the service account name the account id along with a description and so i'm going to call this service account sa hyphen bowtie hyphen demo and as you can see it automatically propagated the service account id and i'm going to give this service account a description storage read write access and i'm going to click on the button create and so now i've been prompted to grant permissions to the service account and i can do that by simply clicking on the drop down and selecting a role but i'm looking to get a little bit more granular and so i'm going to simply type in storage and as you can see i'm coming up with some more granular roles as opposed to the primitive roles that i only had access to prior to the search so i'm going to click on storage object viewer for read access to cloud storage i'm not going to add any conditions and i'm going to add another role and this time i'm going to add storage object creator and so those are all the permissions i need for read write access to cloud storage and so now i can simply click on continue and so now i'm being prompted to add another user that can act as this service account and this is what we discussed in the last lesson about service accounts being both a member and a resource now notice that i have an option for both the service account users role and the service account admins role
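as a rough command line equivalent of that console prompt and assuming the gcloud sdk is set up the service account email and the member address below are just placeholder values you could grant a single user the service account user role on just this one service account like so

# grant tony the service account user role on this specific service account only
# granting it at the project level would instead cover every service account in the project
gcloud iam service-accounts add-iam-policy-binding \
    sa-bowtie-demo@my-project.iam.gserviceaccount.com \
    --member="user:tony@example.com" \
    --role="roles/iam.serviceAccountUser"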
account and men’s role has the ability to grant other users the role of service account user and so because we don’t want to do that i’m going to leave both of these fields blank and simply click on done now i know in the last lesson i talked about creating custom keys for authentication in case you’re hosting your code on premise or on another cloud and so if i wanted to do that i can simply go to the actions menu and click on create key and it’ll give me the option on creating a private key either using json or p12 format and because i’m not creating any keys i’m going to simply click on cancel and so in order for me to apply this service account to our vm instance i’m going to now go back over to the navigation menu and go back into compute engine and so now in order for me to change this service account that’s currently assigned to this instance i’m going to go ahead and check off this instance and click on stop now please note that in order to change service accounts on any instance you must stop it first before you can edit the service account and so now that the instance has stopped i’m going to drill down into this instance one and i’m going to click on edit now i’m going to scroll down to the bottom and at the bottom you will find the service account field and clicking on the drop down i’ll find my custom service account as a bow tie demo so i want to select this and simply click on save and so now that i’ve selected my new service account to be used in this vm instance i can now start up the instance again to test out the permissions that were granted and so just as a quick note here i wanted to bring your attention to the external ip whenever stopping and starting an instance with an ephemeral ip in other words it is not assigned a static ip your vm instance will receive a new ip address and i’ll be getting into this in a lot deeper detail in the compute engine section of the course and so now i’m going to ssh into this instance now i’m going to run the same gsutil command that i did previously to list all the files in the bucket so i’m going to run the command gsutil ls for list and gs colon forward slash forward slash bow tie inc fw 2012 designs and as you can see i’m able to read all the files in the bucket now the difference in the permissions granted for the service account is that i’m able to write files to cloud storage and so in order to test that i’m going to use the touch command again and i’m going to name the file file2 and so now i’m going to copy this file to the cloud storage bucket by using the command gsutil cp file2 and the bucket name gs colon forward slash forward slash bow tie inc fw 2012 designs and as expected the file copied over successfully as we do have permissions to write to cloud storage and so before i end this demonstration i wanted to quickly go over exactly how to create service accounts using the command line and so i’m going to close down this tab and i’m going to head up to the top right hand corner and activate my cloud shell i’m going to make this window a little bit bigger and so now in order to view the service accounts i currently have i’m going to run the command gcloud iam service dash accounts list and so as expected the compute engine default service account along with the custom service account that i created earlier called sa bowtie demo is now displaying and in order to just verify that i’m going to go over to iam under service accounts and as you can see it is reflecting exactly the same in the console so now in order for me to 
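and just as a side note on the create key dialog we skipped over a moment ago if you ever do need an external key from the command line a minimal sketch assuming the gcloud sdk is installed and using the same placeholder service account email would look like this

# create a user-managed (external) json key for the service account
gcloud iam service-accounts keys create ./sa-bowtie-demo-key.json \
    --iam-account=sa-bowtie-demo@my-project.iam.gserviceaccount.com

# list only the user-managed keys so you can audit what is out there
gcloud iam service-accounts keys list \
    --iam-account=sa-bowtie-demo@my-project.iam.gserviceaccount.com \
    --managed-by=user

remember that google only stores the public half of that key so the json file itself has to be protected and rotated by you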
so now in order for me to create a new service account using the command line i'm going to run the command gcloud iam service-accounts create and the name of the service account which i'm going to call sa-tony-bowtie along with the display name as sa-tony-bowtie as well and i'm going to hit enter and my service account has been created so now if i run the command gcloud iam service-accounts list i should see my new service account and as well if i did a refresh here on the console i can see that it is reflecting the same so now that we've created our new service account we need to assign some permissions to it in order for us to be able to use it and so if i go over here to iam in the console i can see here that my service account has not been assigned any permissions and so in order to do that i am going to simply run the command gcloud projects add-iam-policy-binding so we're adding a policy binding and then the name of the project cat bow ties fall 2021 we need to add the member which is the new service account email address along with the role of storage object viewer i'm going to hit enter and as you can see my member sa-tony-bowtie has been assigned the storage object viewer role and so if i wanted to grant some other roles to the service account i can do that as well and so if i did a refresh here i can see that the console reflects exactly the same and so in order for me to use this account in my instance i'm going to first have to stop my instance attach my service account and then start up my instance again so i'm going to go over to my cloud shell i'm just going to clear the screen and i'm going to paste in the command gcloud compute instances stop the name of the instance along with the zone and now that the instance has stopped i can now add my service account to the instance and so i'm going to use the command gcloud compute instances set-service-account instance-1 along with the zone and the service account email address i'm going to go ahead and hit enter and it has now been successfully added and so now that that's done i can now start up the instance by using the command gcloud compute instances start along with the instance name and the zone and so now if i go over to my navigation menu and go over to compute engine and drill down on the instance if i scroll down to the bottom i'll be able to see that my new service account has been added and so this is a great demonstration for when you want to add different service accounts for your different applications on different instances or even on different resources and so that's pretty much all i wanted to cover in this demonstration so you can now mark this lesson as complete and let's move on to the next one welcome back in this lesson i'm going to dive into cloud identity google's identity as a service offering for google cloud that maximizes end user efficiency protects company data and so much more now cloud identity as i said before is an identity as a service solution that centrally manages users and groups this would be the sole system for authentication and that provides a single sign-on experience for all employees of an organization to be used for all your internal and external applications cloud identity also gives you more control over the accounts that are used in your organization for example if developers in your organization use personal accounts such as gmail accounts those accounts are outside of your control so when you
adopt cloud identity you can manage access and compliance across all the users in your domain now when you adopt cloud identity you create a cloud identity account for each of your users and groups you can then use iam to manage access to google cloud resources for each cloud identity account and you can also configure cloud identity to federate identities between google and other identity providers such as active directory and azure active directory and i'll be getting more into that a little bit later so now when it comes to cloud identity it gives you so much more than just user and group management it provides a slew of features such as device management security single sign-on reporting and directory management and i will be diving deeper into each one of these features of cloud identity now starting with device management this lets people in any organization access their work accounts from mobile devices while keeping the organization's data more secure in today's world employees want to access business applications from wherever they are whether at home at work or even traveling and many even want to use their own devices which is also known as bring your own device or byod for short using mobile device management there are several ways that you can provide the business applications employees need on their personal devices while implementing policies that keep the corporate data safe you can create a whitelist of approved applications where users can access corporate data securely through those applications you can enforce work profiles on android devices and require managed applications on ios devices policies can also be pushed out to these devices to protect corporate data and identities as well as keeping an inventory of devices with corporate data present then when these devices are either no longer being used for corporate use or are stolen the device can then be wiped of all its corporate data device management also gives organizations the power to enforce passcodes as well as auditing now moving into the security component of cloud identity this is where two-step verification steps in now as explained earlier two-step verification or 2sv is a security feature that requires users to verify their identity through something they know such as a password plus something they have such as a physical key or access code and this can be anything from security keys to google prompt the authenticator app and backup codes so cloud identity helps by applying security best practices along with being able to deploy two-step verification for the whole company along with enforcement controls and can also manage passwords to make sure they are meeting the enforced password requirements automatically so single sign-on is where users can access many applications without having to enter their username and password for each application single sign-on also known as sso can provide a single point of authentication through an identity provider also known as idp for short you can set up sso using google as an identity provider to access a slew of third-party applications as well as any on-premise or custom in-house applications you can also access a centralized dashboard for conveniently accessing your applications so now when lisa logs in with her employee credentials she will then have access to many cloud applications that bowtie inc's it department has approved through a catalog of sso applications and this will increase both security and productivity for lisa and bowtie inc as lisa won't have to enter a separate username and password for separate applications now getting into reporting this covers audit
logs for logins groups devices and even tokens you’re even able to export these logs to bigquery for analysis and then you can create reports from these logs that cover security applications and activity now moving on to the last component of cloud identity is directory management and this provides profile information for users in your organization email and group addresses and shared external contacts in the directory using google cloud directory sync or gcds you can synchronize the data in your google account with your microsoft active directory or ldap server gcds doesn’t migrate any content such as your email your calendar events or your files to your google account gcds is used to synchronize all your users groups and shared contacts to match the information in your ldap server which could be your active directory server or your azure active directory domain now getting deeper into google cloud directory sync i’d like to touch on active directory for just a minute now active directory is a very common directory service developed by microsoft and is a cornerstone in most big corporate on-premises environments it authenticates and authorizes all users and computers in a windows domain type network signing and enforcing security policies for all computers and installing or updating software as necessary now as you can see here in the diagram the active directory forest contains the active directory domain a bowtieinc.co and the active directory federation services of bowtieinc.co where the active directory forest is the hierarchical structure for active directory the active directory domain is responsible for storing information about members of the domain including devices and users and it verifies their credentials and defines their access rights active directory federation services or adfs is a single sign-on service where federation is the means of linking a person’s electronic identity and attributes stored across multiple distinct identity management systems so you can think of it as a subset of sso as it relates only to authentication technologies used for federated identity include some common terms that you may hear me or others in the industry use from time to time such as saml which stands for security assertion markup language oauth open id and even security tokens such as simple web tokens json web tokens and saml assertions and so when you have identities already in your on-premises environment that live in active directory you need a way to tie these identities to the cloud and so here’s where you would use google cloud directory sync to automatically provision users and groups from active directory to cloud identity or g suite google cloud directory sync is a free google provided tool that implements the synchronization process and can be run on google cloud or in your on-premises environment synchronization is one way so that active directory remains the source of truth cloud identity or g suite uses active directory federation services or adfs for single sign-on any existing corporate applications and other sas services can continue to use your adfs as an identity provider now i know this may be a review for some who are advanced in this topic but for those who aren’t this is a very important topic to know as google cloud directory sync is a big part of cloud identity and is a common way that is used in many corporate environments to sync active directory or any other ldap server to google cloud especially when you want to keep your active directory as the single source 
of truth and so that’s pretty much all i wanted to cover when it comes to cloud identity and google cloud directory sync so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back now i wanted to close out this section by briefly going over the best practices to follow when working with identity and access management so the phrase that was discussed in the beginning of this lesson that will continuously come up in the exam is the principle of least privilege and again this is where you would apply only the minimal access level required for what is needed to be done and this can be done using predefined roles which is a more granular level role than using primitive roles which are very wide scoped roles that are applied to the whole project roles should also be granted at the smallest scope necessary so for instance when assigning somebody the permissions needed for managing pre-existing compute instances assigning a compute instance admin role might be sufficient for what they need to do as opposed to assigning them the compute instance role that has full control of all compute engine instance resources now when it comes to child resources they cannot restrict access granted on its parent so always remember to check the policy granted on every resource and make sure you understand the hierarchical inheritance you also want to make sure that you restrict access to members abilities to create and manage service accounts as users who are granted the service account actor role for a service account can access all the resources for which the service account has access and granting someone with the owner role should be used with caution as they will have access to modify almost all resources project-wide including iam policies and billing granting an editor role might be more sufficient for the needs of most when using primitive roles now when dealing with resource hierarchy to make it easy on how to structure your environment you should look at mirroring your google cloud resource hierarchy structure to your organizational structure in other words the google cloud resource hierarchy should reflect how your company is organized you should also use projects to group resources that share the same trust boundary as well as setting policies at the organization level and at the project level rather than at the resource level now going back to what we discussed earlier about the principle of least privilege you should use this guideline to grant iam roles that is only give the least amount of access necessary to your resources and when granting roles across multiple projects it is recommended to grant them at the folder level instead of at the project level now diving back into service accounts a separate trust boundary should always be applied for any given application in other words create a new service account when multiple components are involved in your application you also want to make sure that you don’t delete any service accounts that are in use by running instances as your application is likely to fail so you will want to schedule this during plan down time to avoid any outages now earlier on in this section we discussed service account keys and how they interact with google cloud and that is the main authentication mechanism used for keys so you want to make sure that any user managed keys are rotated periodically to avoid being compromised you can rotate a key by creating a new key switching applications to use the new key and then deleting the old 
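as a rough sketch of that rotation flow assuming the gcloud sdk is installed and using a placeholder service account email and key id it would look something like this

# 1. create the replacement key and deploy it to your application
gcloud iam service-accounts keys create ./new-key.json \
    --iam-account=sa-bowtie-demo@my-project.iam.gserviceaccount.com

# 2. list the existing user-managed keys to find the id of the old one
gcloud iam service-accounts keys list \
    --iam-account=sa-bowtie-demo@my-project.iam.gserviceaccount.com \
    --managed-by=user

# 3. only after the application is confirmed to be using the new key delete the old one
gcloud iam service-accounts keys delete OLD_KEY_ID \
    --iam-account=sa-bowtie-demo@my-project.iam.gserviceaccount.com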
just be sure to create the new key first before deleting the old one as deleting the old key too early will result in parts of or even your entire application failing and also when working with service account keys it's always good practice to name your service account keys in a way that reflects their use and permissions so you know what they are used for when you're looking at them now when you are giving access to service accounts you want to make sure that only those who truly need access are the ones that have it others in your environment should be restricted to avoid any misuse now when it comes to keeping your service account keys safe i can't stress this enough you never want to check these keys into source code or leave them in your downloads directory as this is a prime way of not only getting your keys compromised but also leaving your entire environment open to be accessed publicly now we touched a bit on auditing but we haven't really gone into it in detail and we'll be going into it later on in the course but touching on best practices you want to be sure to check your cloud audit logs regularly and audit all iam policy changes whenever you edit any iam policies a log is generated that records that change and so you always want to periodically check these logs to make sure that there are no changes that are out of your security scope you also want to check to see who has editing permissions on these iam policies and make sure that those who hold them have the rights to do so point being is that you want to restrict who has the ability to edit policies and once these audit logs have been generated you want to export them to cloud storage so that you're able to store them for long term retention as these logs are typically held for weeks and not years getting back to service account keys service account key access should be periodically audited to catch any misuse or unauthorized access and lastly audit logs should also be restricted to only those who need access and others should have no permissions to view them and this can be done by granting the role needed to view these logs only to those who require it now when touching on policy management you want to grant access to all projects in your organization by using an organization level policy you also want to grant roles to a google group instead of individual users as it is easier to add or remove members from a google group than to update an iam policy and finally when you need to grant multiple roles for a particular task you should create a google group as it is a lot easier to grant the roles to that group and then add the users to that group as opposed to adding roles to each individual user and so that's all i wanted to cover on this short yet very important lesson on best practices when it comes to iam now i know this is not the most exciting topic but it will become extremely necessary when you are dealing with managing users groups and policies in environments that require you to use iam securely and so please keep this in mind whenever you are working in any environment as it will help you grant the proper permissions when it comes to these different topics so now i highly recommend that you take a break grab a tea or coffee before moving on into the next section and so for now you can mark this lesson as complete and whenever you're ready please join me in the next section [Music] welcome back now i wanted to make this as easy as possible for those students who do not have a background in networking or any networking knowledge in general which is why i
wanted to add this quick networking refresher to kick off the networking section of this course so with that being said let's dive in so before the internet computers were standalone and didn't have the capabilities to send emails transfer files or share any information fast forward some time people started to connect their computers together to share and be able to do the things that modern networks can do today part of being in this network is being able to identify each computer to know where to send and receive files this problem was solved by using an address to identify each computer on the network like humans use a street address to identify where they live so that mail and packages can be delivered to them an ip address is used to identify a computer or device on any network so communication between machines was done by the use of an ip address a numerical label assigned to each device connected to a computer network that uses the internet protocol for communication also known as ip for short so for this system to work a communication system was put in place that defined how the network would function this system was put together as a consistent model of protocol layers defining interoperability between network devices and software in layers to standardize how different protocols would communicate in this stack this stack is referred to as the open systems interconnection model or you may hear many refer to it as the seven layer osi model now this is not a deep dive networking course but i did feel the need to cover that which is necessary for the understanding of the elements taught in this course for those wanting to learn more about the osi model and the layers within it please check out the links that i have included in the lesson text below so for this lesson and the next i will be covering the specific layers with their protocols that are highlighted here and this will help you understand the networking concepts in this course with a bit better clarity so i'll be covering layer 3 being the network layer layer 4 being the transport layer and layer 7 being the application layer so first up i will be covering layer 3 which is the network layer along with the internet protocol now there are two versions of the internet protocol and they are managed globally by the regional internet registries also known as the rirs the first one which is ipv4 is the original version of the internet protocol that first came on the scene in 1981 the second version is ipv6 which is a newer version ratified as a full internet standard in 2017 to deal with the problem of ipv4 address exhaustion meaning that the number of usable ips was slowly being used up and i will be covering both versions of the internet protocol in a little bit of depth so let's first dive into ip version 4. so ipv4 can be read in a human readable notation represented in dotted decimal notation consisting of four numbers each ranging from 0 to 255 separated by dots each part between the dots represents a group of 8 bits also known as an octet a valid range for an ip address starts from 0.0.0.0 and ends in 255.255.255.255.
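just as a quick sanity check on those numbers here is a small bit of shell arithmetic you can run anywhere a 64-bit bash is available

# each octet is 8 bits so it can hold 256 different values (0 through 255)
echo $(( 2**8 ))     # 256
# four octets make a 32-bit address space which is the roughly 4.2 billion addresses mentioned next
echo $(( 2**32 ))    # 4294967296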
and this would give you a total number of over 4.2 billion ip addresses now this range was viewed as extremely large back then until the number of ip addresses available was quickly dwindling due to the many ip-connected devices that we have today and this is when a new addressing architecture was introduced called classful addressing where the address space was split into smaller ranges and a range was originally assigned to you when you needed an ip address by one of the registries noted before so for any given ip address they're typically made of two separate components the first part of the address is used to identify the network that the address is a part of the part that comes afterwards is used to specify a specific host within that network now the first part was assigned to you and your business by the registries and the second part was for you to do with as you'd like and so these ip addresses were assigned from the smaller ranges explained earlier called classes the first range of classes is class a and it started at 0.0.0.0 and ended at 127.255.255.255 and this would give a total number of over 2.1 billion addresses with 128 different networks class a ip addresses can support over 16 million hosts per network and those who were assigned addresses in this class had a fixed value of the first octet the second third and fourth octets were free for the business to assign as they choose class a ip addresses were to be used by huge networks like those deployed by internet service providers and so when ips started to dwindle many companies returned these class a network blocks back to the registries to assist with extending addressing capacity and so the next range is class b and this is half the size of the class a range the class b network range started at 128.0.0.0 and ended at 191.255.255.255 and carries a total number of over 1 billion ip addresses with over 16 000 networks the fixed value in this class is the first and second octet the third and fourth octet can be done with as you like ip addresses in this class were to be used for medium and large size networks in enterprises and organizations the next range is class c and this is half the size of the class b range the class c network range starts at 192.0.0.0 and ends at 223.255.255.255 and carries a total of over half a billion addresses with over two million networks and can support up to 256 host addresses per network the fixed value of this class is the first second and third octet and the fourth can be done with as you like ip addresses in this class were the most common and were to be used in small business and home networks now there's a couple more classes that were not commonly used called class d and class e and this is beyond the scope of this course so we won't be discussing them and so this was the way that was used to assign public ip addresses to devices on the internet and allowed communication between devices now the problem with classful addressing was that businesses that needed larger address blocks than a class c network provided received a class b block which in most cases was much larger than required and the same thing happened when requiring more ips than class b and getting a class a network block this problem introduced a lot of wasted ips as there was no real middle ground and so this was the way that publicly routable ips were addressed now there were certain ranges that were allocated for private use and were designed to be used in private networks whether on-premises or in cloud
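and as a quick way to verify the class sizes above here is the same math as shell arithmetic

# hosts per network = 2 to the power of the number of free host bits
echo $(( 2**24 ))   # 16777216 hosts per class a network (over 16 million)
echo $(( 2**16 ))   # 65536 hosts per class b network
echo $(( 2**8 ))    # 256 addresses per class c network
# number of networks in each class
echo $(( 2**7 ))    # 128 class a networks
echo $(( 2**14 ))   # 16384 class b networks (the over 16 000 mentioned above)
echo $(( 2**21 ))   # 2097152 class c networks (over two million)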
use and also didn’t have the need to communicate over the public internet and so these private ip address spaces were standardized using the rfc standard 1918 and again these ip addresses are designed for private use and can be used anywhere you like as long as they are still kept private chances are a network that you’ve come across whether it be a cloud provider your home network or public wi-fi will use one of these classes to define their network and these are split into three ranges first one being single class a with 10.0.0 ending in 10.255.255.255. the class b range ranging from 172.16.0.0 to 172.31 dot and lastly class c which was ranging from 192.168.0.0 to 192.168.255.255. now for those networks that use these private ips over the public internet the process they would use is a process called network address translation or nat for short and i will be covering this in a different lesson later on in the section this method of classful addressing has been replaced with something a bit more efficient where network blocks can be defined more granularly and was done due to the internet running out of ipv4 addresses as we needed to allocate these ips more efficiently now this method is called classless inter domain routing or cider for short now with cider based networks you aren’t limited to only these three classes of networks class a b and c have been removed for something more efficient which will allow you to create networks in any one of those ranges cider ranges are represented by its starting ip address called a network address followed by what is called a prefix which is a slash and then a number this slash number represents the size of the network the bigger the number the smaller the network and the smaller the number the bigger the network given the example here 192.168.0.0 is the network address and the prefix is a slash 16. now at this high level it is not necessary to understand the math behind this but i will include a link in the lesson text for those of you who are interested in learning more about it all you need to keep in mind is as i said before the bigger the prefix number the smaller the network and the smaller the prefix number the bigger the network so just as an example the size of this slash 16 network is represented here by this circle its ip range is 192.168.0.0 ending in 192.168.255.255. and once you understand the math you will be able to tell that a slash 16 range means that the network is the fixed value in the first and second octet the hosts on the network or the range are the values of anything in the third or fourth octets so this network in total will provide us with 65 536 ip addresses now let’s say you decided to create a large network such as this and you wanted to allocate part of it to another part of your business you can simply do so by splitting it in two and be left with two slash 17 networks so instead of one slash 16 network you will now have 2 17 networks and each network will be assigned 32 768 ip addresses so just to break it down the previous network which was 192.16 forward slash 16 with the first two octets being the network which is 192.168 it leaves the third and fourth octet to distribute as you like and these third and fourth octets are what you’re having to create these two networks so looking at the blue half the address range will start at 0.0 and will end at 127.255. the green half will start halfway through the slash 16 network which will be 128.0 and end at 255.255. 
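to put some quick numbers behind that split here is the host math and the resulting ranges as a small shell snippet

# addresses in a cidr block = 2 to the power of (32 minus the prefix length)
echo $(( 2**(32-16) ))   # 65536 addresses in 192.168.0.0/16
echo $(( 2**(32-17) ))   # 32768 addresses in each /17 half
# 192.168.0.0/17   -> 192.168.0.0   to 192.168.127.255  (the blue half)
# 192.168.128.0/17 -> 192.168.128.0 to 192.168.255.255  (the green half)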
so now what if i was looking to break this network down even further and break it into four networks well using cidr ranges this makes things fairly easy as i can halve it again and as shown here i would split the two slash 17 networks to create four slash 18 networks so if i took the blue half circle and split it into two and then split the green half circle into two this would leave me with four slash 18 networks as seen here the blue quarter would start from 192.168.0.0 ending with the last two octets of 63.255 and the red quarter which starts from where the blue left off starting at the last two octets of 64.0 and ending in 127.255. the green quarter again starting off with the previously defined 128.0 network which is where the red quarter left off and ending with the last two octets being 191.255 and lastly the yellow quarter starting off from where the green quarter left off at 192.0 with the last two octets ending with 255.255 and so this would leave us with four smaller slash 18 networks broken down from the previous two slash 17 networks with each of these networks consisting of 16 384 ip addresses and we can continue this process continuously halving networks and breaking them down into smaller networks this process of dividing each network into two smaller networks is known as subnetting and each time you subnet a network and create two smaller networks the number in the prefix will increase and so i know this is already a lot to take in so this would be a perfect time for you to grab a coffee or a tea and i will be ending part one here and part two will be continuing immediately after part one so you can now mark this lesson as complete and i'll see you in the next one for part two [Music] welcome back and in this lesson i'm going to be covering the second part of the networking refresher now part two of this lesson is starting immediately from the end of part one so with that being said let's dive in now i know this network refresher has been filled with a ton of numbers with an underlying current of math but i wanted you to focus on the why so that things will make sense later i wanted to introduce the hard stuff first so that over the length of this course you will be able to digest this information and understand where this fits in when discussing the different network parts of google cloud this will also help you immensely in the real world as well as the exam when configuring networks and knowing how to do the job of an engineer so getting right into it i wanted to just do a quick review on classless inter-domain routing or cidr so as discussed in the first refresher an ipv4 address is referenced in dotted decimal notation and alongside it the slash 16 is the prefix which defines how large the network is and so before i move on i wanted to give you some references that i found helpful in order to determine the size of a network and so here i have referenced three of the most common prefixes that i continuously run into that i think would be an extremely helpful reference for you so if you look at the first ip address 192.168.0.0 with slash 8 as the prefix slash 8 would fall under a class a network 192 being the first octet as well as being the network part of the address would be fixed and so the host part of it would be anything after that so the address could be 192 dot anything and this cidr range would give you over 16 million ip addresses the second most common network that i see is a slash 16 network and this would make this ip fall under a class b network making the first
two octets fixed and being the network part meaning that anything after 192.168 would be the host part meaning that the address could be 192.168.anything and this would give you 65536 ip addresses and so for the third ip address which is probably the most common one that i see is a slash 24 network which falls under a class c network meaning that the first three octets are fixed and the fourth octet could be anything from zero to two five five and this would give you 256 ip addresses and another common one which is the smallest that you will see is a slash 32 prefix and this is one that i use constantly for whitelisting my ip address and because a slash 32 is one ip address this is a good one to know when you are configuring a vpn for yourself or you're whitelisting your ip address from home or work and for the last reference as well as being the biggest network is the ip address of 0.0.0.0 forward slash 0 which covers all ip addresses and you will see this commonly used for the internet gateway in any cloud environment and so these are some common prefixes that come up very frequently and so i hope this reference will help you now moving back to the osi model i've covered ipv4 in the network layer and so now it's time to discuss ipv6 now as i noted earlier ipv4 notation is called dotted decimal and each number between the dots is an octet with a value of 0 to 255. now underneath it all each octet is made up of an 8-bit value and having four numbers in an ip address that would make it a 32-bit value ipv6 is a much longer value and is represented in hexadecimal and each grouping is two octets which is 16 bits and is often referred to as a hextet now as these addresses are very long as you can see you're able to abbreviate them by removing redundant zeros so this example shown here is the same address as the one above it so if there is a sequence of zeros you can simply replace them with one zero so in this address each grouping of four zeros can be represented by one zero and if you have multiple consecutive groups of zeros in one address you can remove them all and replace them with a double colon so each of these ipv6 addresses that you see here are exactly the same now each ipv6 address is 128 bits long and is represented in a similar way to ipv4 starting with the network address and ending with the prefix each hextet is 16 bits and the prefix number is the number of bits that represent the network with this example slash 64 refers to the network address underlined in green which is 2001 colon de3 each hextet is 16 bits and the prefix is 64.
so that’s four groups of 16 and so this is how we know which part is the network part of the address and which is the host part of the address again notice the double colon here and as i explained previously any unneeded zeros can be replaced by a double colon and so this address would represent a slew of zeros and so adding in all the zeros the ipv6 starting network address would look like this now because the network address starts at 2001 colon de3 with another two hextets of zeros as the network address that was determined by the slash 64 prefix which is four hextets it means a network finishes at that network address followed by all fs and so that’s the process of how we can determine the start and end of every ipv6 network now as i’ve shown you before with all ipv4 addresses they are represented with a 0.0.0.0.0 and because ipv6 addresses are represented by the same network address and prefix we can represent ipv6 addresses as double colon slash zero and you will see this frequently when using ipv6 and so i know this is really complicated but i just wanted to give you the exposure of ipv6 i don’t expect you to understand this right away in the end it should become a lot clearer as we go through the course and i promise you it will become a lot easier i had a hard time myself trying to understand this network concept but after a few days i was able to digest it and as i went back and did some practice it started to make a lot more sense to me and so i know as we move along with the course that it will start making sense to you as well so now that we’ve discussed layer 3 in the osi model i wanted to get into layer 4 which is the transport layer with ip packets discussing tcp and udp and so in its simplest form a packet is the basic unit of information in network transmission so most networks use tcpip as the network protocol or set of rules for communication between devices and the rules of tcpip require information to be split into packets that contain a segment of data to be transferred along with the protocol and its port number the originating address and the address of where the data is to be sent now udp is another protocol that is sent with ip and is used in specific applications but mostly in this course i will be referring to tcpip and so as you can see in this diagram of the ip packet this is a basic datagram of what a packet would look like again with this source and destination ip address the protocol port number and the data itself now this is mainly just to give you a high level understanding of tcpip and udpip and is not a deep dive into networking now moving on to layer 7 of the osi model this layer is used by networked applications or applications that use the internet and so there are many protocols that fall under this layer now these applications do not reside in this layer but use the protocols in this layer to function so the application layer provides services for networked applications with the help of protocols to perform user activities and you will see many of these protocols being addressed as we go through this course through resources in google cloud like http or https for load balancing dns that uses udp on port 53 and ssh on port 22 for logging into hosts and so these are just a few of the many scenarios where layer 7 and the protocols that reside in that layer come up in this course and we will be diving into many more in the lessons to come and so that about wraps up this networking refresher lesson and don’t worry like i said before i’m not expecting 
you to pick things up in this first go things will start to make more sense as we go through the course and we start putting these networking concepts into practice also feel free to go back and review the last
couple of lessons again if things didn't make sense to you the first time or if you come across some networking challenges in future lessons and so that's everything i wanted to cover so you can now mark this lesson as complete and let's move on to the next one [Music] welcome back in this lesson we will be discussing the core networking service of gcp virtual private cloud or vpc for short it is the service that allows you to create networks inside google cloud with both private and public connectivity options both for in-cloud deployments and on-premise hybrid cloud deployments this is a service that you must know well as there are many questions that come up on the exam with regards to vpcs so with that being said let's dive in now vpcs are what manages the networking functionality for your google cloud resources this is a software defined network and is not confined to the physical limitations of networking in a data center this has been abstracted for you vpc networks including their associated routes and firewall rules are global resources they are not associated with any particular region or zone they are global resources and span all available regions across the globe as explained earlier vpcs are also encapsulated within projects projects are the logical container where your vpcs live now these vpcs do not have ip ranges but are simply a construct of all of the individual ip addresses and services within that network the ip addresses and ranges are defined within the subnetworks that i will be diving into a bit later as well traffic to and from instances can be controlled with network firewall rules rules are implemented on the vms themselves so traffic can be controlled and logged as it leaves or arrives at a vm now resources within a vpc network can communicate with one another by using internal or private ipv4 addresses and these are subject to applicable network firewall rules these resources must be in the same vpc for communication otherwise they must traverse the public internet with an assigned public ip or use a vpc peering connection or establish a vpn connection another important thing to note is that vpc networks only support ipv4 unicast traffic they do not support ipv6 traffic within the network vms in the vpc network can only send to ipv4 destinations and only receive traffic from ipv4 sources however it is possible to create an ipv6 address for a global load balancer now unless you choose to disable it each new project starts with a default network in a vpc the default network is an auto mode vpc network with predefined subnets a subnet is allocated for each region with non-overlapping cidr blocks also each default network has a default set of firewall rules these rules are configured to allow ingress traffic for icmp rdp and ssh traffic from anywhere as well as ingress traffic from within the default network for all protocols and ports and so there are two different types of vpc networks auto mode or custom mode an auto mode network automatically has one subnet per region the default network is actually an auto mode network as explained earlier now these automatically created subnets use a set of predefined ip ranges with a slash 20 cidr block that can be expanded to a slash 16 cidr block all of these subnets fit within the default 10.128.0.0 forward slash 9 cidr block and as new gcp regions become available new subnets in those regions are automatically added to auto mode networks using an ip range on that block
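as a quick hedged sketch of what those two modes look like from the command line assuming the gcloud sdk and a project are already set up the network and subnet names here are just example values

# create an auto mode network - one subnet per region is carved out of 10.128.0.0/9 for you
gcloud compute networks create bowtie-auto-net --subnet-mode=auto

# create a custom mode network and define a subnet yourself
gcloud compute networks create bowtie-custom-net --subnet-mode=custom
gcloud compute networks subnets create bowtie-subnet-a \
    --network=bowtie-custom-net \
    --region=us-central1 \
    --range=10.0.0.0/24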
now a custom mode network does not automatically create subnets this type of network provides you with complete control over its subnets and ip ranges as well as another note an auto mode network can be converted to a custom mode network to gain more control but please be aware this conversion is one way meaning that custom networks cannot be changed to auto mode networks so when deciding on the different types of networks you want to use make sure that you review all of your considerations now custom mode vpc networks are more flexible and better suited to production and google recommends that you use custom mode vpc networks in production so here is an example of a project that contains three networks all of these networks span multiple regions across the globe as you can see here on the right hand side and each network contains separate vms and so this diagram is to demonstrate that vms that are in the same network or vpc can communicate privately even when placed in separate regions because vms in network a are in the same network they can communicate over internal ip addresses even though they're in different regions essentially your vms can communicate even if they exist in different locations across the globe as long as they are within the same network the vms in network b and network c are not in the same network therefore by default these vms must communicate over external ips even though they're in the same region as no internal ip communication is allowed between networks unless you set up vpc network peering or use a vpn connection now i wanted to bring back the focus to the default vpc for just a minute unless you create an organizational policy that prohibits it new projects will always start with a default network that has one subnet in each region and again this is an auto mode vpc network in this particular example i am showing a default vpc with seven of its default regions displayed along with their ip ranges and again i want to stress that vpc networks along with their associated routes and firewall rules are global resources they are not associated with any particular region or zone so the subnets within them are regional and so when an auto mode vpc network is created one subnet from each region is automatically created within it these automatically created subnets use a set of predefined ip ranges that fit within the cidr block that you see here of 10.128.0.0 forward slash 9 and as new google cloud regions become available new subnets in those regions are automatically added to auto mode vpc networks by using an ip range from that block in addition to the automatically created subnets you can add more subnets manually to auto mode vpc networks in regions that you choose by using ip ranges outside of 10.128.0.0 forward slash 9 now if you're using a default vpc or have already created an auto mode vpc you can switch the vpc network from auto mode to custom mode and this is a one-way conversion only as custom mode vpc networks cannot be changed to auto mode vpc networks now bringing this theory into practice with regards to the default vpc i wanted to take the time to do a short demo so whenever you're ready join me in the console
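and just before we jump in if you prefer the command line the same information the demo walks through can be pulled up with a few read-only commands assuming the gcloud sdk is installed and pointed at your project

# list every vpc network in the current project along with its subnet mode
gcloud compute networks list

# show the details of the default network including its subnetworks
gcloud compute networks describe default

# list the default firewall rules that were created alongside it
gcloud compute firewall-rules list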
and so here we are back in the console and if i go here in the top right hand corner i am logged in as tony bowties at gmail.com and in the top drop down project menu i'm logged in under project tony and because this demo is geared around the default vpc i want to navigate to vpc networks so i'm going to go over here to the top left hand corner to the navigation menu and i'm going to click on it and scroll down to vpc network under networking and so as you can see here in the left hand menu there are a bunch of different options that i can choose from but i won't be touching on any of these topics as i have other lessons that will deep dive into those topics so in this demo i'd like to strictly touch on the default vpc and as you can see in project tony it has created a default vpc for me with one subnet in every region each having its own ip address range and so just as a reminder whenever you create a new project a default vpc will be automatically created for you and when these subnets were created each of them had a route out to the public internet and so the internet gateway is listed here along with its corresponding firewall rules while global dynamic routing and flow logs are turned off and again i will be getting deeper into routing and flow logs in later lessons in the section now earlier i had pointed out that an auto mode vpc can be converted to a custom vpc and it's as simple as clicking this button but we don't want to do that just yet and what i'd like to do is drill down into the default vpc and show you all the different options as you can see here the dns api has not been enabled and so for most of you a good idea would be to enable it and so i'm going to go ahead and do that now as well you can see here that i can make adjustments to each of the different subnets or i can change the configuration of the vpc itself so if i click on this edit button here at the top i'm able to change the subnet creation mode along with the dynamic routing mode which i will get into in a later lesson and the same thing with the dns server policy and so to make this demo a little bit more exciting i want to show you the process on how to expand a subnet so i'm going to go into us central one i'm going to drill down here and here's all the configuration settings for the default subnet in the us central one region and so for me to edit this subnet i can simply click on the edit button up here at the top and so right below the ip address range i am prompted with a note saying that the ip ranges must be unique and non-overlapping as we stated before and this is a very important point to know when you're architecting any vpcs or their corresponding sub networks and so i'm going to go ahead and change the subnet from a cidr range of slash 20 to a slash 16. i'm not going to add any secondary ip ranges i'm going to leave private google access off and so i'm going to leave everything else as is and simply click on save and so once this has completed i'll be able to see that my subnet range will go from a slash 20 to a slash 16. and so here you can see the ip address range has now changed to a slash 16.
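as a side note the two settings i left turned off there can also be flipped on later from the command line here is a hedged sketch assuming the gcloud sdk is installed the subnet and region are just the ones from this demo

# enable private google access and vpc flow logs on an existing subnet
gcloud compute networks subnets update default \
    --region=us-central1 \
    --enable-private-ip-google-access \
    --enable-flow-logs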
if i go back to the main page of the vpc network i can see that the ip address range is different from all the other ones now you're probably asking why can't i just change the ip address range on all the subnets at once and even though i'd love to do that unfortunately google does not give you the option each subnet must be configured one by one to change the ip address range now i wanted to quickly jump into the default firewall rules and as discussed earlier the rules for incoming ssh rdp and icmp have been pre-populated along with a default rule that allows incoming connections for all protocols and ports among instances within the same network so when it comes to routes with regards to the vpc network the only one i really wanted to touch on is the default route to the internet and so without this route any of the subnets in this vpc wouldn't have access to route traffic to the internet and so when the default vpc is created the default internet gateway is also created and so now going back to the main page of the vpc network i wanted to go through the process of making the ip address range bigger but doing it through the command line and so i'm going to go up to the right hand corner and open up cloud shell i'm going to make this a little bit bigger and so for this demo i'm going to increase the address range for the subnet in us west one from a slash 20 to a slash 16 and so i'm going to paste in the command which is gcloud compute networks subnets expand-ip-range and then the name of the subnet which is default as well as the region which is us-west1 along with the prefix length which is going to be 16. so i'm going to hit enter i've been prompted to make sure that this is what i want to do and yes i do want to continue so i'm going to type in y for yes and hit enter and so within a few seconds i should get some confirmation and as expected my subnet has been updated and so because i like to verify everything i'm going to now clear the screen and i'm going to paste in the command gcloud compute networks subnets describe and then the subnet name which is default along with the region which would be us-west1 i'm going to click on enter and as you can see here the ip cidr range is consistent with what we have changed and if i do a quick refresh on the browser i'll be able to see that the console has reflected the same thing and as expected the ip address range here for us west one in the console reflects what we see here in cloud shell and so now to end this demo i wanted to quickly show you how i can delete the default vpc and recreate it so all i need to do is to drill into the settings and then click on delete vpc network right here at the top i'm going to get a prompt asking me if i'm sure and i'm going to simply click on delete now just as a note if you have any resources in any vpc networks you will not be able to delete the vpc you would have to delete the resources first and then delete the vpc afterwards okay and it has been successfully deleted and as you can see there are no local vpc networks in this current project and so i want to go ahead and recreate the default vpc so i'm going to simply click on create vpc network and so here i'm prompted to enter in a bunch of information for creating this new vpc network and so keeping with the spirit of default vpcs i'm going to name this vpc default i'm going to put default in the description and under subnet creation mode i'm going to click on automatic and as you can see a prompt came up
telling me these ip address ranges will be assigned to each region in your vpc network and i’m able to review the ip address ranges for each region and as stated before the ip address ranges for each region will always be the same every time i create this default vpc or create a vpc in the automatic subnet creation mode now as a note here under firewall rules if i don’t select these firewall rules none will actually be created so if you’re creating a new default vpc be sure to check these off and so i’m going to leave everything else as is and i’m going to simply go to the bottom and click on the create button and within about a minute i should have the new default vpc created okay and we are back in business the default vpc has been recreated with all of these subnets in its corresponding regions all the ip address ranges the firewall rules everything that we saw earlier in the default vpc and so that’s pretty much all i wanted to cover in this demo on the default vpc network along with the lesson on vpcs so you can now mark this lesson as complete and let’s move on to the next one welcome back and in this lesson i’m going to be discussing vpc network subnets now the terms subnet and sub network are synonymous and are used interchangeably in google cloud as you’ll hear me using either one in this lesson yet i am referring to the same thing now when you create a resource in google cloud you choose a network and a subnet and so because a subnet is needed before creating resources some good knowledge behind it is necessary for both building and google cloud as well as in the exam so in this lesson i’ll be covering subnets at a deeper level with all of its features and functionality so with that being said let’s dive in now each vpc network consists of one or more useful ip range partitions called subnets also known in google cloud as sub networks each subnet is associated with the region and vpc networks do not have any ip address ranges associated with them ip ranges are defined for the subnets a network must have at least one subnet before you can use it and as mentioned earlier when you create a project it will create a default vpc network with subnets in each region automatically auto mode will run under this same functionality now custom vpc networks on the other hand start with no subnets giving you full control over subnet creation and you can create more than one subnet per region you cannot change the name or region of a subnet after you’ve created it you would have to delete the subnet and replace it as long as no resources are using it primary and secondary ranges for subnets cannot overlap with any allocated range any primary or secondary range of another subnet in the same network or any ip ranges of subnets in peered networks in other words they must be a unique valid cider block now when it comes to ip addresses of a subnet google cloud vpc has an amazing feature that lets you increase the ip space of any subnets without any workload shutdown or downtime as demonstrated earlier in the previous lesson and this gives you the flexibility and growth options to meet your needs but unfortunately there are some caveats the new subnet must not overlap with other subnets in the same vpc network in any region also the new subnets must stay inside the rfc 1918 address space the new network range must be larger than the original which means the prefix length must be smaller in number and once a subnet has been expanded you cannot undo an expansion now auto mode network starts with a slash 
20 range that can be expanded up to a slash 16 range but not larger you can also convert the auto mode network to a custom mode network to increase the ip range even further and again this is a one-way conversion custom mode vpc networks cannot be changed to auto mode vpc networks now in any network that is created in google cloud there will always be some ip addresses that you will not be able to use and these are reserved for google and so every subnet has four reserved ip addresses in its primary ip range and just as a note there are no reserved ip addresses in the secondary ip ranges and these reserved ips can be looked at as the first two and the last two ip addresses in the cidr range now the first address in the primary ip range for the subnet is reserved for the network the second address in the primary ip range for the subnet is reserved for the default gateway and allows you access to the internet the second to last address in the primary ip range for the subnet is reserved by google cloud for potential future use and the last address in the primary ip range for the subnet is for broadcast.
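and so to make those four reserved addresses concrete take a hypothetical subnet with a primary range of 10.0.0.0/24 which is just an example range i've picked for illustration

10.0.0.0 - reserved for the network address
10.0.0.1 - reserved for the subnet's default gateway
10.0.0.254 - reserved by google cloud for potential future use
10.0.0.255 - reserved as the broadcast address

every other address in that range is available for your resources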
and so that about covers this short yet important lesson on vpc network subnets these features and functionalities of subnets that have been presented to you will help you make better design decisions and will give you a bit more knowledge and flexibility when it comes to assigning ip space within your vpc networks and so that's all i have to cover for this lesson so you can now mark this lesson as complete and let's move on to the next one welcome back and in this lesson i'm going to be going through routing and private google access now although routing doesn't really show up in the exam i wanted to give you an inside look on how traffic is routed so when you're building in google cloud you'll know exactly what you will need to do if you need to edit these routes in any way or if you need to build new ones to satisfy your particular need now private google access does pop its head into the exam but only at a high level but i wanted to get just a bit deeper with the service and get into the data flow of when the service is enabled so with that being said let's dive in now google cloud routes define the paths that network traffic takes from a vm instance to other destinations and these destinations can be inside your google cloud vpc network for example another vm or outside of it in a vpc network a route consists of a single destination and a single next hop when an instance in a vpc network sends a packet google cloud delivers the packet to the route's next hop if the packet's destination address is within the route's destination range and so all these routes are stored in the routing table for the vpc now for those of you who are not familiar with a routing table in computer networking a routing table is a data table stored in a router or a network host that lists the routes to particular network destinations and so in this case the vpc is responsible for storing the routing table as well each vm instance has a controller that is kept informed of all applicable routes from the network's routing table and each packet leaving a vm is delivered to the appropriate next hop of an applicable route based on a routing order now i wanted to take a couple minutes to go through the different routing types that are available in google cloud now in google cloud there are two types of routing there is the system generated type which offers the default and subnet routes and then there are the custom routes which support static routes and dynamic routes and so i first wanted to cover system generated routes in a little bit of depth and so every new network whether it be an auto mode vpc or a custom vpc has two types of system generated routes a default route which you can remove or replace and one subnet route for each of its subnets now when you create a vpc network google cloud creates a system generated default route and this route serves two purposes it defines the path out of the vpc network including the path to the internet and in addition to having this route instances must meet additional requirements if they need internet access the default route also provides a standard path for private google access and if you want to completely isolate your network from the internet or if you need to replace the default route with a custom route you can delete the default route now if you remove the default route and do not replace it packets destined to ip ranges that are not covered by other routes are dropped lastly the system generated default route has a priority of 1000 and because its destination is the broadest possible covering all ip addresses in the 0.0.0.0/0 range google cloud only uses it if a route with a more specific destination does not apply to a packet and i'll be getting into priorities in just a little bit and so now that we've covered the default route i wanted to get into the subnet route now subnet routes are system generated routes that define paths to each subnet in the vpc network each subnet has at least one subnet route whose destination matches the primary ip range of the subnet if the subnet has secondary ip ranges google cloud creates a subnet route with a corresponding destination for each secondary range no other route can have a destination that matches or is more specific than the destination of a subnet route but you can create a custom route that has a broader destination range that contains the subnet route's destination range now when a subnet is created a corresponding subnet route for the subnet's primary and secondary ip ranges is also created auto mode vpc networks create a subnet route for the primary ip ranges of each of their automatically created subnets and you can delete these subnets but only if you convert the auto mode vpc network to custom mode and you cannot delete a subnet route unless you modify or delete the subnet so when you delete a subnet all subnet routes for both primary and secondary ranges are deleted automatically and you cannot delete the subnet route for the subnet's primary range in any other way and just as a note when networks are connected by using vpc network peering which i will get into a little bit later some subnet routes from one network are imported into the other network and vice versa and cannot be removed unless you break the peering relationship and so when you break the peering relationship all imported subnet routes from the other network are automatically removed so now that we've covered the system generated routes i wanted to get into custom routes now custom routes are either static routes that you can create manually or dynamic routes maintained automatically by one or more of your cloud routers and these are created on top of the already created system generated routes destinations for custom routes cannot match or be more specific than any subnet route in the network now static routes can use any of the static route next hops and these can be created manually if you use the google cloud console to create a cloud vpn tunnel that uses
policy-based routing or one that is a route-based vpn static routes for the remote traffic selectors are created for you and so just to give you a little bit more clarity and a little bit of context i've included a screenshot here of all the different options that are available for the next hop we have the default internet gateway to define a path to external ip addresses specify an instance and this is where traffic is directed to the primary internal ip address of the vm's network interface in the vpc network where you define the route specify ip address is where you provide an internal ip address assigned to a google cloud vm as the next hop for cloud vpn tunnels that use policy-based routing and route-based vpns you can direct traffic to the vpn tunnel by creating routes whose next hops refer to the tunnel by its name and region and just as a note google cloud ignores routes whose next hops are cloud vpn tunnels that are down and lastly for internal tcp and udp load balancing you can use a load balancer's ip address as a next hop that distributes traffic among healthy back-end instances custom static routes that use this next hop cannot be scoped to specific instances by network tags and so when creating static routes you will always be asked for the different parameters that are needed in order to create the route and so here i've taken a screenshot from the console to give you a bit more context with regards to the information that's needed so first up is the name and description these fields identify the route a name is required but a description is optional and every route in your project must have a unique name next up is the network and each route must be associated with exactly one vpc network in this case it happens to be the default network but if you have other networks available you're able to click on the drop down arrow and choose a different network the destination range is a single ipv4 cidr block that contains the ip addresses of systems that receive incoming packets and the ip range must be entered as a valid ipv4 cidr block as shown in the example below the field now if multiple routes have identical destinations priority is used to determine which route should be used so a lower number indicates a higher priority for example a route with a priority value of 100 has a higher priority than one with a priority value of 200 so the highest route priority means the smallest possible non-negative number as well another great example is if you look back at your default routes all your subnet routes have a priority of zero and the default internet gateway has a priority of 1000 and therefore the subnet routes will take priority over the default internet gateway due to the smaller number so remember a good rule of thumb is that the lower the number the higher the priority and the higher the number the lower the priority now to get a little bit more granular you can specify a list of network tags so that the route only applies to instances that have at least one of the listed tags and if you don't specify any tags then google cloud applies the route to all instances in the network and finally there is the next hop which was shown previously and this is dedicated to static routes that have next hops that point to the options shown earlier.
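and so just to tie those parameters together here is a rough command line sketch of creating a custom static route where the route name the network name and the tag are all made-up examples for illustration

gcloud compute routes create demo-egress-route --network=custom --destination-range=0.0.0.0/0 --next-hop-gateway=default-internet-gateway --priority=900 --tags=dev

this would send any traffic from instances tagged dev that isn't matched by a more specific route out through the default internet gateway at a priority of 900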
so now that i've covered static routes in a bit of detail i want to get into dynamic routes now dynamic routes are managed by one or more cloud routers and this allows you to dynamically exchange routes between a vpc network and an on-premises network with dynamic routes their destinations always represent ip ranges outside of your vpc network and their next hops are always bgp peer addresses a cloud router can manage dynamic routes for cloud vpn tunnels that use dynamic routing as well as cloud interconnect and don't worry i'll be getting into cloud routers in a bit of detail in a later lesson now i wanted to take a minute to go through routing order and the routing order deals with the priorities that i touched on a little bit earlier now subnet routes are always considered first because google cloud requires that subnet routes have the most specific destinations matching the ip address ranges of their respective subnets if no applicable destination is found google cloud drops the packet and replies with a network unreachable error system generated routes apply to all instances in the vpc network and the scope of instances to which subnet routes apply cannot be altered although you can replace the default route and so just as a note custom static routes apply to all instances or to specific instances so if the route doesn't have a network tag the route applies to all instances in the network now vpc networks have special routes that are used for certain services and these are referred to as special return paths in google cloud these routes are defined outside of your vpc network in google's production network they don't appear in your vpc network's routing table and you cannot remove or override them even if you delete or replace a default route in your vpc network although you can control traffic to and from these services by using firewall rules and the services that are covered are load balancers identity-aware proxy or iap as well as cloud dns and so before i end this lesson i wanted to touch on private google access now vm instances that only have internal ip addresses can use private google access and this allows them to reach the external ip addresses of google's apis and services the source ip address of the packet can be the primary internal ip address of the network interface or an address in an alias ip range that is assigned to the interface if you disable private google access the vm instances can no longer reach google apis and services and will only be able to send traffic within the vpc network private google access has no effect on instances that have external ip addresses as they can still access the internet and they don't need any special configuration to send requests to the external ip addresses of google apis and services you enable private google access on a subnet by subnet basis and it's a setting for subnets in a vpc network and i will be showing you this in an upcoming demo where we'll be building our own custom vpc network now even though the next hop for the required routes is called the default internet gateway and the ip addresses for google apis and services are external the requests to google apis and services from vms that only hold internal ip addresses in subnet 1 where private google access is enabled are not sent through the public internet those requests stay within google's network as well vms that only have internal ip addresses do not meet the internet access requirements for access to other external ip addresses beyond those for google apis and services now touching on this diagram here firewall rules in the vpc network have been configured to allow internet access vm1 can access google apis and services including cloud storage because its network interface is located in subnet 1 which has private google
access enabled and because this instance only has an internal ip address private google access applies to this instance now with vm2 it can also access google apis and services including cloud storage because it has an external ip address private google access has no effect on this instance as it has an external ip address and private google access has not been enabled on that subnet and because both of these instances are in the same network they are still able to communicate with each other over an internal subnet route and so this is just one way where private google access can be applied there are some other options for private access as well you can use private google access to connect to google apis and services from your on-premises network through a cloud vpn tunnel or cloud interconnect without having any external ip addresses you also have the option of using private google access through a vpc network peering connection which is known as private services access and finally the last option available for private google access is connecting directly from serverless google services through an internal vpc connection now i know this has been a lot of theory to take in but i promise it’ll become a lot easier and concepts will become less complicated when we start putting this into practice coming up soon in the demo of building our own custom vpc and so that’s pretty much all i wanted to cover when it comes to routing and private google access so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i’m going to be discussing ip addressing now in the network refresher lesson i went into a bit of depth on how i p addresses are broken down and used for communication in computer networks in this lesson i’ll be getting into the available types of ip addressing in google cloud and how they are used in each different scenario please note for the exam a high level overview will be needed to know when it comes to ip addressing but the details behind it will give you a better understanding on when to use each type of ip address so with that being said let’s dive in now ip addressing in google cloud holds quite a few categories and really start by determining whether you are planning for communication internally within your vpc or for external use to communicate with the outside world through the internet once you determine the type of communication that you’re looking to apply between resources some more decisions need to be made with regards to the other options and i will be going through these options in just a sec now in order to make these options a little bit more digestible i wanted to start off with the options available for internal ip addresses now internal ip addresses are not publicly advertised they are used only within a network now every vpc network or on-premises network has at least one internal ip address range resources with internal ip addresses communicate with other resources as if they’re all on the same private network now every vm instance can have one primary internal ip address that is unique to the vpc network and you can assign a specific internal ip address when you create a vm instance or you can reserve a static internal ip address for your project and assign that address to your resources if you don’t specify an address one will be automatically assigned to the vm in either case the address must belong to the ip range of the subnet and so if your network is an auto mode vpc network the address comes 
from the region subnet if your network is a custom mode vpc network you must specify which subnet the ip address comes from now all subnets have a primary sider range which is the range of internal ip addresses that define the subnet each vm instance gets its primary internal ip address from this range you can also allocate alias ip ranges from that primary range or you can add a secondary range to the subnet and allocate alias ip ranges from the secondary range use of alias ip ranges does not require secondary subnet ranges these secondary subnet ranges merely provide an organizational tool now when using ip aliasing you can configure multiple internal ip addresses representing containers or applications hosted in a vm without having to define a separate network interface and you can assign vm alias ip ranges from either the subnet’s primary or secondary ranges when alias ip ranges are configured google cloud automatically installs vpc network routes for primary and alias ip ranges for the subnet of your primary network interface your container orchestrator or gke does not need to specify vpc network connectivity for these routes and this simplifies routing traffic and managing your containers now when choosing either an auto mode vpc or a custom vpc you will have the option to choose either an ephemeral ip or a static ip now an ephemeral ip address is an ip address that doesn’t persist beyond the life of the resource for example when you create an instance or forwarding rule without specifying an ip address google cloud will automatically assign the resource an ephemeral ip address and this ephemeral ip address is released when you delete the resource when the ip address is released it is free to eventually be assigned to another resource so is never a great option if you depend on this ip to remain the same this ephemeral ip address can be automatically assigned and will be assigned from the selected region subnet as well if you have ephemeral ip addresses that are currently in use you can promote these addresses to static internal ip addresses so that they remain with your project until you actively remove them and just as a note before you reserve an existing ip address you will need the value of the ip address that you want to promote now reserving a static ip address assigns the address to your project until you explicitly release it this is useful if you are dependent on a specific ip address for a specific service and need to prevent another resource from being able to use the same address static addresses are also useful if you need to move an ip address from one google cloud resource to another and you also have the same options when creating an internal load balancer as you do with vm instances and so now that we’ve covered all the options for internal ip addresses i would like to move on to cover all the available options for external ip addresses now you can assign an external ip address to an instance or a forwarding rule if you need to communicate with the internet with resources in another network or need to communicate with a public google cloud service sources from outside a google cloud vpc network can address a specific resource by the external ip address as long as firewall rules enable the connection and only resources with an external ip address can send and receive traffic directly to and from outside the network and like internal ip addresses external ip addresses have the option of choosing from an ephemeral or static ip address now an ephemeral external ip 
address is an ip address that doesn’t persist beyond the life of the resource and so follows the same rules as ephemeral internal ip addresses so when you create an instance or forwarding rule without specifying an ip address the resource is automatically assigned an ephemeral external ip address and this is something that you will see quite often ephemeral external ip addresses are released from a resource if you delete the resource for vm instances the ephemeral external ip address is also released if you stop the instance so after you restart the instance it is assigned a new ephemeral external ip address and if you have an existing vm that doesn’t have an external ip address you can assign one to it forwarding rules always have an ip address whether external or internal so you don’t need to assign an ip address to a forwarding rule after it is created and if your instance has an ephemeral external ip address and you want to permanently assign the ip to your project like ephemeral internal ip addresses you have the option to promote the ip address from ephemeral to static and in this case promoting an ephemeral external ip address to a static external ip address now when assigning a static ip address these are assigned to a project long term until they are explicitly released from that assignment and remain attached to a resource until they are explicitly detached for vm instances static external ip addresses remain attached to stopped instances until they are removed and this is useful if you are dependent on a specific ip address for a specific service like a web server or a global load balancer that needs access to the internet static external ip addresses can be either a regional or global resource in a regional static ip address allows resources of that region or resources of zones within that region to use the ip address and just as a note you can use your own publicly routable ip address prefixes as google cloud external ip addresses and advertise them on the internet the only caveat is that you must own and bring at the minimum a 24 cider block and so now that we’ve discussed internal and external ip addressing options i wanted to move into internal ip address reservations now static internal ips provide the ability to reserve internal ip addresses from the ip range configured in the subnet then assign those reserved internal addresses to resources as needed reserving an internal ip address takes that address out of the dynamic allocation pool and prevents it from being used for automatic allocations with the ability to reserve static internal ip addresses you can always use the same ip address for the same resource even if you have to delete and recreate the resource so when it comes to internal ip address reservation you can either reserve a static internal ip address before creating the associated resource or you can create the resource with an ephemeral internal ip address and then promote that ephemeral ip address to a static internal ip address and so just to give you a bit more context i have a diagram here to run you through it so in the first example you would create a subnet from your vpc network you would then reserve an internal ip address from that subnet’s primary ip range and in this diagram is marked as 10.12.4.3 and will be held as reserved for later use with a resource and then when you decide to create a vm instance or an internal load balancer you can use the reserved ip address that was created in the previous step that i p address then becomes marked as 
reserved and in use now touching on the second example you would first create a subnet from your vpc network you would then create a vm instance or an internal load balancer with either an automatically allocated ephemeral ip address or a specific ip address that you’ve chosen from within that specific subnet and so once the ephemeral ip address is in use you can then promote the ephemeral ip address to a static internal ip address and would then become reserved and in use now when it comes to the external ip address reservation you are able to obtain a static external ip address by using one of the following two options you can either reserve a new static external ip address and then assign the address to a new vm instance or you can promote an existing ephemeral external ip address to become a static external ip address now in the case of external ip addresses you can reserve two different types a regional ip address which can be used by vm instances with one or more network interfaces or by network load balancers these ip addresses can be created either in the console or through the command line with the limitation that you will only be allowed to create ipv4 ip addresses the other type is a global ip address which can be used for global load balancers and can be created either in the console or through the command line as shown here the limitation here is that you must choose the premium network service tier in order to create a global ip address and after reserving the address you can finally assign it to an instance during instance creation or to an existing instance and so as you can see there is a lot to take in when it comes to understanding ip addressing and i hope this lesson has given you some better insight as to which type of ips should be used in a specific scenario now don’t worry the options may seem overwhelming but once you start working with ip addresses more often the options will become so much clearer on what to use and when and as i said in the beginning only high level concepts are needed to know for the exam but knowing the options will allow you to make better decisions in your daily role as a cloud engineer and so that’s pretty much all i wanted to cover when it comes to ip addressing in google cloud and so now that we’ve covered the theory behind ip addressing in google cloud i wanted to bring this into the console for a demo where we will get hands-on with creating both internal and external static ip addresses so as i explained before there was a lot to take in with this lesson so now would be a perfect opportunity to get up and have a stretch grab yourself a tea or a coffee and whenever you’re ready join me back in the console so you can now mark this lesson as complete and i’ll see you in the next [Music] welcome back in this demonstration i’m going to be going over how to create and apply both internal and external static ip addresses i’m going to show how to create them in both the console and the command line as well as how to promote ip addresses from ephemeral ips to static ips and once we’re done creating all the ip addresses i’m going to show you the steps on how to delete them now there’s a lot to get done here so let’s dive in now for this demonstration i’m going to be using a project that has the default vpc created and so in my case i will be using project bowtieinc dev and so before you start make sure that your default vpc is created in the project that you had selected so in order to do that i’m going to head over to the navigation menu i’m 
going to scroll down to vpc network and we’re going to see here that the default vpc has been created and so i can go ahead and start the demonstration and so the first thing i wanted to demonstrate is how to create a static internal ip address and so in order for me to demonstrate this i’m going to be using a vm instance and so i’m going to head over to the navigation menu again and i’m going to scroll down to compute engine and so here i’m going to create my new instance by simply clicking on create instance and so under name i’m going to keep it as instance 1. under region you want to select us east one and i’m going to keep the zone as the default selected under machine type i’m going to select the drop down and select e2 micro and i’m going to leave everything else as the default i’m going to scroll down here to management security disks networking and soul tenancy and i’m going to select the networking tab from there and so under here i’m going to select under network interfaces the default network interface and here is where i can create my static internal ip and so clicking on the drop down under primary internal ip you will see ephemeral automatic ephemeral custom and reserve static internal ip address and so you’re going to select reserve static internal ip address and you’ll get a pop-up prompting you with some fields to fill out to reserve a static internal ip address and so under name i’m going to call this static dash internal and for the purposes of this demo i’m going to leave the subnet and the static ip address as the currently selected if i wanted to select a specific ip address i can click on this drop down and select let me choose and this will give me the option to enter in a custom ip address with the subnet range that is selected for this specific sub network and so because i’m not going to do that i’m going to select assign automatically i’m going to leave the purpose as non-shared and i’m going to simply click on reserve and this is going to reserve this specific ip address and now as you can see here i have the primary internal ip marked as static internal and so this is going to be my first static internal ip address and so once you’ve done these steps you can simply click on done and you can head on down to the bottom and simply click on create to create the instance and when the instance finishes creating you will see the internal static ip address and as you can see here your static internal ip address has been assigned to the default network interface on instance 1. 
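and so just as a side note the same reservation could also have been made up front from the command line before creating the instance with something roughly like this using the same name region and subnet from this demo

gcloud compute addresses create static-internal --region=us-east1 --subnet=default

this reserves an internal address out of the default subnet's range in us-east1 which you could then pick from the primary internal ip drop down when creating the instance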
and so in order for me to view this static internal ip address in the console i can view this in vpc networks and drill down into the specific vpc and find it under static internal ip addresses but i wanted to show you how to view it by querying it through the command line and so in order to do this i’m going to simply go up to the menu bar on the right hand side and open up cloud shell and once cloud shell has come up you’re going to simply paste in the command gcloud compute addresses list and this will give me a list of the internal ip addresses that are available and so now i’m going to be prompted to authorize this api call using my credentials and i definitely do so i’m going to click on authorize and as expected the static internal ip address that we created earlier has shown up it’s marked as internal in the region of us east one in the default subnet and the status is in use and so as we discussed in the last lesson static ip addresses persist even after the resource has been deleted and so to demonstrate this i’m going to now delete the instance i’m going to simply check off the instance and go up to the top and click on delete you’re going to be prompted to make sure if you want to delete this yes i do so i’m going to click on delete and so now that the instance has been deleted i’m going to query the ip addresses again by using the same command gcloud compute addresses list i’m going to hit enter and as you can see here the ip address static dash internal still persists but the status is now marked as reserved and so if i wanted to use this ip address for another instance i can do so by simply clicking on create instance up here at the top menu and then i can select static dash internal as my ip address so i’m going to quickly close down cloud shell and i’m going to leave the name as instance one the region can select us east one and we’re going to keep the zone as the default selected under machine type you’re going to select the e2 micro machine type going to scroll down to management security disks networking into soul tenancy and i’m going to select the networking tab from under here and under network interfaces i’m going to select the default network interface and under primary internal ip if i click on the drop down i have the option of selecting the static dash internal static ip address and so i wanted to move on to demonstrate how to promote an internal ephemeral ip address to an internal static ip address and so in order to do this i’m going to select on ephemeral automatic and i’m going to scroll down and click on done and i’m going to go ahead and create the instance and once the instance is ready i’ll be able to go in and edit the network interface and so the instance is up and ready and so i’m going to drill down into the instance and i’m going to go up to the top and click on edit i’m going to scroll down to network interfaces and i’m going to edit the default network interface so i’m going to scroll down a little bit more and here under internal iptype i’m going to click on the drop down and i’m going to select static and so here you are taking the current ip address which is 10.142.0.4 and promoting it to a static internal ip address and so you’re going to be prompted with a pop-up confirming the reservation for that static internal ip address and so notice that i don’t have any other options and so all i’m going to do is type in a name and i’m going to call this promoted static and i’m going to click on reserve and this will promote the internal ip address 
from an ephemeral ip address to a static ip address and so now i’m just going to click on done and i’m going to scroll down and click on save and so now because i want to verify the ip address i’m going to go ahead and open up the cloud shell again and i’m going to use the same command that i used earlier which is gcloud compute addresses list and i’m going to hit enter as expected the promoted static ip address is showing as an internal ip address in the region of us east 1 in the default subnet and its status is in use and so just as a recap we’ve created a static internal ip address for the first instance and for the second instance we promoted an ephemeral internal ip address into a static internal ip address and we were able to verify this through cloud shell using the gcloud compute addresses list command and so this is the end of part one of this demo it was getting a bit long so i decided to break it up and this would be a great opportunity for you to get up and have a stretch get yourself a coffee or tea and whenever you’re ready join me in part two where we will be starting immediately from the end of part one so you can now mark this as complete and i’ll see you in the next one [Music] welcome back this is part two of the creating internal and external ip addresses demo and we will be starting immediately from the end of part one so with that being said let’s dive in and so now that we’ve gone through how to both create static ip addresses and promote ephemeral ip addresses to static ip addresses for internal ips i want to go ahead and go through the same with external ips and so i’m going to first start off by deleting this instance i’m going to go ahead and click on delete and so instead of doing it through the compute engine interface i want to go into the external ip address interface which can be found in the vpc network menu so i’m going to go ahead up to the left hand corner click on the navigation menu and i’m going to scroll down to vpc network and from the menu here on the left hand side you can simply click on external ip addresses and here you will see the console where you can create a static external ip address and so to start the process you can simply click on reserve static address and so here you’ll be prompted with a bunch of fields to fill out to create this new external static ip address and so for the name of this static ip address you can simply call this external dash static i’m going to use the same in the description now here under network service tier i can choose from either the premium or the standard and as you can see i’m currently using the premium network service tier and if i hover over the question mark over here it tells me a little bit more about this network service tier and as you can see the premium tier allows me higher performance as well as lower latency routing but this premium routing comes at a cost whereas the standard network service tier offers a lower performance compared to the premium network service tier and is a little bit more cost effective but still delivering performance that’s comparable with other cloud providers and so i’m just going to leave it as the default selected and as we discussed in the previous lesson ipv6 external static ip addresses can only be used for global load balancers and so since we’re only using it for an instance an ipv4 address will suffice and so just as a note for network service tier if i click on standard ipv6 is grayed out as well as the global selection and this is because in order to use 
global load balancing you need to be using the premium network service tier so whenever you’re creating a global load balancer please keep this in mind as your cost may increase so i’m going to switch this back to premium and so under type i’m going to keep it as regional and under region i’m going to select the same region that my instance is going to be in which is us east 1 and because i haven’t created the instance yet there is nothing to attach it to and so i’m going to click on the drop down and click on none and so just as another note i wanted to quickly highlight this caution point that the static ip addresses not attached to an instance or low balancer are still billed at an hourly rate so if you’re not using any static ip addresses please remember to delete them otherwise you will be charged and so everything looks good here to create my external static ip address so i’m going to simply click on reserve and this will create my external static ip address and put the status of it as reserved so as you can see here the external static ip address has been created and you will find all of your external static ip addresses that you create in future right here in this menu and you will still be able to query all these external ip addresses from the command line and so now in order to assign this ip address to a network interface i’m going to go back over to the navigation menu and scroll down to compute engine and create a new instance so you can go ahead and click on create instance i’m going to go ahead and keep the name of this instance as instance one and in the region i’m going to select us east one i’m going to keep the zone as the selected default and under machine type i’m going to select the e2 micro machine type i’m going to scroll down to management security disks networking and soul tenancy and i’m going to select the networking tab and here under network interfaces i’m going to select the default network interface i’m going to scroll down a little bit here and under external ip ephemeral has been selected but if i click on the drop down i will have the option to select the ip that we had just created which is the external dash static ip and so i’m going to select that i’m going to click on done and you can go down and click on create and so now when the instance is created i will see the external ip address of external static as the assigned external ip and as expected here it is and because i always like to verify my work i’m going to go ahead and open up the cloud shell and verify it through the command line and so now i’m going to query all my available static ip addresses using the command gcloud compute addresses list i’m going to hit enter and as you can see here the external static ip address of 34.75.76 in the us east one region is now in use and this is because it is assigned to the network interface on instance one and so before we go ahead and complete this demo there’s one more step that i wanted to go through and this is to promote an ephemeral external ip address to a static external ip address and so i’m going to go up here to the top menu and create a new instance i’m going to leave the name here as instance two under the region i’m going to select us east one i’m going to keep the zone as the selected default under machine type i’m going to select the e2 micro machine type i’m going to leave everything else as the default and i’m going to scroll down to management security disks networking and soul tenancy and select the networking tab and i’m going to 
verify that i’m going to be using an ephemeral external ip upon the creation of this instance if i scroll down here a little bit i can see that an external ephemeral ip address will be used upon creation and this will be the ip address that i will be promoting to a static ip through the command line so i’m going to go ahead and scroll down click on done and then i’m going to scroll down and click on create and once this instance is created then i can go ahead and promote the ephemeral external ip address okay and the instance has been created along with its external ephemeral ip address and so now i can go ahead and promote this ephemeral ip address so in order for me to do this i’m going to move back to my cloud shell and i’m going to quickly clear my screen and i’m going to use the command gcloud compute addresses create and then the name that we want to use for this static external ip address so i’m going to call this promoted external i’m going to use the flag dash dash addresses and so here i will need the external ip address that i am promoting which is going to be 104.196.219.42 and so i’m going to copy this to my clipboard and i’m going to paste it here in the command line and now i’m going to add the region flag along with the region of us east one and i’m going to go ahead and hit enter and success my ephemeral external ip address has been promoted to a static external ip address and of course to verify it i’m going to simply type in the gcloud compute addresses list command i’m going to hit enter and as expected here it is the promoted external ip of 104.196.219.42 marked as external in the u.s east one region and the status is marked as in use and so i wanted to take a moment to congratulate you on making it through this demonstration of creating internal and external ip addresses as well as promoting them so just as a recap you’ve created a static internal ip address in conjunction with creating a new instance and assigning it to that instance you then created another instance and used an ephemeral ip and then promoted it to a static internal ip address you then created an external static ip address using the console and assigned it to a brand new instance you then created another instance using an external ephemeral ip address and promoted it to a static external ip address and you did this all using both the console and the command line so i wanted to congratulate you on a great job now before we end this demonstration i wanted to go through the steps of cleaning up any leftover resources so the first thing you want to do is delete these instances so you can select them all and go up to the top and click on delete it’s going to ask you if you want to delete the two instances yes we do click on delete and this will delete your instances and free up the external ip addresses so that you’re able to delete them and so now that the instances have been deleted i’m going to go over to the vpc network menu and i’m going to head on over to the external ip address console and here i’m able to delete the external ip addresses and so i’m going to select all of them and i’m going to go up to the top menu and click on release static address and you should get a prompt asking you if you want to delete both these addresses the answer is yes click on delete and within a few seconds these external ip addresses should be deleted and so now all that’s left to delete are the two static internal ip addresses and as i said before because there is no console to be able to view any of these static 
internal ip addresses i have to do it through the command line so i'm going to go back to my cloud shell i'm going to clear the screen and i'm going to list the ip addresses currently in my network and so here they are promoted static and static internal and so the command to delete any static ip addresses is as follows gcloud compute addresses delete then the name of the ip address that i want to delete which is promoted static and then i will need the region flag and it'll be the region of us east one and i'm going to go ahead and hit enter it's going to prompt me if i want to continue with this and i'm going to type y for yes hit enter and success it has been deleted and so just as a double check i'm going to do a quick verification and yes it has been deleted and so all that's left to delete is the static internal ip address.
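and so for reference the two cleanup commands used here written out in full would look like this using the names and region from this demo

gcloud compute addresses delete promoted-static --region=us-east1
gcloud compute addresses delete static-internal --region=us-east1

each one prompts for confirmation before the address is actually released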
and so i'm going to paste in the command gcloud compute addresses delete then the name of the ip address that i want to delete which is static dash internal along with the region flag of us east one i'm going to go ahead and hit enter y for yes to continue and success and one last verification to make sure that it's all cleared up and as you can see i have no more static ip addresses and so this concludes this demonstration on creating assigning and deleting both static internal and static external ip addresses and so again i wanted to congratulate you on a great job and so that's pretty much all i wanted to cover in this demo on creating internal and external static ip addresses so you can now mark this as complete and i'll see you in the next one welcome back in this lesson i will be diving into some network security by introducing vpc firewall rules a service used to filter incoming and outgoing network traffic based on a set of user-defined rules a concept that you should be fairly familiar with for the exam and one that comes up extremely often when working as an engineer in google cloud it is definitely an essential security layer that prevents unwanted access to your cloud infrastructure now vpc firewall rules apply to a given project and network and if you'd like you can also apply firewall rules across an organization but i will be sticking to strictly vpc firewall rules in this lesson now vpc firewall rules let you allow or deny connections to or from your vm instances based on a configuration that you specify and these rules apply to either incoming connections or outgoing connections but never both at the same time enabled vpc firewall rules are always enforced protecting your instances regardless of their configuration and operating system even if they have not started up now every vpc network functions as a distributed firewall and while firewall rules are defined at the network level connections are allowed or denied on a per instance basis so you can think of the vpc firewall rules as existing not only between your instances and other networks but also between individual instances within the same network now when you create a vpc firewall rule you specify a vpc network and a set of components that define what the rule does the components enable you to target certain types of traffic based on the traffic's protocol ports sources and destinations when you create or modify a firewall rule you can specify the instances to which it is intended to apply by using the target component of the rule now in addition to firewall rules that you create google cloud has other rules that can affect incoming or outgoing connections so for instance google cloud doesn't allow certain ip protocols such as egress traffic on tcp port 25 within a vpc network and protocols other than tcp udp icmp and gre to external ip addresses of google cloud resources are blocked google cloud always allows communication between a vm instance and its corresponding metadata server at 169.254.169.254 and this server is essential to the operation of the instance so the instance can access it regardless of any firewall rules that you configure the metadata server provides some basic services to the instance like dhcp dns resolution instance metadata and network time protocol or ntp now just as a note every network has two implied firewall rules that permit outgoing connections and block incoming connections firewall rules that you create can override these implied rules now the first implied rule is the allow egress rule and this is an egress rule
whose action is allow the destination is all ips and the priority is the lowest possible and it lets any instance send traffic to any destination except for traffic blocked by google cloud the second implied firewall rule is the deny ingress rule and this is an ingress rule whose action is deny the source is all ips and the priority is the lowest possible and it protects all instances by blocking incoming connections to them now i know we touched on this earlier on in a previous lesson but i felt the need to bring it up as these are pre-populated rules and the rules that i'm referring to are with regards to the default vpc network and as explained earlier these rules can be deleted or modified as necessary the rules as you can see here in the table allow ingress connections from any source to any instance on the network when it comes to icmp rdp on port 3389 for windows remote desktop protocol and ssh on port 22. and as well the last rule allows ingress connections for all protocols and ports among instances in the network and it permits incoming connections to vm instances from others in the same network and all of these have a rule priority of 65534 which is the second to lowest priority so breaking down firewall rules there are a few characteristics that google put in place that help define these rules and the characteristics are as follows each firewall rule applies to incoming or outgoing connections and not both firewall rules only support ipv4 connections so when specifying a source for an ingress rule or a destination for an egress rule by address you can only use an ipv4 address or an ipv4 block in cidr notation as well each firewall rule's action is either allow or deny and you cannot have both at the same time and the rule applies to connections as long as it is enforced so for example you can disable a rule for troubleshooting purposes and then enable it back again now when you create a firewall rule you must select a vpc network and while the rule is enforced at the instance level its configuration is associated with a vpc network this means you cannot share firewall rules among vpc networks including networks connected by vpc network peering or by using cloud vpn tunnels another major thing to note about firewall rules is that they are stateful and so that means when a connection is allowed through the firewall in either direction return traffic matching this connection is also allowed you cannot configure a firewall rule to deny associated response traffic the return traffic must match the five tuple of the accepted request traffic but with the source and destination addresses and ports reversed so just as a note for those who may be wondering what a five tuple is i was referring to the set of five different values that comprise a tcp ip connection and this would be source ip destination ip source port destination port and protocol google cloud associates incoming packets with corresponding outbound packets by using a connection tracking table google cloud implements connection tracking regardless of whether the protocol supports connections if a connection is allowed between a source and a target or between a target and a destination all response traffic is allowed as long as the firewall's connection tracking state is active and as well as a note a firewall rule's tracking state is considered active if at least one packet is sent every 10 minutes now along with the multiple characteristics that make up a firewall rule there are also firewall rule components that go along
here i have a screenshot from the console with the configuration components of a firewall rule and i wanted to take a moment to highlight these components for better clarity so now the first component is the network and this is the vpc network that you want the firewall rule to apply to the next one is priority which we discussed earlier and this is the numerical priority which determines whether the rule is applied as only the highest priority rule whose other components match traffic is applied and remember the lower the number the higher the priority and the higher the number the lower the priority now the next component is the direction of traffic and this is where ingress rules apply to incoming connections from specified sources to google cloud targets and egress rules apply to connections going to specified destinations from targets and the next one up is action on match and this component is either allow or deny which determines whether the rule permits or blocks the connection now a target defines which instances the rule applies to and you can specify a target by using one of the following three options the first option is all instances in the network and this does exactly what it says it applies the firewall rule to all the instances in the network the second option is instances by target tags and this is where the firewall rule applies only to instances with a matching network tag and so i know i haven’t explained it earlier but a network tag is simply a character string added to a tags field in a resource so let’s say i had a bunch of instances that were considered development i can simply put a network tag of dev on them and apply the necessary firewall rule for all the development servers holding the network tag dev and so the third option is instances by target service accounts this is where the firewall rule applies only to instances that use a specific service account and so the next component is the source filter and this is a source for ingress rules or a destination for egress rules the source parameter is only applicable to ingress rules and it must be one of the following three selections the first is source ip ranges and this is where you specify ranges of ip addresses as sources for packets either inside or outside of google cloud the second one is source tags and this is where the source instances are identified by a matching network tag and the third is source service accounts where source instances are identified by the service accounts they use you can also use service accounts to create firewall rules that are a bit more granular and so one of the last components of the firewall rule is the protocols and ports you can specify a protocol or a combination of protocols and their ports if you omit both protocols and ports the firewall rule is applicable for all traffic on any protocol and any port and so when it comes to the enforcement status of the firewall rule there is a drop down right underneath all the components where you can enable or disable the enforcement and as i said before this is a great way to enable or disable a firewall rule without having to delete it and is great for troubleshooting or to grant temporary access to any instances and unless you specify otherwise all firewall rules are enabled when they are created but you can also choose to create a rule in a disabled state and so this covers the vpc firewall rules
in all its entirety and i will be showing you how to implement vpc firewall rules along with building a custom vpc custom routes and even private google access all together in a demo following this lesson to give you some hands-on skills of putting it all into practice and so that’s pretty much all i wanted to cover when it comes to vpc firewall rules so you can now mark this lesson as complete and let’s move on to the next one where we dive in and build our custom vpc so now is a perfect time to grab a coffee or tea and whenever you’re ready join me in the console welcome back in this demonstration i want to take all the concepts that we’ve learned so far in this networking section and put it all into practice this diagram shown here is the architecture of exactly what we will be building in this demo we’re going to start by creating a custom vpc and then we’re going to create two subnets one public and one private in two separate regions we’re then going to create a cloud storage bucket with some objects in it and then we will create some instances to demonstrate access to cloud storage as well as communication between instances and finally we’re going to create some firewall rules for routing traffic to all the right places we’re also going to implement private google access and demonstrate accessibility to the files in cloud storage from the private instance without an external ip so this may be a little bit out of your comfort zone for some but don’t worry i’ll be with you every step of the way and other than creating the instances all the steps here have been covered in previous lessons now there’s a lot to get done here so whenever you’re ready join me in the console and so here we are back in the console and as you can see up here in the right hand corner i am logged in as tony bowtie ace gmail.com and currently i am logged in under project tony and so in order to start off on a clean slate i’m going to create a new project and so i’m going to simply click on the project menu drop-down and click on new project i’m going to call this project bowtie inc and i don’t have any organizations so i’m going to simply click on create and as well for those of you doing this lesson i would also recommend for you to create a brand new project so that you can start off anew again i’m going to go over to the project drop down and i’m going to select bow tie ink as the project and now that i have a fresh new project i can now create my vpc network so i’m going to go over to the left hand corner to the navigation menu and i’m going to scroll down to vpc network and so because vpc networks are tied in with the compute engine api we need to enable it before we can create any vpc networks so you can go ahead and enable this api so once this api has finished and is enabled we’ll be able to create our vpc network ok and the api has been enabled and as you can see the default vpc network has been created with a subnet in every region along with its corresponding ip address ranges and so for this demo we’re going to create a brand new vpc network along with some custom subnets and so in order to do that i’m going to go up here to the top and i’m going to click on create vpc network and so here i’m prompted with some fields to fill out so under name i’m going to think of a creative name that i can call my vpc network so i’m going to simply call it custom under description i’m going to call this custom vpc network and i’m going to move down here to subnets and because i’m creating custom subnets i’m going 
to keep it under custom under subnet creation mode and so i’m going to need a public subnet and a private subnet and you’ll be able to get the values from the text file in the github repository within the sub networks folder under networking services and so i’m going to create my public subnet first and i’m going to simply call the public subnet public for region i’m going to use us east1 and the ip address range will be 10.0.0.0 forward slash 24 and i’m going to leave private google access off and i’m going to simply click on done and now i can create the private subnet so underneath the public subnet you’ll see add subnet you can simply click on that and the name of the new subnet will be as you guessed it private under region i’m going to use us east 4 and for the ip address range be sure to use 10.0.5.0 forward slash 24 and we’re going to leave private google access off for now and we’ll be turning that on a little bit later in the demo and so you can now click on done and before we click on create we want to enable the dns api and clicking on enable will bring you to the dns api home page and you can click on enable to enable the api okay so now that we have our network configured along with our public and private subnets as well as dns being enabled we can now simply click on create but before i do that i wanted to give you some insight with regards to the command line so as i’ve shared before everything that can be done in the console can be done through the command line and so if ever you wanted to do that or you wanted to get to know the command line a little bit better after filling out all the fields with regards to creating resources in the console you will be given the option of a command line link that you can simply click on and here you will be given all the commands to create all the same resources with all the same preferences through the command line and i will be providing these commands in the lesson text so that you can familiarize yourself with the commands to use in order to build any networks using the command line but this is a great reference for you to use at any time and so i’m going to click on close and now i’m going to click on create and within a minute or two the custom vpc network will be created and ready to use okay and the custom vpc network has been created along with its public and private subnet and so just to get a little bit more insight into this custom vpc network i’m going to drill down into it and as you can see here the subnets are respectively labeled private and public along with their region ip address range gateway and private google access setting the routes as you can see here are the system generated routes that i had discussed in an earlier lesson it has both the subnet routes to their respective ip ranges along with the default route with a path to the internet as well as a path for private google access now we don’t have any firewall rules here yet but we’ll be adding those in just a few minutes
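and as promised here is roughly what that same network and subnet setup looks like from the command line, using the values from this demo (if you picked different names or ranges just swap them in):

# create the custom-mode vpc network
gcloud compute networks create custom --subnet-mode=custom --description="custom vpc network"

# create the public subnet in us-east1
gcloud compute networks subnets create public --network=custom --region=us-east1 --range=10.0.0.0/24

# create the private subnet in us-east4, leaving private google access off for now
gcloud compute networks subnets create private --network=custom --region=us-east4 --range=10.0.5.0/24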
and so now that you’ve created the vpc network with its respective subnets we’re going to head on over to cloud storage and create a bucket along with uploading the necessary files so i’m going to go again over to the navigation menu and i’m going to scroll down to storage and so as expected there are no buckets present here in cloud storage and so we’re just going to go ahead and create our first bucket by going up here to the top menu and clicking on create bucket and so here i’ve been prompted to name my bucket and for those of you who are here for the first time when it comes to naming a storage bucket the name needs to be globally unique and this means that the name has to be unique across all of the google cloud platform now don’t worry i’m going to get into further detail with this in the cloud storage lesson with all of these specific details when it comes to names storage classes and permissions and so in the meantime you can come up with a name for your bucket something that resonates with you and so for me i’m going to name my bucket bowtie inc dash file dash access and so now i’m going to simply click continue and so just as a note for those who are unable to continue it is because the name for your bucket is not globally unique so do try to find one that is now when it comes to location type i’m just going to click on region and you can keep the default location as us east1 and i’m going to leave all the other options as default and i’m going to go down to the bottom and click create and so for those of you who have created your bucket you can now upload the files and those files can be found in the github repository in the cloud storage bucket folder under networking services and so now i’m going to click on upload files and under the networking services section under cloud storage bucket you will find these three jpeg files and you can simply select them and click on open and so they are now uploaded into the bucket and so now i’m ready to move on to the next step so you should now have created the vpc network with a private and public subnet along with creating your own bucket in cloud storage and have uploaded the three jpeg files so now that this is done we can now create the instances that will have access to these files and so again i will go over to the navigation menu in the top left hand corner and scroll down to compute engine and here i will click on create and so again i will be prompted with some fields to fill out and so for this instance i’m going to first create the public instance again i’m going to get really creative and call this public dash instance under labels i’m going to add a label under key i’m going to type environment and under value i’m going to type in public i’m going to go down to the bottom and click on save and under region i’m going to select us east1 and you can leave the zone as us east 1b moving down under machine type i’m going to select the e2 micro as the machine type just because i’m being cost conscious and i want to keep the cost down and so i’m going to scroll down to identity and api access and under service account you should have the compute engine default service account already pre-selected now under access scopes i want to be able to have the proper permissions to be able to read and write to cloud storage along with read and write access to compute engine and so you can click on set access for each api and you can scroll down to compute engine click on the drop down menu and select read write and this will give the public instance the specific access that it needs to ssh into the private instance and so now i’m going to set the access for cloud storage so i’m going to scroll down to storage i’m going to click on the drop down menu and select read write and this will give the instance read write access to cloud storage scrolling down a little bit further i’m going to go to management security disks networking and sole tenancy and i’m going to click on that scroll up here just a little bit and you can click on the networking tab which will prompt you for a bunch of options
that you can configure for the networking of the instance so under network tags i want to type in public and you can click enter you can then scroll down to where it says network interfaces and click on the current interface which is the default and here it’ll open up all your options and so under network you want to click on the drop down and set it from default to custom the public subnet will automatically be propagated so you can leave it as is and you also want to make sure that your primary internal ip as well as your external ip are set to ephemeral and you can leave all the other options as default and simply click on done and again before clicking on create you can click on the command line link and it will show you all the commands needed in order to create this instance through the command line so i’m going to go ahead and close this and so i’m going to leave all the other options as default and i’m going to click on create and so now that my public instance is being created i’m going to go ahead and create my private instance using the same steps that i did for the last instance so i’m going to go ahead and click on create instance here at the top and so the first thing i’m going to be prompted for is the name of the instance and so i’m going to call this instance private dash instance and here i’m going to add a label the key being environment and the value being private i’m going to go down here to the bottom and click on save and under region i’m going to select us east 4 and you can keep the zone as the default selected under machine type we’re going to select the e2 micro and again scrolling down to the identity and api access under the access scopes for the default service account i’m going to click on the set access for each api and i’m going to scroll down to storage i’m going to click on the drop down menu and i’m going to select access for read write and for the last step i’m going to go into the networking tab under management security disks networking and soul tenancy and under network tags i’m going to give this instance a network tag of private and under network interfaces we want to edit this and change it from default over to the custom network and as expected it selected the private subnet by default and because this is going to be a private instance we are not going to give this an external ip so i’m going to click on the drop down and select none and with all the other options set as default i’m going to simply click on create and this will create my private instance along with having my public instance so just as a recap we’ve created a new custom vpc network along with a private and public subnet we’ve created a storage bucket and added some files in it to be accessed and we’ve created a private and public instance and assigning the service account on the public instance read write access to both compute engine and cloud storage along with a public ip address and assigning the service account on the private instance read write access only for cloud storage and no public ip and so this is the end of part one of this demo and this would be a great opportunity for you to get up and have a stretch get yourself a coffee or tea and whenever you’re ready you can join me in part two where we will be starting immediately from the end of part one so you can go ahead and complete this video and i will see you in part two [Music] welcome back this is part two of the custom vpc demo and we will be starting exactly where we left off from part one so with that being said 
let’s dive in and so now the last thing that needs to be done is to simply create some firewall rules and these firewall rules will give me ssh access into the public instance as well as allowing private communication from the public instance to the private instance as well as giving ssh access from the public instance to the private instance and this will allow us to access the files in the bucket from the private instance and so in order to create these firewall rules i need to go back to my vpc network so i’m going to go up to the left hand corner again to the navigation menu and scroll down to vpc network over here on the left hand menu you’ll see firewall i’m going to click on that and here you will see all the default firewall rules for the default network so for us to create some new ones for the custom vpc i’m going to go up here to the top and click on create firewall and so the first rule i want to create is for my public instance and i want to give it public access as well as ssh access and so i’m going to name this accordingly as public dash access i’m going to give this the same description it’s always a good idea to turn on logs but for this demonstration i’m going to keep them off under network i’m going to select the custom network i’m going to keep the priority at 1000 the direction of traffic will be ingress and the action on match will be allow and so here is where the target tags come into play when it comes to giving access to the network so under targets we’re going to keep it as specified target tags and under target tags you can simply type in public under source filter you can keep it under ip ranges and the source ip range will be 0.0.0.0 forward slash 0. and we’re not going to add a second source filter here so moving down to protocols and ports i’m going to check off tcp and add in port 22. and because i want to be able to ping the instance i’m going to have to add another protocol which is icmp and again as explained earlier the disable rule link will bring up the enforcement and as you can see it is enabled but if you wanted to create any firewall rules in the future and have them disabled you can do that right here but we’re gonna keep this enabled and we’re gonna simply click on create and this will create the public firewall rule for our public instance in our custom vpc network and so we’re going to now go ahead and create the private firewall rule and so i’m going to name this private dash access respectively i’m going to put the description as the same under network i’m going to select our custom network keep the priority at 1000 direction of traffic should be ingress and the action on match should be allow for target tags you can type in private and then hit enter and because i want to be able to reach the private instance from the public instance the source ip range will be the public subnet’s range which is 10.0.0.0 forward slash 24. we’re not going to add a second source filter and under protocols and ports we’re going to simply add tcp port 22 and again i want to add icmp so that i’m able to ping the instance and i’m going to click on create
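for reference the command line equivalent of these two rules would look roughly like this, using the names tags and ranges from this demo:

# public-access allows ssh and ping from anywhere to instances tagged public
gcloud compute firewall-rules create public-access \
    --network=custom --direction=INGRESS --action=ALLOW --priority=1000 \
    --target-tags=public --source-ranges=0.0.0.0/0 --rules=tcp:22,icmp

# private-access allows ssh and ping from the public subnet to instances tagged private
gcloud compute firewall-rules create private-access \
    --network=custom --direction=INGRESS --action=ALLOW --priority=1000 \
    --target-tags=private --source-ranges=10.0.0.0/24 --rules=tcp:22,icmp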
and so we now have our two firewall rules private access and public access and if i go over to the custom vpc network and i drill into it i’ll be able to see these firewall rules under the respective firewall rules tab and so now that we’ve created our vpc network along with the public and private subnet we’ve created the cloud storage bucket with the files that we need to access the instances that will access those files along with the firewall rules that will allow the proper communication we can now go ahead and test everything that we built and make sure that everything is working as expected so let’s kick things off by first logging into the public instance so you can head on over to the navigation menu and scroll down to compute engine and you can ssh into the public instance by clicking on ssh under connect and this should open up a new tab or a new window logging you in with your currently authenticated credentials okay and we are logged into our instance and i’m going to zoom in for better viewing and so just to make sure that everything is working as expected we know that our firewall rule is correct because we are able to ssh into the instance and now i want to see if i have access to my files in the bucket and so in order to do that i’m going to run the gsutil command ls for list and then gs colon forward slash forward slash along with my bucket name which is bowtie inc hyphen file hyphen access and i’m going to hit enter and as you can see i have access to all the files in the bucket and the last thing i wanted to check is if i can ping the private instance so i’m going to first clear my screen and i’m going to head on over back to the console i’m going to copy the ip address of the private instance to my clipboard and then i’m going to head back on over to my terminal and i’m going to type in ping i’m going to paste the ip address and success i am able to successfully ping the private instance from the public instance using the icmp protocol and you can hit control c to stop the ping
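to collect these in one place, here is roughly what the checks from the public instance look like as commands, including the ssh hop to the private instance that we’ll do next; the bucket name project id zone and internal ip shown here are just my demo values so be sure to swap in your own:

# list the objects in the bucket
gsutil ls gs://bowtie-inc-file-access

# ping the private instance over its internal ip (example ip, copy yours from the console)
ping 10.0.5.2

# ssh from the public instance to the private instance using its internal ip
gcloud compute ssh private-instance --project=bowtie-inc --zone=us-east4-c --internal-ip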
so now that i know that my public instance has the proper permissions to reach cloud storage as well as being able to ping my private instance i want to be able to check if i can ssh into the private instance from my public instance and so i’m going to first clear my screen and next i’m going to paste in this command in order for me to ssh into the private instance gcloud compute ssh dash dash project and my project name which is bowtie inc dash dash zone and the zone that my instance is in which is us east 4c along with the name of the instance which is private dash instance and along with the flag dash dash internal dash ip stating that i am using the internal ip in order to ssh into the instance and i’m going to hit enter and so now i’ve been prompted for a passphrase in order to secure my rsa key pair as one is being generated to log into the private instance now it’s always good practice when it comes to security to secure your key pair with a passphrase but for this demo i’m just going to leave it blank and so i’m just going to hit enter i’m going to hit enter again now i don’t want to get too deep into it but i did want to give you some context on what’s happening here so when you log into an instance on google cloud with os login disabled google manages the authorized keys file for new user accounts based on ssh keys in metadata and so the keys that are being generated and used for the first time are currently being stored within the instance metadata so now that i’m logged into my private instance i’m going to quickly clear my screen and just as a note you’ll be able to know whether or not you’re logged into your private instance by looking here at your prompt and so now i want to make sure that i can ping my public instance so i’m going to quickly type the ping command i’m going to head on over to the console i’m going to grab the ip address of the public instance i’m going to go back to my terminal and paste it in and as expected i’m able to ping my public instance from my private instance i’m just going to go ahead and hit control c to stop and i’m going to clear the screen so now we’d like to verify whether or not we have access to the files in the cloud storage bucket that we created earlier and so now i’m going to use the same command that i used in the public instance to list all the files in the cloud storage bucket so i’m going to use the gsutil command ls for list along with gs colon forward slash forward slash and the bucket name which is bowtie inc hyphen file hyphen access and i’m going to hit enter and as you can see here i’m not getting a response and the command is hanging and this is due to the fact that external access is needed in order to reach cloud storage and this instance only has an internal or private ip so accessing the files in the cloud storage bucket is not possible now in order for this instance to reach cloud storage and the set of external ip addresses used by google apis and services we can enable private google access on the subnet used by the vm’s network interface and so we’re going to go ahead and do that right now so i’m going to hit control c to stop and i’m going to go back into the console i’m going to go to the navigation menu and i’m going to scroll down to vpc network and then i’m going to drill down into the private subnet and i’m going to edit it under private google access i’m going to turn it on and i’m going to go down to the bottom and click on save and by giving this subnet private google access i will allow the private instance and any instances with private ip addresses to access any public apis such as cloud storage so now when i go back to my instance i’m going to clear the screen here and i’m going to run the gsutil command again and success we are now able to access cloud storage due to enabling private google access on the respective private subnet so i first wanted to congratulate you on making it to the end of this demo and hope that this demo has been extremely useful as this is a real life scenario that can come up and so just as a recap you’ve created a custom network with two custom subnets you’ve created a cloud storage bucket and uploaded some files to it you’ve created a public instance and a private instance and then created some firewall rules to route the traffic you then tested it all by using the command line for communication you also enabled private google access for the instance with only the internal ip to access google’s public apis so that it can access cloud storage and so again fantastic job on your part as this was a pretty complex demo and you can expect things like what you’ve experienced in this demo to pop up in your role of being a cloud engineer at any time
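one last command line reference before wrapping up: enabling private google access on the private subnet, which we just did through the console, would look roughly like this:

# enable private google access on the private subnet so internal-only vms can reach google apis
gcloud compute networks subnets update private --region=us-east4 --enable-private-ip-google-access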
so before you go be sure to delete all the resources you’ve created and again congrats on the great job so you can now mark this as complete and i’ll see you in the next one welcome back in this lesson i will be going over vpc network peering and how you can privately communicate across vpcs in the same or different organization vpc network peering and vpc peering are used interchangeably in this lesson as they mean the same thing now for instances in one vpc to communicate with an instance in another vpc they would normally route traffic via the public internet however to communicate privately between two vpcs google cloud offers a service called vpc peering and i will be going through the theory and concepts of vpc peering throughout this lesson so with that being said let’s dive in now vpc peering enables you to peer vpc networks so that workloads in different vpc networks can communicate in a private space that follows the rfc 1918 standard thus allowing private connectivity across two vpc networks traffic stays within google’s network and never traverses the public internet vpc peering gives you the flexibility of peering networks that are in the same or different projects along with being able to peer with networks in different organizations vpc peering also gives you several advantages over using external ip addresses or vpns to connect the first one is reduced network latency as all peering traffic stays within google’s high-speed network vpc peering also offers greater network security as you don’t need to have services exposed to the public internet and deal with the greater risk of having your traffic compromised and if you’re trying to achieve compliance standards for your organization vpc peering will allow you to achieve the standards that you need and finally vpc network peering reduces network costs as you save on egress costs for traffic leaving gcp so in a regular network google charges you for traffic communicating using public ips even if the traffic is within the same zone you can bypass this and save money by using internal ips to communicate and keeping the traffic within the gcp network now there are certain properties or characteristics that peered vpcs follow and i wanted to point these out for better understanding first off peered vpc networks remain administratively separate so what exactly does this mean well it means that routes firewalls vpns and other traffic management tools are administered and applied separately in each of the vpc networks so this applies to each vpc independently which also means that each side of a peering association is set up independently as well so when you connect one vpc to the other you have to go into each vpc that you are connecting to both initiate and establish the connection peering becomes active only when the configuration from both sides match this also means that each vpc can delete the peering association at any given time
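to make that two-sided setup concrete, here is a rough sketch of what it looks like with the gcloud command line; the network and project names here are just placeholders and both commands have to be run, one in each project, before the peering goes active:

# in project-a, peer network-a with network-b that lives in project-b
gcloud compute networks peerings create peering-ab \
    --network=network-a --peer-project=project-b --peer-network=network-b

# in project-b, create the matching half, otherwise the peering stays inactive
gcloud compute networks peerings create peering-ba \
    --network=network-b --peer-project=project-a --peer-network=network-a --project=project-b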
now during vpc peering the vpc peers always exchange all subnet routes you also have the option of exchanging custom routes subnet and static routes are global and dynamic routes can be regional or global a given vpc network can peer with multiple vpc networks but there is a limit that you can reach in which case you would have to reach out to google and ask for the limit to be increased now when peering with vpc networks there are certain restrictions in place that you should be aware of first off a subnet cidr range in one peered vpc network cannot overlap with a static route in another peered network this rule covers both subnet routes and static routes so when a vpc subnet is created or a subnet ip range is expanded google cloud performs a check to make sure that the new subnet range does not overlap with ip ranges of subnets in the same vpc network or in directly peered vpc networks and if it does the creation or expansion will fail google cloud also ensures that no overlapping subnet ip ranges are allowed across vpc networks that have a peered network in common and again if there is an overlap the creation or expansion will fail now speaking of routing when you create a new subnet in a peered vpc network its subnet route is exchanged automatically and vpc network peering doesn’t provide granular route controls to filter out which subnet cidr ranges are reachable across peered networks this is handled by firewall rules so to allow ingress traffic from vm instances in a peer network you must create ingress allow firewall rules by default ingress traffic to vms is blocked by the implied deny ingress rule another key point to note is that transitive peering is not supported and only directly peered networks can communicate so they have to be peered directly in this diagram network a is peered with network b and network b is peered with network c and so if one instance is trying to communicate from network a to network c this cannot be done unless network a is directly peered with network c an extremely important point to note for vpc peering another thing to note is that you cannot use a tag or service account from one peered network in the other peered network they must each have their own as again they are each independently operated as stated earlier and so the last thing that i wanted to cover is that internal dns is not accessible for compute engine in peered networks as they must use an ip to communicate and so that about covers this short yet important lesson on the theory and concepts of vpc peering and so now that we’ve covered all the theory i’m going to be taking these concepts into a demo where we will be peering two networks together and verifying the communication between them and so you can now mark this lesson as complete and whenever you’re ready join me in the console welcome back in this hands-on demonstration we’re going to go through the steps to create a peering connection from two vpcs in two separate projects as shown here in the diagram and then to verify that the connection works we’re going to create two instances one in each network and ping one instance from the other instance this demo is very similar to the custom vpc demo that you had done earlier but we are adding in another layer of complexity by adding in vpc network peering and so there’s quite a bit to do here so let’s go ahead and just dive in okay so here we are back in the console as you can see up in the top right hand corner i am logged in as tony bowties gmail.com and for this specific demo i will be using two projects both project tony and project bowtie inc and if you currently do not have two projects you can go ahead and create yourself a new project or the two projects if you have none and so i’m going to continue here with project tony and the first thing i want to do is create the two networks in the two separate projects so i’m going to go up to the navigation menu in the top left hand corner and i’m going to scroll down to vpc network here i’m going to create my first vpc network and i’m going to name this bowtie inc dash a i’m going to give it the same description and then under subnets i’m going to leave the subnet creation mode under custom under the subnet name you can call this subnet dash a i’m going to use the us east1 region and for the ip address range i’m going to use 10.0.0.0 forward slash 20 and i’m going to leave all the other options as default and i’m going to go down to the bottom and click on create
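as a command line reference, creating both of these networks and their subnets across the two projects would look roughly like this; i’m using the names and ranges from the demo and YOUR_BOWTIE_PROJECT_ID is a placeholder for your second project’s id:

# in project tony, network a with its /20 subnet in us-east1
gcloud compute networks create bowtie-inc-a --subnet-mode=custom
gcloud compute networks subnets create subnet-a --network=bowtie-inc-a --region=us-east1 --range=10.0.0.0/20

# in the second project, network b with its /20 subnet in us-east4
gcloud compute networks create bowtie-inc-b --subnet-mode=custom --project=YOUR_BOWTIE_PROJECT_ID
gcloud compute networks subnets create subnet-b --network=bowtie-inc-b --region=us-east4 --range=10.4.0.0/20 --project=YOUR_BOWTIE_PROJECT_ID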
now as this network is being created i’m going to go over to the project bowtie inc and i’m going to create the vpc network there so under name i’m going to call this bowtie inc b and under description i’m going to use the same under subnets i’m going to keep subnet creation mode as custom and under new subnet i’m going to call this subnet subnet b the region will be us east4 and the ip address range will be 10.4.0.0 forward slash 20. you can leave all the other options as default and scroll down to the bottom and click on create as this network is being created i’m going to go back to project tony and i’m going to create the firewall rule for bowtie inc dash a and this firewall rule as explained in the last lesson will allow communication from one instance to the other and so i’m going to click on create firewall and under name i’m going to call this project tony dash a under description i’m going to use the same under network i’m going to choose the source network which will be bowtie inc dash a priority i’m going to keep at 1000 direction of traffic should be ingress and action on match should be allow under targets i’m going to select all instances in the network and under source filter i’m going to keep ip ranges selected and the source ip range specifically for this demo is going to be 0.0.0.0 forward slash 0. and again this is specifically used for this demo and should never be used in a production-like environment in production you should only use the source ip ranges that you are communicating with and under protocols and ports because i need to log into the instance to be able to ping the other instance i’m going to have to open up tcp on port 22. under other protocols you can add icmp and this will allow the ping command to be used i’m going to leave all the other options as default and i’m going to click on create and now that this firewall rule has been created i need to go back over to project bowtie inc and create the firewall rule there as well i’m going to call this firewall rule bowtie inc dash b i’m going to give it the same description under network i’m going to select bowtie inc dash b i’m going to keep the priority as 1000 and the direction of traffic should be ingress as well the action on match should be allow scrolling down under targets i’m going to select all instances in the network and again under source filter i’m going to keep ip ranges selected and under source ip ranges i’m going to enter in 0.0.0.0 forward slash 0.
and under protocols and ports i’m going to select tcp with port 22 as well under other protocols i’m going to type in icmp i’m going to leave everything else as default and i’m going to click on create now once you’ve created both networks and have created both firewall rules you can now start creating the instances so because i’m already in project bowtie inc i’m going to go to the left-hand navigation menu and i’m going to scroll down to compute engine and create my instance so i’m just going to click on create and to keep with the naming convention i’m going to call this instance instance b i’m not going to add any labels for now under region i’m going to choose us east 4 and you can leave the zone as the default selection and under machine type i’m going to select e2 micro and i’m going to scroll down to the bottom and i’m going to click on management security disks networking and sold tenancy so that i’m able to go into the networking tab to change the network on the default network interface so i’m going to click on the default network interface and under network i’m going to select bowtie inc b and the subnet has already been selected for me and then i’m going to scroll down click on done and i’m going to leave all the other options as default and click on create and so as this is creating i’m going to go over to project tony and i’m going to create my instance there and i’m going to name this instance instance a under region i am going to select us east1 you can leave the zone as the default selected under machine type i’m going to select e2 micro and scrolling down here to the bottom i’m going to go into the networking tab under management security disks networking and soul and here i’m going to edit the network interface and change it from the default network to bow tie ink dash a and as you can see the subnet has been automatically selected for me so now i can just simply click on done i’m going to leave all the other options as default and i’m going to click on create so just as a recap we’ve created two separate networks in two separate projects along with its corresponding subnets and the firewall rules along with creating an instance in each network and so now that we have both environments set up it’s now time to create the vbc peering connection and so because i’m in project tony i’m going to start off with this project and i’m going to go up to the navigation menu and scroll down to vpc network and under vpc network on the left hand menu you’re going to click on vpc network peering and through the interface shown here we’ll be able to create our vpc network peering so now you’re going to click on create connection and i’m prompted with some information that i will need and because we are connecting to another vpc in another project you’re going to need the project id as well as the name of the vpc network you want to peer with and just as explained in the earlier lesson the subnet ip ranges in both networks cannot overlap so please make sure that if you are using ip ranges outside of the ones that are given for this demonstration the ip ranges that you are using do not overlap so once you have that information you can then click continue and so here you will be prompted with some fields to fill out with the information that you were asked to collect in the previous screen and so since we have that information already we can go ahead and start filling in the fields so i’m going to call this peering connection peering a b and under vpc network i’m going to select bow tie 
ink dash a under peered vpc network we’re going to select the other project which should be bowtie inc and the vpc network name will be bow tie inc dash b and i’m going to leave all the other options as default and so under vpc network name you will see exchange custom routes and here i can select to import and export custom routes that i have previously created so any special routes that i have created before the actual peering connection i can bring them over to the other network to satisfy my requirements and so i’m not going to do that right now i’m going to close this up and i’m going to simply click on create and so this is finished creating and is marked as inactive and this is because the corresponding peering connection in project bowtie has yet to be configured the status will change to a green check mark in both networks and marked as active once they are connected if this status remains as inactive then you should recheck your configuration and edit it accordingly so now i’m going to head on over to project bowtie inc and i’m going to create the corresponding peering connection i’m going to click on create connection once you have your project id and the vpc network you can click on continue and for the name of this peering connection i’m going to call this peering dash ba respectively under vpc network i’m going to select bowtie inc b and under peered vpc network i’m going to select in another project here you want to type in your project id for me i’m going to paste in my project tony project id and under vpc network name i’m going to type in bowtie inc a and i’m going to leave all the other options as default and i’m going to click on create and so now that we’ve established connections on each of the peering connections in each vpc if the information that we’ve entered is correct then we should receive a green check mark stating that the peering connection is connected and success here we have status as active and if i head on over to project tony i should have the same green check mark under status for the peering connection and as expected the status has a green check mark and is marked as active so now in order to do the pairing connectivity test i’m going to need to grab the internal ip of the instance in the other network that resides in project bowtie and so because it doesn’t matter which instance i log into as both of them have ssh and ping access i’m going to simply go over to the navigation menu i’m going to head on over to compute engine and i’m going to record the internal ip of instance a and now i’m going to head over to project bowtie and log into instance b and ping instance a and so in order to ssh into this instance i’m going to click on the ssh button under connect and it should open a new browser tab for me logging me into the instance okay i’m logged in here and i’m going to zoom in for better viewing and so now i’m going to run a ping command against instance a using the internal ip that i had copied earlier and i’m going to hit enter and as you can see ping is working and so now we can confirm that the vpc peering connection is established and the two instances in the different vpc networks are communicating over their private ips and you can go ahead and hit control c to stop the ping and so just as a recap you’ve created two separate vpc networks with their own separate subnets in two separate projects you’ve created the necessary firewall rules in each of these networks along with creating instances in each of those networks you then established a 
vpc peering connection establishing the configuration in each vpc you then did a connectivity test by logging into one of the instances and pinging the other instance and so i hope this helps cement the theory of vpc peering that you learned in the previous lesson and has given you some context when it comes to configuring each end of the peering connection so i wanted to take a moment to congratulate you on completing this demo and so all that’s left now is to clean up all the resources that we created throughout this demo and you can start by selecting the instances and deleting them in each network as well as the firewall rules and the networks themselves i’m going to go over to project tony and i’m going to do the same thing there and so you can do exactly what you did with the last instance here you can select it click on delete and delete the instance and so next we’re going to delete the peering connection so we’re going to go up to the navigation menu we’re going to scroll down to vpc network and on the left hand menu we’re going to scroll down to vpc network peering and so we’re going to select the peering connection we’re going to go to the top and click on delete and then delete the peering connection and so now we’re going to delete the firewall rule so we’re going to go up to firewall we’re going to select the firewall rule at the top we’re going to click delete and then delete the firewall rule and last but not least we want to delete the vpc network that we created so we’re going to go up to vpc networks we’re going to drill down into the custom vpc up at the top we’re going to click on delete vpc network and then we’re going to click on delete and so now that we’ve deleted all the resources in project tony we’re going to go back over to our second project project bowtie and do the same thing and so we’re first going to start off with the vpc peering connection so we’re going to go over to vpc network peering we’re going to select the peering connection we’re gonna click on delete at the top and delete the peering connection next we’re gonna go into firewall we’re gonna select the firewall rule go up to the top and click on delete and then delete the firewall rule and finally we’re gonna go over to vpc networks we’re going to drill down into the custom network we’re going to click on delete vpc network at the top and delete the vpc network and so now that you’ve successfully deleted all your resources you can now mark this lesson as complete and i’ll see you in the next one and congrats again on the great job of completing this demo [Music] welcome back and in this lesson i’m going to be discussing the concepts and terminology of shared vpcs i’m also going to go into some detailed use cases and how shared vpcs would be used in different scenarios so with that being said let’s dive in now when a vpc is created it is usually tied to a specific project now what happens when you want to share resources across different projects but still have separate billing and access within the projects themselves this is where shared vpcs come into play shared vpcs allow an organization to connect resources from multiple projects to a common vpc network so that they can communicate with each other securely and efficiently using internal ips from that network when you use shared vpcs you designate a project as a host project and attach one or more other service projects to it the vpc networks in the host project are considered the shared vpc networks
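as a rough sketch of how that designation is done from the command line (the project ids here are placeholders, and enabling a host project requires the shared vpc admin role, which is typically granted at the organization level):

# enable a project as a shared vpc host project
gcloud compute shared-vpc enable HOST_PROJECT_ID

# attach a service project to that host project
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID --host-project=HOST_PROJECT_ID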
so just as a reminder a project that participates in a shared vpc is either a host project or a service project a host project can contain one or more shared vpc networks a service project is any project that has been attached to a host project by a shared vpc admin and this attachment allows it to participate in the shared vpc and just as a note a project cannot be both a host and a service project simultaneously it has to be one or the other and you can create and use multiple host projects however each service project can only be attached to a single host project it is also a common practice to have multiple service projects administered by different departments or teams in the organization and so just for clarity for those who are wondering a project that does not participate in a shared vpc is called a standalone project and this is to emphasize that it is neither a host project nor a service project now when it comes to administering these shared vpcs we should be adhering to the principle of least privilege and only assigning the necessary access needed to specific users so here i’ve broken down the roles that are needed to enable and administer shared vpcs a shared vpc admin has the permissions to enable host projects attach service projects to host projects and delegate access to some or all of the subnets in shared vpc networks to service project admins a shared vpc admin for a given host project is typically its project owner as well and when defining each service project admin a shared vpc admin can grant permission to use the whole host project or just some subnets and so when it comes to service project admins there are two separate levels of permissions that can be applied the first is project level permissions where a service project admin can be defined to have permission to use all subnets in the host project and when it comes to subnet level permissions a service project admin can be granted a more restrictive set of permissions to use only some subnets now i wanted to move into some use cases which will give you a bit more context on how shared vpcs are used in specific environments illustrated here is a simple shared vpc scenario here a host project has been created and two service projects have been attached to it the service project admins in service project a can be configured to access all or some of the subnets in the shared vpc network a service project admin with at least subnet level permissions to the 10.0.2.0 forward slash 24 subnet has created vm1 in a zone located in the us west1 region this instance receives its internal ip address 10.0.2.15 from the 10.0.2.0 forward slash 24 cidr block now service project admins in service project b can be configured to access all or some of the subnets in the shared vpc network a service project admin with at least subnet level permissions to the 10.10.4.0 forward slash 24 subnet has created vm2 in a zone located in the us central1 region this instance receives its internal ip address 10.10.4.1 from the 10.10.4.0 forward slash 24 cidr block and of course the standalone project does not participate in the shared vpc at all as it is neither a host nor a service project and the last thing to note instances in service projects attached to a host project using the same shared vpc network can communicate with one another using either ephemeral or reserved static internal ip addresses and i will be covering both ephemeral and static ip addresses in a later section under compute engine external ip addresses defined in the host project are
only usable by resources in that project they are not available for use in service projects moving on to the next use case is a multiple hosts project for this use case an organization is using two separate host projects development and production and each host project has two service projects attached to them both host projects have one shared vpc network with subnets configured to use the same cider ranges both the testing and production networks have been purposely configured in the same way so this way when you work with resources tied to a subnet range it will automatically translate over from one environment to the other moving on to the next use case is the hybrid environment now in this use case the organization has a single host project with a single shared vpc network the shared vpc network is connected via cloud vpn to an on-premises network some services and applications are hosted in gcp while others are kept on premises and this way separate teams can manage each of their own service projects and each project has no permissions to the other service projects as well each service project can also be billed separately subnet level or project level permissions have been granted to the necessary service project admins so they can create instances that use the shared vpc network and again instances in these service projects can be configured to communicate with internal services such as database or directory servers located on premises and finally the last use case is a two-tier web service here an organization has a web service that is separated into two tiers and different teams manage each tier the tier one service project represents the externally facing component behind an http or https load balancer the tier 2 service project represents an internal service upon which tier 1 relies on and it is balanced using an internal tcp or udp load balancer the shared vpc allows mapping of each tier of the web service to different projects so that they can be managed by different teams while sharing a common vpc network to host resources that are needed for both tiers now we cover quite a bit in this lesson when it comes to all the concepts of shared vpcs we covered both host and service projects and the roles that they play and their limitations we also went over the different roles that are needed to administrate these shared vpcs and we went over different use cases on how to use shared vpcs for different scenarios and so that about covers everything i wanted to discuss in this lesson so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i’m going to be discussing vpc flow logs flow logs is an essential tool for monitoring and analyzing traffic coming in and going out of vpcs from vm instances flow logs are essential to know for the exam as you should know the capabilities and use cases and so with that being said let’s dive in so vpc flow logs records a sample of network flows sent from and received by vm instances including instances used as google kubernetes engine nodes these logs can be used for network monitoring forensics real-time security analysis and expense optimization when you enable vpc flow logs you enable for all vms in a subnet so basically you would be enabling vpc flow logs on a subnet by subnet basis flow logs are aggregated by connection from compute engine vms and exported in real time these logs can be exported to cloud logging previously known as stackdriver for 30 days if logs need to be stored for 
longer than 30 days they can be exported to a cloud storage bucket for longer term storage and then read and queried by cloud logging google cloud samples packets that leave and enter a vm to generate flow logs now not every packet is captured into its own log record about one out of every 10 packets is captured but this sampling rate might be lower depending on the vm’s load and just as a note you cannot adjust this rate this rate is locked by google cloud and cannot be changed in any way and because vpc flow logs do not capture every packet it compensates for missed packets by interpolating from the captured
packets now there are many different use cases for vpc flow logs and i wanted to take a quick minute to go over them the first one i wanted to mention is network monitoring vpc flow logs provide you with real-time visibility into network throughput and performance so you can monitor the vpc network perform network diagnostics understand traffic changes and help forecast capacity for capacity planning you can also analyze network usage with vpc flow logs and you can analyze the network flows for traffic between regions and zones or traffic to specific countries on the internet and based on the analysis you can optimize your network traffic expenses now a great use case for vpc flow logs is network forensics so for example if an incident occurs you can examine which ips talked with whom and when and you can also look at any compromised ips by analyzing all the incoming and outgoing network flows and lastly vpc flow logs can be used for real-time security analysis you can leverage the real-time streaming apis using pub sub and integrate them with a siem or security information and event management system like splunk rapid7 or logrhythm and this is a very common way to add an extra layer of security to your currently existing environment as well as a great way to meet any compliance standards that are needed for your organization now vpc flow logs are recorded in a specific format log records contain base fields which are the core fields of every log record and metadata fields that add additional information metadata fields may be omitted to save storage costs but base fields are always included and cannot be omitted some log fields are in a multi-field format with more than one piece of data in a given field for example the connection field that you see in the base fields is of the ip details format which contains the source and destination ip address and port plus the protocol in a single field flows that have an endpoint in a gke cluster can be annotated with gke annotations which can include details of the cluster pod and service of the endpoint gke annotations are only available with a custom configuration of metadata fields now when you enable vpc flow logs you can set a filter based on both base and metadata fields so that only logs that match the filter are preserved all other logs are discarded before being written to logging which saves you money and reduces the time needed to find the information you’re looking for shown here is a sample from the console in both the classic logs viewer as well as the logs viewer in preview and so in the classic logs viewer you can simply select the sub network from the first pull down menu and from the second pull down menu you can select compute.googleapis.com forward slash vpc underscore flows and this will give you the information that you need to pull up all your vpc flow logs in the logs viewer preview it is done in a similar way but the query is shown here in the query builder and can be adjusted accordingly pulling up any vpc flow logs must be done within the console when viewing them in google cloud
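as a rough command line reference, turning flow logs on for a subnet and reading the resulting entries could look something like this; the subnet name and region are placeholders:

# turn on flow logs for an existing subnet
gcloud compute networks subnets update SUBNET_NAME --region=REGION --enable-flow-logs

# read recent vpc flow log entries for the project from cloud shell or the gcloud cli
gcloud logging read 'resource.type="gce_subnetwork" AND log_id("compute.googleapis.com/vpc_flows")' --limit=5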
field it shows the five tuple that describes this connection which you can clearly see up here at the top and if i were to go further down and expand more of these fields i would find more information that could help me better analyze more logging info for my given problem that i am trying to solve now i didn’t want to go too deep into logging as i will be diving into a complete section on its own in a later section of the course but i did want you to get a feel for what type of data vpc flow logs can give you and how it can help you in your specific use case as well as on the exam and so that’s pretty much all i wanted to cover with regards to vpc flow logs so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this lesson i’m going to cover a high-level overview of a basic foundational service that supports the backbone of the internet as we know it today this foundation is called dns or the domain name system dns is used widely in google cloud from mostly an infrastructure perspective and is used in pretty much any other cloud environment or computer network on the planet now there is quite a bit to cover in this lesson with regards to dns so with that being said let’s dive in now dns or domain name system is a global decentralized distributed database that lets you store ip addresses and other data and look them up by name this system uses human readable names like google.com and translates it into a language that computers understand which are numeric ip addresses for example humans access information online through a domain name like google.com computers use ip addresses to access information online like 172.217. now whether you type google.com or the ip address into a web browser both will connect to google.com dns translates the domain name to an ip address so that the web browser knows where to connect to and we know what to enter into the web browser through dns you can connect a domain name to web hosting mail and other services now getting a bit deeper into it as ip addresses are at the core of communicating between devices on the internet they are hard to memorize and can change often even for the same service to get around these problems we gave names to ip addresses for example when it comes to our computer communicating with http://www.google.com it will use the dns system to do this now in the dns database contains the information needed to convert the http://www.google.com domain name to the ip address and this piece of information is stored in a logical container called a zone the way that the zone is stored is through what’s commonly known as a zone file now within this zone file is a dns record which links the name www and the ip address that your laptop needs to communicate with the specific website and this zone file is hosted by what’s known as a name server or ns server for short and i will be going into further detail on this in just a minute so in short if you can query the zone for the record http://www.google.com then your computer can communicate with the web server and dns is what makes it all happen now i wanted to go into a bit of history of how dns came about so in early computer networks a simple text file called a host file was created that mapped hostnames to ip addresses and this enabled people to refer to other computers by the name and their computers translated that name to an ip address when it needed to communicate with it the problem is as network sizes increased the host file approach became 
impractical due to the fact that it needed to be stored on each computer as each computer would have to resolve the same host names as well updates were difficult to manage as all of the computers would need to be given an updated file all in all this system was not scalable now to overcome these and other limitations the dns system was developed and the dns system essentially provided for a way to organize the names using a domain name structure it also provided a dynamic system for protocols services and methods for storing updating and retrieving ip addresses for host computers now that i’ve covered what dns is and why we use it i wanted to dive into the structure of the dns system now the structure all begins with a dot the root if you will and this can be found after every domain name that you type into your browser you will almost never see it and this is because your browser will automatically put it in without your knowing you can try it with any domain in any browser and you will almost always come up with the same result this dot is put in for you and will provide the route for you and this is where we start to break down the dns system now the domain name space consists of a hierarchical data structure like the one you have on your computer each node has a label and zero or more resource records which hold information associated with the domain name the domain name itself consists of the label concatenated with the name of its parent node on the right separated by a dot so when it comes to dns the domain name is always assembled from right to left this hierarchy or tree is subdivided into zones beginning at the root zone a dns zone may consist of only one domain or may consist of many domains and sub domains depending on the administrative choices of the zone manager now getting right into it the root server is the first step in translating human readable hostnames into ip addresses the root domain is comprised of 13 dns systems dispersed around the world known collectively as the dns root servers they are indicated by the letters a through m operated by 12 organizations such as verisign cogent and nasa while there are 13 ip addresses that represent these systems there are actually more than 13 servers some of the ip addresses are actually a cluster of dns servers and so each of these dns servers also consists of the root zone file which contains the address of the authoritative name server for each top level domain and because this is such a big undertaking to keep updated iana or the internet assigned numbers authority was appointed as the authority that manages and administrates this file and i will include a link in the lesson text for those of you who are looking to dive deeper into the contents of this root zone file as well as getting to know a little bit more about the iana organization now while the dns root servers establish the hierarchy most of the name resolution process is delegated to other dns servers so just below the dns route in the hierarchy are the top level domain servers also known as tld for short the top level domain takes the tld provided in the user’s query for example http://www.google and provides details for the dot-com tld name server the companies that administer these domains are named registries and they operate the authoritative name servers for these top level domains for example verisign is the registry for the dot com top level domain over a hundred million domains have been registered in the dot com top level domain and these top level dns 
servers handle top level domains such as com dot org dot net and dot io and this can also be referred to as the gtld which is the general top level domains and the cctld which is the country code top level domain like dot ca for canada dot uk for the united kingdom and dot it for italy the top level dns servers delegate to thousands of second level dns servers now second level domain names are sold to companies and other organizations and over 900 accredited registrars register and manage the second level domains in the dot com domain for end users the second level of this structure is comprised of millions of domain names second level dns servers can further delegate the zone but most commonly store the individual host records for a domain name this is the server at the bottom of the dns lookup chain where you would typically find resource records and it is these resource records that maps services and host names to ip addresses and will respond with the queried resource record ultimately allowing the web browser making the request to reach the ip address needed to access a website or other web resources now there is one more concept that i wanted to cover before we move on and this is the sub domain now some of you have noticed and wondered where does the sub domain come into play with regards to the dns structure well this is a resource record that falls under the second level domain and in dns hierarchy a sub domain is a domain that is a part of another main domain but i wanted to put it in here just to give you an understanding of where subdomains would fall so now that we understand how dns is structured i wanted to go through the breakdown of the data flow of dns to give you some better contacts now there are eight steps in a dns lookup first we start off with the dns client which is shown here as tony bowtie’s laptop and this is a client device which could also be a phone or a tablet and is configured with software to send name resolution queries to a dns server so when a client needs to resolve a remote host name into its ip address in most cases it sends a request to the dns recursive resolver which returns the ip address of the remote host to the client a recursive resolver is a dns server that is configured to query other dns servers until it finds the answer to the question it will either return the answer or an error message to the client if it cannot answer the query and the query will eventually be passed off to the dns client the recursive resolver in essence acts as the middle man between a client and a dns name server which is usually the internet service provider a service carrier or a corporate network now to make sure that a resolver is able to properly run dns a root hints file is supplied with almost every operating system and this file holds the ip addresses for the root name servers this also includes the dns resolver but in case it is unable to answer the query the client will be able to still make the query to the dns name servers now after receiving a dns query from a client this recursive resolver will either respond with cache data or send a request to a root name server and in this case the resolver queries a dns root name server the root server then responds to the resolver with the address of a top level domain or tld dns server such as com or dot net which stores the information for its domains now when searching for google.com the request is pointed towards the dot-com tld so naturally the resolver then makes a request to the com tld then the tld name 
server then responds with the ip address of the domain’s name server google.com and lastly the resolver then sends a query to the domain’s name server the ip address for google.com is then returned to the resolver from the name server this ip address is cached for a period of time determined by the google.com name server and this process is so that a future request for this hostname could be resolved from its cache rather than performing the entire process from beginning to end and so for those of you who are unaware cache is a component that stores data so that future requests for that data can be served faster the purpose of this caching is to temporarily store data in a location that results in improvements in performance and reliability for data requests dns caching involves storing the data closer to the requesting client so that the dns query can be resolved earlier and additional queries further down the dns lookup chain can be avoided thus improving load times dns data can be cached in a variety of locations down the chain each of which will store dns records for a set amount of time determined by a time to live also known as ttl for short for that domain record a high ttl for a domain record means that local dns resolvers will cache responses for longer and give quicker responses however making changes to dns records can take longer due to the need to wait for all cached records to expire alternatively domain records with low ttls can change much more quickly but dns resolvers will need to refresh their records more often and so in this final step the dns resolver then responds to the web browser with the ip address of the domain requested initially and once these eight steps of the dns lookup have returned the ip address for www.google.com the browser is able to make the request for the webpage and so the browser will reach out to the ip address of the server and request the web page which will be loaded up in the browser
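if you’d like to watch this root to tld to authoritative chain happen for yourself one easy option is the dig utility from the dnsutils or bind-utils package and what follows is a minimal sketch assuming dig is installed on your machine

```
# follow the full delegation chain from the root servers down to
# the authoritative name servers for www.google.com, printing the
# referrals from the root zone and the .com tld along the way
dig +trace www.google.com

# ask your configured recursive resolver directly and show only the
# answer, including the ttl that controls how long the record stays cached
dig www.google.com A +noall +answer
```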
now i know this probably has been a review for those who are a bit more advanced when it comes to understanding dns but for others who are fairly new to the underpinnings of dns i hope this has given you a basic understanding of what it is why we use it and how it works moving forward in the course i will be discussing dns with regards to different services and the needed resource records within zones that are used by these given services and so that’s pretty much all i wanted to cover when it comes to the fundamentals of dns so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this lesson i’m going to be diving into dns record types now dns resource records are the basic information elements of the domain name system they are entries in the dns database which provide information about hosts these records are physically stored in the zone files on the dns server this lesson will go through some of the most commonly used dns records that we will be coming across throughout this course so with that being said let’s dive in now the first record that i wanted to touch on are the name server records also known as ns records for short this record identifies which dns server contains the current records for a domain these servers are usually found at a registrar internet service provider or hosting company ns records are created to identify the name server used for each domain name within a given zone in this example we have the dot co zone that will have multiple name server records for bowtieinc.co now these name server records are how the dot co delegation happens for bowtieinc.co and they point at servers that host the bowtieinc.co zone that is managed by bowtie inc and the flow shown here of the query starts from the root zone going to the dot co zone where the record lies for the name servers for bowtieinc.co and flows down to the bowtieinc.co zone that contains all the necessary records for bowtieinc.co the next record that i wanted to touch on are the a and aaaa records and these are short for address records for ipv4 and ipv6 ip addresses respectively and this record points a domain name to an ip address for example when you type www.bowtieinc.co in a web browser the dns system will translate that domain name to the ip address of 52.54.92.195 using the a record information stored in the bowtieinc.co dns zone file the a record links a website’s domain name to an ipv4 address that points to the server where the website’s files live now when it comes to an aaaa record this links a website’s domain to an ipv6 address that points to the same server where the website’s files live a records are the simplest type of dns records and one of the primary records used in dns servers you can do a lot with a records including using multiple a records for the same domain in order to provide redundancy the same can be said for aaaa records additionally multiple domains could point to the same address in which case each would have its own a or aaaa record pointing to that same ip address moving on to cname records a cname record short for canonical name record is a type of resource record that maps one domain name to another this can be really convenient when running multiple services like an ftp server and an e-commerce server each running on different ports from a single ip address you can for example point ftp.bowtieinc.co and shop.bowtieinc.co to the dns entry for bowtieinc.co which in turn has an a record which points to the ip address so if the ip address ever changes you only have to change the record in one place in the dns a record for bowtieinc.co and just as a note cname records must always point to another domain name and never directly to an ip address next up are txt records a text record or txt for short is a type of resource record that provides text information to sources outside your domain that can be used for a number of arbitrary purposes the record’s value can be either human or machine readable text in many cases txt records are used to verify domain ownership or even to provide human readable information about a server a network or a data center it is also often used in a more structured fashion to record small amounts of machine readable data into the dns system a domain may have multiple txt records associated with it provided the dns server implementation supports this each record can in turn have one or more character strings in this example google wants to verify the bowtieinc.co domain so that g suite can be set up and needs to verify ownership of the domain to google by creating a txt record and adding it to the zone google will then supply a txt verification record to add to the domain host’s dns records and start to scan for the txt record to verify the domain the supplied txt record is then added by the domain administrator and behind the scenes google is doing a verification check at timed intervals when google finally sees the record exists the domain ownership is confirmed and g suite can be enabled for the domain and this is a typical example of how txt records are used
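here is a minimal sketch of how you could query each of these record types yourself with dig keeping in mind that bowtieinc.co is the course’s fictional example domain so you would substitute a domain that actually publishes these records in order to see real answers

```
# a and aaaa records: the ipv4 and ipv6 addresses behind a name
dig www.bowtieinc.co A    +noall +answer
dig www.bowtieinc.co AAAA +noall +answer

# cname record: an alias such as shop.bowtieinc.co that points at
# another name rather than at an ip address
dig shop.bowtieinc.co CNAME +noall +answer

# txt records: arbitrary text, commonly used for domain verification
# and spf entries
dig bowtieinc.co TXT +noall +answer
```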
now moving on to mx records a dns mx record also known as the mail exchange record is the resource record that directs email to a mail server the mx record indicates how email messages should be routed and to which server mail should go to like cname records an mx record must always point to another domain now mx records consist of two parts the priority and the domain name the priority are the numbers before the domains for these mx records and indicate the preference of the order in which the mail server should be used the lower the preference number the higher the priority so in this example laura is emailing tony bowtie at tony at bowtieinc.co the mx records are part of this process as dns needs to know where to send the mail to and will look at the domain attached to the email address which is bowtieinc.co so the dns client will run a regular dns query by first going to the root then to the co tld and finally to bowtieinc.co it will then receive the mx records which in this example are two of them the first one being mail representing mail.bowtieinc.co and then the second one is a different mail server outside the current domain and in this case is a google mail server of aspmx.l.google.com and this is a fully qualified domain name as the dot on the right of this record suggests so here the server will always try mail.bowtieinc.co first because 5 is lower than 10 and this will give mail.bowtieinc.co the higher priority in the event of a message send failure the server will default to aspmx.l.google.com if both values are the same then it would be load balanced across both servers whichever is used the server gets the result of the query back and it uses this to connect to the mail server for bowtieinc.co via the smtp protocol and it uses this protocol to deliver all email and this is how mx records are used for email the next record i wanted to cover are the pointer records also known as ptr records for short and this provides the domain name associated with an ip address so a dns pointer record is exactly the opposite of the a record which provides the ip address associated with the domain name dns pointer records are used in reverse dns lookups as we discussed earlier when a user attempts to reach a domain name in their browser a dns lookup occurs matching the domain name to the ip address a reverse dns lookup is the opposite of this process and it is a query that starts with the ip address and looks up the domain name while dns a records are stored under the given domain name dns pointer records are stored under the ip address reversed and ending in .in-addr.arpa so in this example the pointer record for the ip address 52.54.92.195 would be stored under 195.92.54.52.in-addr.arpa ipv6 addresses are constructed differently from ipv4 addresses and ipv6 pointer records exist in a different namespace within .arpa ipv6 pointer records are stored under the ipv6 address reversed and converted into 4-bit sections as opposed to 8-bit sections as in ipv4 and as well the domain .ip6.arpa is added at the end pointer records are used most commonly in reverse dns lookups for anti-spam troubleshooting email delivery issues and logging
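a quick sketch of querying both of these with dig the address here is the lesson’s example and may not actually have a ptr record published so swap in any public ip you want to check

```
# mx records: the mail servers for a domain and their priorities
# (the lower number is tried first)
dig bowtieinc.co MX +noall +answer

# ptr record: dig -x builds the reversed in-addr.arpa query
# (195.92.54.52.in-addr.arpa) for you and returns the name
dig -x 52.54.92.195 +noall +answer
```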
and so the last record that i wanted to cover are the soa records also known as the start of authority records and this resource record is created for you when you create your managed zone and specifies the authoritative information including global parameters about a dns zone the soa record stores important information about a domain or zone such as the email address of the administrator when the domain was last updated and how long the server should wait between refreshes every dns zone registered must have an soa record as per rfc 1035 and there is exactly one soa record per zone the soa record contains the core information about your zone so it is not possible for your zone to work without that information and i will include a link in the lesson text for those who are interested in diving deeper and understanding all the information that is covered under these soa records a properly optimized and updated soa record can reduce bandwidth between name servers increase the speed of website access and ensure the site is alive even when the primary dns server is down
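as a small companion to this a sketch of pulling the soa and ns records for a zone with dig again using a real public domain since the course domain is fictional

```
# soa record: one per zone, holding the primary name server, the
# administrator mailbox and the refresh/retry/expire and negative
# caching timers for the zone
dig google.com SOA +noall +answer

# ns records for the same zone, the name servers the soa record describes
dig google.com NS +noall +answer
```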
and so that about covers everything that i wanted to discuss when it comes to resource records within dns so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this lesson i’m going to be covering network address translation also known as nat for short this is a common process used in home business and any cloud networks that you will encounter knowing and understanding nat will help you understand why you would use it and what makes it such a necessary process now there’s quite a bit to cover here so with that being said let’s dive in now at a high level nat is a way to map multiple local private ip addresses to a public ip address before transferring the information this is done by altering the network address data in the ip header of the data packet while traveling through a network towards the destination as packets pass through a nat device either the source or destination ip address is changed then packets returning in the other direction are translated back to the original addresses and this is a process that is typically used in most home routers that are provided by your internet service provider now originally nat was designed to deal with the scarcity of free ipv4 addresses increasing the number of computers that can operate off a single publicly routable ip address and so because devices in the private ip space such as 192.168.0.0 cannot traverse the public internet nat is needed for those devices to communicate with the public internet now ipv6 was designed to overcome the ipv4 shortage and has tons of available addresses and therefore there is no real need for nat when it comes to ipv6 now nat has an additional benefit of adding a layer of security and privacy by hiding the ip address of your devices from the outside world and only allowing packets to be sent and received from the originating private device and so this is a high level of what nat is now there are multiple types of nat that i will be covering which at a high level do the same thing which is translate private ip addresses to public ip addresses yet different types of nat handle the process differently so first we have static nat which maps a single private ip address to a public ip address so a one-to-one mapping that gives the device with the private ip address access to the public internet in both directions this is commonly used where one specific device with a private address needs access to the public internet the next type of nat is dynamic nat and this is similar to static nat but doesn’t hold the same static allocation a private ip address space is mapped to a pool of public ip addresses which are allocated randomly as needed when the ip address is no longer needed the ip address is returned back to the pool ready to be used by another device this method is commonly used where multiple internal hosts with private ip addresses are sharing an equal or fewer number of public ip addresses and is designed to be an efficient use of public ips and finally there is port address translation or pat where multiple private ip addresses are translated using a single public ip address and a specific port and this is probably what your home router is using and will cover all the devices you use in your home network this method uses ports to help distinguish individual devices and is also the method that is used for cloud nat in google cloud which i will be covering in a later lesson and so i wanted to get into a bit more detail on how these methods work starting with static nat now to set the stage for static nat i’m going to start off with a private network here on the left and the public ip space here on the right and the router or nat device in the middle in this example there is a server on the left that needs access to external services and for this example the external service we are using is the bowtress service an image sharing site for all sorts of awesome bow ties so the server on the left is private with a private ip address of 192.168.0.15 and this means it has an address in the ip version 4 private address space meaning that it cannot route packets over the public internet because it only has a private ip the bowtress service on the other hand has a public ip address which is 54.5.4.9 so the issue we run into is that the private address can’t be routed over the public internet because it’s private and the public address of the bowtress service can’t directly communicate with any private address because public and private addresses cannot communicate directly with each other over the public internet what we need is to translate the private address that the server on the left has to a public ip that can communicate with the service on the right and vice versa now the nat device will map the private ip to public ip using and maintaining a nat table and in this case of static nat the nat device will have a one-to-one mapping of the private ip address to a public ip address and can be allocated to the device specified which in this case is the server marked as 192.168.0.15 and so in order for the server on the left to communicate with the bowtress service the server will generate a packet as normal with the source ip of the packet being the server’s private ip address and the destination ip of the packet being the ip of the bowtress service now the router in the middle is the default gateway for any destination so any ip packets which are destined for anything but the local network are sent to the router so as you can see here with the entry in the table it will contain the private ip address of 192.168.0.15 mapped to the public address which in this case is 73.6.2.33 and these are statically mapped to one another and so as the packet passes through the nat device the source address of the packet is translated from the private address to the mapped public address and this results in a new packet so this new packet still has bowtress as the destination but now it has a valid public ip address as the source and so this is the translation that happens through nat now this process works in a similar way in the other direction so when the bowtress service receives the packet it sees the source as this public ip so when it responds with data its packet has its ip address as the source and the previous
server’s public ip address as the destination so it sends this packet back to this public ip so when the packet arrives at the nat device the table is checked it recognizes then that the ip is for the server and so this time for incoming traffic the destination ip address is updated to the corresponding private ip address and then the packet is forwarded through to the private server and this is how static nat works the source ip address is translated from the mapped private ip to public ip and for incoming traffic the destination ip address is translated from the allocated public ip to the corresponding private ip all without having to configure a public ip on any private device as they always hold their private ip addresses now i wanted to supply an analogy for nat and so a very common analogy that is used is that of a phone service so in this example laura is the new manager of bow tie inc’s new location in montreal and has put in a new public phone number of 514-555-8437 although as you can see here laura also has a private extension of one three three seven now if george called laura at that public phone number he would reach laura without ever knowing her private extension so the private extension acts as that private ip address and the public phone number would act as the public ip address and this would be the telephone analogy for static nat and so this is the end of part one of this lesson it was getting a bit long so i decided to break it up this would be a great opportunity for you to get up and have a stretch get yourself a coffee or a tea and whenever you’re ready you can join me in part two where we will be starting immediately from the end of part one so you can go ahead and complete this video and i will see you in part two
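before moving on here is a minimal sketch of what this one-to-one translation could look like on a plain linux box acting as the nat device using iptables and the example addresses from this lesson the interface setup and routing are assumed to already be in place

```
#!/bin/bash
# static (one-to-one) nat sketch
# 192.168.0.15 = the private server, 73.6.2.33 = its mapped public ip

# allow the box to forward packets between its interfaces
sysctl -w net.ipv4.ip_forward=1

# outbound: rewrite the source address of packets from the private
# server to its statically mapped public ip
iptables -t nat -A POSTROUTING -s 192.168.0.15 -j SNAT --to-source 73.6.2.33

# inbound: rewrite the destination of packets arriving for the public
# ip back to the private server
iptables -t nat -A PREROUTING -d 73.6.2.33 -j DNAT --to-destination 192.168.0.15
```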
[Music] welcome back this is part two of the network address translation lesson and we will be starting exactly where we left off from part one so with that being said let’s dive in now moving on to dynamic nat this method is similar to static nat except that devices are not allocated a permanent public ip a public ip address is allocated from a pool of ip addresses as they are needed and the mapping of public to private is allocation based in this example there are two devices on the left and according to the nat table there are two public ip addresses available for use 73.6.2.33 and 73.6.2.34 so when the laptop on the left is looking to access the bowtress service it will generate a packet where the source ip is the private address of 192.168.0.13 and the destination ip is 54.5.4.9 so it sends this packet and again the router in the middle is the default gateway for anything that isn’t local as the packet passes through the router or the nat device it checks if the private ip has a current allocation of public addressing from the pool and if it doesn’t and one is available it allocates one dynamically and in this case 73.6.2.34 is allocated so the packet’s source ip address is translated to this address and the packets are sent to the bowtress service and so this process is the same as static nat thus far but because dynamic nat allocates these ip addresses dynamically multiple private devices can share a single public ip as long as the devices are not using the same public ip at the same time and so once the device is finished communication the ip is returned back to the pool and is ready for use by another device now just as a note if there are no public ip addresses available the router rejects any new connections until you clear the nat mappings but if you have as many public ip addresses as hosts in your network you won’t encounter this problem and so in this case since the lower server is looking to access the fashiontube service there is an available public ip address in the pool of 73.6.2.33 thus giving it access to the public internet and access to fashiontube so in summary the nat device maps a private ip with the public ip in a nat table and public ips are allocated randomly and dynamically from a pool now this type of nat is used where multiple internal hosts with private ip addresses are sharing an equal or fewer number of public ip addresses when all of those private devices at some time will need public access now an example of dynamic nat using the telephone analogy would be if laura and two other bow tie inc employees lisa and jane had private phone numbers and this would represent your private ips in this example bowtie inc has three public phone numbers now when any employee makes an outbound call they are routed to whichever public line is open at the time so the caller id on the receiver’s end would show any one of the three public phone numbers depending on which one was given to the caller and this would represent the public ips in the public ip pool
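as a rough sketch of the same idea on a linux router iptables lets you hand out source addresses from a pool with an snat range the eth0 interface name and the 192.168.0.0/24 subnet are assumptions for the sketch and strictly speaking linux may also translate ports when it needs to so this is the closest equivalent to dynamic nat rather than an exact implementation of it

```
#!/bin/bash
# dynamic nat sketch: private hosts on 192.168.0.0/24 leaving via eth0
# are given a source address from the public pool 73.6.2.33-73.6.2.34
# as connections are created
sysctl -w net.ipv4.ip_forward=1

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 \
  -j SNAT --to-source 73.6.2.33-73.6.2.34
```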
now the last type of nat which i wanted to talk about is the one which you’re probably most familiar with and this is port address translation which is also known as nat overload and this is the type of nat you likely use on your home network port address translation is what allows a large number of private devices to share one public ip address giving it a many to one mapping architecture now in this example we’ll be using three private devices on the left all wanting to access fashiontube on the right a popular video sharing website of the latest men’s fashions shared by millions across the globe and this site has a public ip of 62.88.44.88 and is accessed using tcp port 443 now the way that port address translation or pat works is to use both the ip addresses and ports to allow for multiple devices to share the same public ip every tcp connection in addition to a source and destination ip address has a source and destination port the source port is randomly assigned by the client so as long as the source port is always unique then many private clients can use the same public ip address and all this information is recorded in the nat table on the nat device in this example let’s assume that the public ip address of this nat device is 73.6.2.33 so when the laptop in the top left generates a packet and the packet is going to fashiontube its destination ip address is 62.88.44.88 and its destination port is 443 now the source ip of this packet is the laptop’s private ip address of 192.168.6 and the source port is 35535 which is a randomly assigned ephemeral port so the packet is routed through the nat device and in transit the nat device records the source ip and the original source private port and it allocates a new public ip address and a new public source port which in this case is 8844 it records this information inside the nat table as shown here and it adjusts the packet so that its source ip address is the public ip address that the nat device is using and the source port is this newly allocated source port and this newly adjusted packet is forwarded on to fashiontube now the process is very similar with the return traffic where the packet will verify the recorded ips and ports in the nat table before forwarding the packet back to the originating source now if the middle laptop with the ip of 192.168.0.14 did the same thing then the same process would be followed all of this information would be recorded in the nat table a new public source port would be allocated and would translate the packet adjusting the packet’s source ip address and source port as well the same process would happen for the laptop on the bottom generating a packet with the source and destination ip with the addition of the source and destination ports and when routed through the nat device goes through its translation recording the information in the nat table and reaching its destination again return traffic will be verified by the recorded ips and ports in the nat table before forwarding the packet back to its originating source and so just as a summary when it comes to port address translation the nat device records the source ip and source port in a nat table the source ip and source port are then replaced with a public ip and a public source port allocated from a pool that allows overloading and this is a many-to-one architecture and so for the telephone analogy for pat let’s use a phone operator example so in this instance george is trying to call laura now george only knows lark who is laura’s executive admin and only has lark’s phone number george does not have laura’s private line lark’s public phone number is the equivalent to having a public ip address george calls lark who then connects george to laura the caveat here is that lark never gives out laura’s phone number in fact laura doesn’t have a public phone number and can only be called by lark and here’s where nat can add an extra layer of security by only allowing needed ports to be accessed without allowing anyone to connect to any port
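to round this off a minimal sketch of port address translation on a linux router with iptables the choice of eth0 as the public-facing interface is an assumption and the conntrack command requires the conntrack-tools package to be installed

```
#!/bin/bash
# port address translation (nat overload) sketch: every private device
# behind this router shares the single public ip on eth0, with
# translated source ports telling the flows apart
sysctl -w net.ipv4.ip_forward=1

# masquerade rewrites the source ip to whatever address eth0 currently
# holds and, when needed, rewrites the source port so each flow stays unique
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# the kernel's connection tracking table plays the role of the nat table
# described above; this prints the recorded private and public ip/port
# pairs for each active flow
conntrack -L
```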
now i hope this has helped you understand the process of network address translation how the translation happens and the process of using a nat table to achieve packet translation along with its destination this is so common in most environments that you will encounter and it’s very important to fully understand the different types of nat and how they can be used in these types of environments and so that’s pretty much all i wanted to cover on this lesson of network address translation so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back so now that we’ve covered the fundamentals of dns along with the different record types i wanted to focus in on google cloud’s dns service called cloud dns now cloud dns is a fully managed service that manages dns servers for your specific zones and since cloud dns shows up on the exam only on a high level i will be giving an overview of what this service can do so with that being said let’s dive in now cloud dns acts as an authoritative dns server for public zones that are visible to the internet or for private zones that are visible only within your network and is commonly referred to as google’s dns as a service cloud dns has servers that span the globe making it a globally resilient service now while it is a global service there is no way to select specific regions to deploy your zones and dns server policies you simply add your zones records and policies and it is distributed amongst google’s dns servers across the globe cloud dns is also one of the few google cloud services that offers 100 percent availability along with low latency access by leveraging google’s massive global network backbone now in order to use cloud dns with a specific publicly available domain a domain name must be purchased through a domain name registrar and you can register a domain name through google domains or another domain registrar of your choice cloud dns does not provide this service and just as a note to create private zones the purchasing of a domain name is not necessary now as stated earlier cloud dns offers the flexibility of hosting both public zones and privately managed dns zones now public zones are zones that are visible to the public internet and so when cloud dns is managing your public domain it has public authoritative name servers that respond to public zone dns queries for your specific domain now when it comes to private zones these enable you to manage custom domain names for your google cloud resources without exposing any dns data to the public internet a private zone can only be queried by resources in the same project where it is defined and as we discussed earlier a zone is a container of dns records that are queried by dns so from a private zone perspective these can only be queried by one or more vpc networks that you authorize to do so and just as a note the vpc networks that you authorize must be located in the same project as the private zone to query records hosted in managed private zones in other projects the use of dns peering is needed now i don’t want to get too deep into dns peering but just know that vpc network peering is not required for the cloud dns peering zone to operate peering zones do not depend on vpc network peering now each managed zone that you create is associated with a google cloud project and once this zone is created it is hosted by google’s managed name servers now these zones are always hosted on google’s managed name servers within google cloud so you would create records and record sets and these servers would then become allocated to that specific zone hosting your records and record sets and just as a quick reminder a record set is the collection of dns
records in a zone that have the same name and are of the same type most record sets contain a single record but it’s not uncommon to see record sets with multiple records a great example of this are a records or ns records which we discussed earlier and these records can usually be found in pairs and so now to give you a practical example of cloud dns i wanted to bring the theory into practice through a short demo where i’ll be creating a managed private zone so whenever you’re ready join me in the console and so here we are back in the console and i’m logged in as tonybowties@gmail.com and i’m currently in project bowtie inc so now to get to cloud dns i’m going to go over to the navigation menu i’m going to scroll down to network services and go over to cloud dns and because i currently don’t have any zones i’m prompted with only one option which is to create a zone and so i’m going to go ahead and create a zone and so here i’ve been prompted with a bunch of different options in order to create my dns zone and so the first option that i have is zone type and because i’m creating a private zone i’m going to simply click on private and i need to provide a zone name which i’m going to call tony bowtie next i’m going to have to provide a dns name which i will call tony bowtie dot private and under the description i’m just going to type in private zone for tony bowtie and so the next field i’ve been given is the options field where it is currently marked as default private and so if i go over here to the right hand side and open up the drop down menu i’m given the options to forward queries to another server dns peering managed reverse lookup zones and use a service directory namespace and so depending on your type of scenario one of these five options in most cases will suffice so i’m going to keep it under default private and under networks it says your private zone will be visible to the selected networks and so i’m going to click on the drop down and i’m given only the option of the default network because it’s the only network that i have and so i’m going to select it and i’m going to click on the white space and if i feel so inclined i can simply click on the shortcut for the command line and here i’m given the specific commands i would use on the command line in order to create this dns zone so i’m going to click on close here and i’m going to click on create and as you can see here my zone has been created along with a couple of dns records the first one being my name server records as well as my start of authority records and so as a note to know for the exam when creating a zone these two records will always be created both the soa record and the ns record
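for reference the command line equivalent of what was just done in the console would look roughly like the sketch below the zone name tony-bowtie is an assumption since gcloud zone names use letters digits and dashes and flags can vary slightly between gcloud versions

```
# create a private managed zone visible only to the default network
gcloud dns managed-zones create tony-bowtie \
  --description="private zone for tony bowtie" \
  --dns-name="tonybowtie.private." \
  --visibility=private \
  --networks=default

# list the record sets in the new zone, which should show the
# automatically created ns and soa records mentioned above
gcloud dns record-sets list --zone=tony-bowtie

# clean up afterwards, mirroring the delete step later in the demo
gcloud dns managed-zones delete tony-bowtie
```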
and moving on to some other options here i can add another record set if i choose to again the dns name the record type which i have a whole slew of record types to choose from its ttl and the ip address but i’m not going to add any records so i’m just going to cancel and by clicking on in use by i can view which vpc network is using this zone and as expected the default network shows up and i also have the choice of adding another network but since i don’t have any other networks i can’t add anything so i’m going to simply cancel i also have the option of removing any networks so if i click on this i can remove the network or i can also remove the network by clicking on the hamburger menu and so as you can see i have a slew of options to choose from when creating zones and record sets and so that about covers everything that i wanted to show you here in cloud dns but before i go i’m going to go ahead and clean up and i’m just going to click on the garbage can here on the right hand side of the zone and i’m going to be prompted if i want to delete the zone yes i do so i’m going to click on delete and so that pretty much covers everything that i wanted to show you with regards to cloud dns so you can now mark this lesson as complete and let’s move on to the next one welcome back now before we step into the compute engine section of the course i wanted to cover a basic foundation of what makes these vms possible and this is where a basic understanding of virtualization comes into play now this is merely an introductory lesson to virtualization and i won’t be getting too deep into the underpinnings it serves as just a basic foundation as to how compute engine gets its features under the hood and how they are possible through the use of virtualization for more in-depth understanding on virtualization i will be including some links in the lesson text for those who are looking to learn more but for now this will provide just enough theory to help you understand how compute engine works so with that being said let’s dive in so what exactly is virtualization well virtualization is the process of running multiple operating systems on a server simultaneously now before virtualization became popular a standard model was used where an operating system would be installed on a server so the server would consist of typical hardware like cpu memory network cards and other devices such as video cards usb devices and storage and then the operating system would run on top of the hardware now there is a middle layer of the operating system a supervisor if you will that is responsible for interacting with underlying hardware and this is known as the kernel the kernel manages the distribution of the hardware resources of the computer efficiently and fairly among all the various processes running on the computer now the kernel operates under what is called kernel mode or privileged mode as it runs privileged instructions that interact with the hardware directly now the operating system allows other software to run on top of it like an application but that software cannot interact directly with the hardware it must interact with the operating system in user mode or non-privileged mode so when lark decides to do something on an application that needs to use the system hardware that application needs to go through the operating system it needs to make what’s known as a system call and this is the model of running one operating system on a single server now in the past servers would traditionally run one application on one server with one operating system in the old system the number of servers would continue to mount since every new application required its own server and its own operating system as a result expensive hardware resources were purchased but not used and each server would use under 20 percent of its resources on average server resources were then known as underutilized now there came a time when multiple operating systems were installed on one computer isolated from each other with each operating system running their own applications this was a perfect model to consolidate hardware and keep utilization high but there is a major issue that arose each cpu at this given moment in time could only have one thing running as privileged so having multiple operating systems running on their own in an unmodified state and expecting to be running on their own in a
privileged state running privileged instructions was causing instability in systems causing not just application crashes but system crashes now a hypervisor is what solved this problem it is a small software layer that enables multiple operating systems to run alongside each other sharing the same physical computing resources these operating systems come as virtual machines or vms and these are files that mimic an entire computing hardware environment in software the hypervisor also known as a virtual machine monitor or vmm manages these vms as they run alongside each other it separates virtual machines from each other logically assigning each its own slice of the underlying computing cpu memory and other devices like graphics network and storage this prevents the vms from interfering with each other so if for example one operating system suffers a crash or a security compromise the others will survive and continue running now the hypervisor was never as efficient as how you see it here it went through some major iterations that gave its structure as we know it today initially virtualization had to be done in software or what we now refer to as the host machine and the operating system with its applications put in logical containers known as virtual machines or guests the operating system would be installed on the host which included additional capabilities called a hypervisor and allowed it to make the necessary privileged calls to the hardware having full access to the host the hypervisor exposed the interface of the hardware device that is available on the host and allowed it to be mapped to the virtual machine and emulated the behavior of this device and this allowed the virtual machine using the operating system drivers that were designed to interact with the emulated device without installing any special drivers or tools as well as keeping the operating system unmodified the problem here is that it was all emulated and so every time the virtual machines made calls back to the host each instruction needed to be translated by the hypervisor using what’s called a binary translation now without this translation the emulation wouldn’t work and would cause system crashes bringing down all virtual machines in the process now the problem with this process is that it made the system painfully slow and it was this performance penalty that caused this process to not be so widely adopted but then another type of virtualization came on the scene called para virtualization now in this model a modified guest operating system is able to speak directly to the hypervisor and this involves having the operating system kernel to be modified and recompiled before installation onto the virtual machine this would allow the operating system to talk directly with the hypervisor without any performance hits as there is no translation going on like an emulation para virtualization replaces instructions that cannot be virtualized with hyper calls that communicate directly with the hypervisor so a hypercall is based on the same concept as a system call privileged instructions that accept instead of calling the kernel directly it calls the hypervisor and due to the modification in this guest operating system performance is enhanced as the modified guest operating system communicates directly with the hypervisor and emulation overhead is removed the guest operating system becomes almost virtualization aware yet there is still a process whereby software was used to speak to the hardware the virtual machines could 
still not access the hardware directly although things changed in the world of virtualization when the physical hardware on the host became virtualization aware and this is where hardware assisted virtualization came into play now hardware assisted virtualization is an approach that enables efficient full virtualization using help from hardware capabilities on the host cpu using this model the operating system has direct access to resources without any hypervisor emulation or operating system modification the hardware itself becomes virtualization aware the cpu contains specific instructions and capabilities so that the hypervisor can directly control and configure this support it also provides improved performance because the privileged instructions from the virtual machines are now trapped and emulated in the hardware directly this means that the operating system kernels no longer need to be modified and recompiled like in para virtualization and can run as is at the same time the hypervisor also does not need to be involved in the extremely slow process of binary translation now there is one more iteration that i wanted to discuss when it comes to virtualization and that is kernel level virtualization now instead of using a hypervisor kernel level virtualization runs a separate version of the linux kernel and sees the associated virtual machine as a user space process on the physical host this makes it easy to run multiple virtual machines on a single host a device driver is used for communication between the main linux kernel and the virtual machine every vm is implemented as a regular linux process scheduled by the standard linux scheduler with dedicated virtual hardware like a network card graphics adapter cpu memory and disk hardware support by the cpu is required for virtualization a slightly modified emulation process is used as the display and execution containers for the virtual machines in many ways kernel level virtualization is a specialized form of server virtualization and this is the type of virtualization platform that is used in all of google cloud now with this type of virtualization because of the kernel acting as the hypervisor it enables a specific feature called nested virtualization now with nested virtualization it is made possible to install a hypervisor on top of the already running virtual machine and so this is what google cloud has done now you’re probably wondering after going through all the complexities involved with previous virtualization models what makes this scenario worthwhile well using nested virtualization it makes it easier for users to move their on-premises virtualized workloads to the cloud without having to import and convert vm images so in essence it eases the use when migrating to cloud a great use case for many but wouldn’t be possible on google cloud without the benefit of running kernel level virtualization now this is an advanced concept that does not show up on the exam but i wanted you to understand virtualization at a high level so that you can understand nested virtualization within google cloud as it is a part of the feature set of compute engine and so that’s pretty much all i wanted to cover when it comes to virtualization so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back now earlier on in the course i discussed compute engine at a high level to understand what it is and what it does the goal for this section is to dive deeper into compute engine as it comes up heavily on the exam 
and so i want to make sure i expose all the nuances as well it is the go-to service offering from google cloud when looking to solve any general computing needs with this lesson specifically i will be going into what makes up an instance and the different options that are available when creating the instance so with that being said let’s dive in now compute engine lets you create and run virtual machines known as instances and host them on google’s infrastructure compute engine is google’s infrastructure as a service virtual machine offering so it being an iaas service google takes care of the virtualization platform the physical servers the network and storage along with managing the data center and these instances are available in different sizes depending on how much cpu and memory you might need as well compute engine offers different family types for the type of workload you need it for each instance is charged by the second after the first minute as this is a consumption based model and as well these instances are launched in a vpc network in a specific zone and these instances will actually sit on hosts in these zones and you will be given the option of using a multi-tenant host where the server that is hosting your machine is shared with others but please note that each instance is completely isolated from the other so no one can see each other’s instances now you’re also given the option of running your instance on a sole tenant node whereby your instance is on its own dedicated host that is reserved just for you and you alone you don’t share it with anyone else and this is strictly for you only now although this option may sound really great it does come at a steep cost so unless your use case requires you to use a sole tenant node for security or compliance purposes i recommend that you stick with a multi-tenant host when launching your instances and this is usually the most common selection for most now compute engine instances can be configured in many different ways and allow you the flexibility to fulfill the requests for your specific scenario and as you can see here there are four different base options when it comes to configuration of the instance that you are preparing to launch and so i wanted to take time to go through them in just a bit of detail for context starting first with the machine type which covers vcpu and memory now there are many different predefined machine types that i will be covering in great depth in a different lesson but for now just know that they are available in different families depending on your needs and can be chosen from the general purpose compute optimized and memory optimized machine types they are available in intel or amd flavors and if the predefined options don’t fit your need you have the option of creating a custom machine that will suit your specific workload now when creating a vm instance on compute engine each virtual cpu or vcpu is implemented as a single hardware hyper thread on one of the available cpu processors that live on the host now when choosing the number of vcpus on an instance you must take into consideration the desired network throughput as the number of vcpus will determine this throughput as the bandwidth is determined per vm instance not per network interface or per ip address and so the network throughput is determined by calculating 2 gigabits per second for every vcpu on your instance so if you’re looking for greater network throughput then you may want to select an instance with more vcpus
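as a quick sketch of exploring this yourself the machine type name and zone below are just assumptions and the throughput figure follows the lesson’s rule of thumb rather than an exact per-family cap

```
# list the predefined machine types offered in a given zone
gcloud compute machine-types list --zones=us-central1-a

# inspect one machine type to see its vcpu count and memory
gcloud compute machine-types describe e2-standard-4 --zone=us-central1-a

# sizing by the rule of thumb above: 4 vcpus x 2 gbps per vcpu suggests
# roughly 8 gbps of egress bandwidth for this instance
# (actual limits vary by machine family)
```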
determined a machine type for your compute engine instance you will need to provide it an image with an operating system to boot up with now when creating your vm instances you must use an operating system image to create boot disks for your instances now compute engine offers many pre-configured public images that have compatible linux or windows operating systems
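As a rough sketch of the same flow from the gcloud command line, you can list the available public images and then create an instance from an image family; the family and project names below (debian-11, debian-cloud) are only examples and change over time, and the boot disk flags are optional:

# list public images along with the image family they belong to
gcloud compute images list --format="table(name, family, status)"

# create an instance from a public image family with a larger ssd boot disk
gcloud compute instances create my-instance \
    --zone=us-east1-b \
    --machine-type=e2-micro \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --boot-disk-type=pd-ssd \
    --boot-disk-size=100GB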
and these operating system images can be used to create and start instances compute engine uses your selected image to create a persistent boot disk for each instance by default the boot disk for your instance is the same size as the image that you selected and you can use most public images at no additional cost but please be aware that there are some premium images that do add additional cost to your instances now moving on to custom images this is a boot disk image that you own and control access to a private image if you will custom images are available only to your cloud project unless you specifically decide to share them with another project or another organization you can create a custom image from boot disks or other images then use the custom image to create an instance custom images that you import to compute engine add no cost to your instances but do incur an image storage charge while you keep your custom image in your project now the third option that you have is using a marketplace image now google cloud marketplace lets you quickly deploy functional software packages that run on google cloud you can start up a software package without having to manually configure the software the vm instances the storage or even the network settings this is an all-in-one instance template that includes the operating system and the software pre-configured and you can deploy a software package whenever you like and this is by far the easiest way to launch a software package and i will be giving you a run through on these marketplace images in a later demo now once you've decided on your machine type as well as the type of image that you want to use moving on to the type of storage that you want would be your next step now when configuring a new instance you will need to create a new boot disk for it and this is where performance versus cost comes into play as you have the option to pay less and have a slower disk speed or lower iops or you can choose to have fast disk speed with higher iops but pay a higher cost and so the slowest and most inexpensive of these options is the standard persistent disk which is backed by standard hard disk drives the balanced persistent disks are backed by solid state drives and are faster and can provide higher iops than the standard option and lastly the ssd persistent disk is the fastest option which also brings with it the highest iops available for persistent disks now outside of these three options for persistent disks you also have the option of choosing a local ssd and these are solid state drives that are physically attached to the server that hosts your vm instances and this is why they have higher throughput and lower latency than any of the available persistent disks just as a note the data that you store on a local ssd persists only until the instance is stopped or deleted which is why local ssds are suited only for temporary storage such as caches or swap disks and so lastly moving into networking each network interface of a compute engine instance is associated with a subnet of a unique vpc network as you've seen in the last section you can do this with an auto a default or a custom network each network is available in many different regions and zones within that region we've also seen routing traffic for our instance both in and out of the vpc network by use of firewall rules targeting ip ranges specific network tags or instances within the network now load balancers are responsible for helping distribute user traffic across multiple instances either
within the network or externally using a regional or global load balancer and i will be getting into low balancing in another section of the course but i wanted to stress that load balancers are part of instance networking that help route and manage traffic coming in and going out of the network and so this is a high level overview of the different configuration types that go into putting together an instance and i will be diving deeper into each in this section as well i will be putting a hands-on approach to this by creating an instance in the next lesson and focusing on the different available features that you can use for your specific use case and so this is all i wanted to cover for this lesson so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back now i know in previous demonstrations we’ve built quite a few compute engine instances and have configured them accordingly in this demonstration we’re going to go through a build of another instance but i wanted to dig deeper into the specific configurations that are available for compute engine so with that being said let’s dive in and so i am now logged in under tony bowties gmail.com as well i am logged in under the bowtie inc project so in order to kick off this demo i’m going to head on over to the compute engine console so i’m going to go over to the navigation menu and i’m going to scroll down to compute engine and so here i’m prompted to either create or import a vm instance as well as taking the quick start and so i’m not going to import or take the quick start so i’m going to simply click on create and so i want to take a moment here to focus on the left hand menu where there are a bunch of different options to create any given instance so the first and default option allows me to create the instance from scratch choosing the new vm instance from template option allows me to create a new instance from an instance template and because i don’t have any instance templates i am prompted here with the option to create one and so for those of you who are unfamiliar with instance templates templates are used in managed instance groups and define instance properties for when instances are launched within that managed instance group but don’t worry i will be covering instance groups and instant templates in a later lesson the next option that’s available is new vm instance from machine image and an image is a clone or a copy of an instance and again i will be covering this in a separate lesson and going through all the details of machine images but if i did have any machine images i would be able to create my instance from here but since i do not i am prompted with the option to create a new machine image now the last option that i wanted to show you is the marketplace and so the marketplace has existing machine images that are all pre-configured with its proper operating system as well as the software to accompany it so for instance if i’m looking to create a vm with a wordpress installation on it i can simply go up to the top to the search bar type in wordpress and i will be presented with many different options and i’m just going to choose the one here at the top and i am presented with 49 results of virtual machines with different types of wordpress installations on them and these are all different instances that have been configured specifically for wordpress by different companies like lightspeed analog innovation and cognosis inc and so for this demonstration i’m going to choose 
wordpress on centos 7 and here i’m giving an overview about the software itself i’m also given information about the company that configured this as well at the top i’m given a monthly estimated cost for this specific instance and if i scroll down the page i can get a little bit more information with regards to this image and as shown here on the right i can see my pricing the usage fee will cost me 109 a month along with the vm instance type that the software is configured for the amount of disk space and the sustained use discount i’ve also been given some links here for tutorials and documentation and i’ve also been given instructions for maintenance and support i’ve been given both an email and a link to live support and of course at the bottom we have the terms of service and this is a typical software package amongst many others that’s available in the google cloud marketplace now i can go ahead and launch this if i choose but i’m going to choose not to launch this and i’m going to back out and so just to give you some context with regards to enterprise software software packages like f5 and jenkins are also available in the google cloud marketplace and again when i click on the first option it’ll give me a bunch of available options on jenkins and its availability from different companies on different platforms now just as a note to update your existing deployment of a software package you have to redeploy the software package from marketplace in order to update it but other than that caveat the easiest way to deploy a software package is definitely through the marketplace and so now that we’ve gone through all the different options on how to create an instance i’m gonna go back and select new vm instance so i can create a new vm from scratch and so i am prompted here at the top with a note telling me that there was a draft that was saved from when i started to create in my new instance but i navigated away from it and i have the option to restore the configuration i was working on and so just know that when you are in the midst of creating an instance google cloud will automatically save a draft of your build so that you are able to continue working on it later now i don’t really need this draft but i will just hit restore and for the name i’m going to keep it as instance 1 and for the sake of this demo i’m going to add a label the key is going to be environment and the value will be testing i’m going to go down to the bottom click save now when it comes to the geographic location of the instance using regions i can simply click on the drop down and i will have access to deploy this instance in any currently available region as regions are added they will be added here as well and so i’m going to keep it as us east one and under zone i have the availability of putting it in any zone within that region and so i’m going to keep it as us east 1b and just as another note once you’ve deployed the instance in a specific region you will not be able to move that instance to a different region you will have to recreate it using a snapshot in another region and i will be going over this in a later lesson now scrolling down to machine configuration there are three different types of families that you can choose from when it comes to machine types the general purpose the compute optimized and the memory optimized the general purpose machine family has a great available selection of different series types that you can choose from and is usually the go to machine family if you’re unsure about 
which machine type to select so for this demo i’m going to keep my selection for series type as e2 and under machine type i’m given a very large selection of different sizes when it comes to vcpu and memory and so i can select from a shared core a standard type a high memory type or a high cpu type and i will be going over this in greater detail in another lesson on machine types now in case the predefined machine types do not fit my needs or the scope for the amount of vcpus and memory that i need fall in between those predefined machine types i can simply select the custom option and this will bring up a set of sliders where i am able to select both the amount of vcpus and amount of memory that i need for the instance that i am creating now as i change the course slider to either more vcpus or less my core to memory ratio for this series will stay the same and therefore my memory will be adjusted automatically i also have the option to change the memory as i see fit to either add more memory or to remove it and so this is great for when you’re in between sizes and you’re looking for something specific that fits your workload and so i’m going to change back the machine type to an e2 micro and as you can see in the top right i will find a monthly estimate of how much the instance will cost me and i can click on this drop down and it will give me a breakdown of the cost for vcpu in memory the cost for my disks as well as my sustained use discount and if i had any other resources that i was consuming like a static ip or an extra attached disk those costs would show up here as well and so if i went to a compute optimized you can see how the price has changed but i’m given the breakdown so that i know exactly what i’m paying for so i’m going to switch it back to general purpose and i wanted to point out here the cpu platform and gpu as you can add gpus to your specific machine configuration and so just as another note gpus can only be added to an n1 machine type as any other type will show the gpu selection as grayed out and so here i can add the gpu type as well as adding the number of gpus that i need but for the sake of this demonstration i’m not going to add any gpus and i’m going to select the e2 series and change it back to e2 micro scrolling down a little bit here when it comes to cpu platform depending on the machine type you can choose between intel or amd if you are looking for a specific cpu but just know that your configuration is permanent now moving down a little bit more you will see here display device now display device is a feature on compute engine that allows you to add a virtual display to a vm for system management tools remote desktop software and any application that requires you to connect to a display device on a remote server this is an especially great feature to have for when your server is stuck at boot patching or hardware failure and you can’t log in and the drivers are already included for both windows and linux vms this feature works with the default vga driver right out of the box and so i’m going to keep this checked off as i don’t need it and i’m going to move down to confidential vm service now confidential computing is a security feature to encrypt sensitive code and data that’s in memory so even when it’s being processed it is still encrypted and is a great use case when you’re dealing with very sensitive information that requires strict requirements now compute engine also gives you the option of deploying containers on it and this is a great way to 
test your containers instead of deploying a whole kubernetes cluster and may even suffice for specific use cases but just note that you can only deploy one container per vm instance and so now that we’ve covered most of the general configuration options for compute engine i wanted to take a minute to dive into the options that are available for boot disk so i’m going to go ahead and click on change and here i have the option of choosing from a bunch of different public images with different operating systems that i can use for my boot disk so if i wanted to load up ubuntu i can simply select ubuntu and i can choose from each different version that’s available as well i’m shown here the boot disk type which is currently selected as the standard persistent disk but i also have the option of selecting either a balanced persistent disk or ssd persistent disk and i’m going to keep it as standard persistent disk and if i wanted to i can increase the boot disk size so if i wanted 100 gigs i can simply add it and if i select it and i go back up to the top right hand corner i can see that my price for the instance has changed now i’m not charged for the operating system due to it being an open source image but i am charged more for the standard persistent disk because i’m no longer using 10 gigs but i’m using 100 gigabytes now let’s say i wanted to go back and i wanted to change this image to a windows image i’m going to go down here to windows server and i want to select windows server 2016 i’m going to load up the data center version and i’m going to keep the standard persistent disk along with 100 gigabytes i’m going to select it if i scroll back up i can see that i’m charged a licensing fee for windows server and these images with these licensing fees are known as premium images so please make sure that you are aware of these licensing fees when launching your instances and because i want to save on money just for now i’m going to scroll back down to my boot disk and change it back to ubuntu and i’m going to change the size back down to 10 gigabytes as well before you move on i wanted to touch on custom images and so if i did have any custom images i could see them here and i would be able to create instances from my custom images using this method i also have the option of creating an instance from a snapshot and because i don’t have any nothing shows up and lastly i have the option of using existing disks so let’s say for instance i had a vm instance and i had deleted it but i decided to keep the attached boot disk it would show up as unattached and i am able to attach that to a new instance and so now that i’ve shown you all the available options when it comes to boot disk i’m going to go ahead and select the ubuntu operating system and move on to the next option here we have identity and api access which we’ve gone through in great depth in a previous demo as well i’m given an option to create a firewall rule automatically for http and https traffic and as for networking as we covered it in great depth in the last section i will skip that part of the configuration and simply launch it in the default vpc and so just as a quick note i wanted to remind you that down at the bottom of the page you can find the command line shortcut and when you click on it it will give you the gcloud command to run that you can use in order to create your instance and so i want to deploy this as is so i’m going to click here on close and i’m going to click on create and so i’m just going to give it a minute now 
so the instance can be created and it took a few seconds but the instance is created and this is regarded as the inventory page to view your instance inventory and to look up any correlating information on any of your instances and so this probably looks familiar to you from the previous instances that you’ve launched so here we have the name of the instance the zone the internal ip along with the external ip and a selection to connect to the instance as well i’m also given the option to connect to this instance in different ways you also have the option of adding more column information to your inventory dashboard with regards to your instance and you can do this by simply clicking on the columns button right here above the list of instances and you can select from creation time machine type preserve state and even the network and this may bring you more insight on the information available for that instance or even grouping of instances with common configurations this will also help you identify your instances visually in the console and so i’m just going to put the columns back to exactly what it was and so now i want to take a moment to dive right into the instance and have a look at the instance details so as you remember we selected the machine type of e2 micro which has two vcpus and one gigabyte of memory here we have the instance id as well scrolling down we have the cpu platform we have the display device that i was mentioning earlier along with the zone the labels the creation time as well as the network interface and scrolling down i can see here the boot disk with the ubuntu image as well as the name of the boot disk so there are quite a few configurations here and if i click on edit i can edit some of these configurations on the fly and with some configurations i need to stop the instance before editing them and there are some configurations like the network interface where i would have to delete the instance in order to recreate it so for instance if i wanted to change the machine type i need to stop the instance in order to change it and the same thing goes for my display device as well the network interface in order for me to change it from its current network or subnetwork i’m going to have to stop the instance in order to change it as well and so i hope this general walkthrough of configuring an instance has given you a sense of what can be configured on launch and allowed you to gain some insight on editing features of an instance after launch a lot of what you’ve seen here in this demo will come up in the exam and so i would recommend that before going into the exam to spend some time launching instances knowing exactly how they will behave and what can be edited after creation that can be done on the fly edits that need the instance to be shut down and edits that need the instance to be recreated and so that’s pretty much all i wanted to cover when it comes to creating an instance so you can now mark this as complete and let’s move on to the next one welcome back now in this lesson i’m going to be discussing compute engine machine types now a machine type is a set of virtualized hardware resources that’s available to a vm instance including the system memory size virtual cpu count and persistent disks in compute engine machine types are grouped and curated by families for different workloads you must always choose a machine type when you create an instance and you can select from a number of pre-defined machine types in each machine type family if the pre-defined 
machine types don’t meet your needs then you can create your own custom machine types in this lesson i will be going through all the different machine types their families and their use cases so with that being said let’s dive in now each machine type family displayed here includes different machine types each family is curated for specific workload types the following primary machine types are offered on compute engine which is general purpose compute optimized and memory optimized and so i wanted to go through each one of these families in a little bit of detail now before diving right into it defining what type of machine type you are running can be overwhelming for some but can be broken down to be understood a bit better they are broken down into three parts and separated by hyphens the first part in this example shown here is the series so for this example the series is e2 and the number after the letter is the generation type in this case it would be the second generation now the series come in many different varieties and each are designed for specific workloads now moving on to the middle part of the machine type this is the actual type and types as well can come in a slew of different flavors and is usually coupled with a specific series so in this example the type here is standard and so moving on to the third part of the machine type this is the amount of vcp use in the machine type and so with vcpus they can be offered anywhere from one vcpu up to 416 vcpus and so for the example shown here this machine type has 32 vcpus and so there is one more aspect of a machine type which is the gpus but please note that gpus are only available for the n1 series and so combining the series the type and the vcpu you will get your machine type and so now that we’ve broken down the machine types in order to properly define them i wanted to get into the predefined machine type families specifically starting off with the general purpose predefined machine type and all the general purpose machine types are available in the standard type the high memory type and the high cpu type so the standard type is the balance of cpu and memory and this is the most common general purpose machine type general purpose also comes in high memory and this is a high memory to cpu ratio so very high memory a lower cpu and lastly we have the high cpu machine type and this is a high cpu to memory ratio so this would be the opposite of the high memory so very high cpu to lower memory so now digging into the general purpose machine family i wanted to start off with the e2 series and this is designed for day-to-day computing at a low cost so if you’re looking to do things like web serving application serving back office applications small to medium databases microservices virtual desktops or even development environments the e2 series would serve the purpose perfectly now the e2 machine types are cost optimized machine types that offer sizing between 2 to 32 vcpus and half a gigabyte to 128 gigabytes of memory so small to medium workloads that don’t require as many vcpus and applications that don’t require local ssds or gpus are an ideal fit for e2 machines e2 machine types do not offer sustained use discounts however they do provide consistently low on-demand and committed use pricing in other words they offer the lowest on-demand pricing across the general purpose machine types as well the e2 series machines are available in both pre-defined and custom machine types moving on i wanted to touch on all the machine types 
available in the n-series and these are a balanced machine type with price and performance across a wide range of vm flavors and these machines are designed for web servers application servers back office applications medium to large databases as well as caching and media streaming and they are offered in the standard high memory and high cpu types now the n1 machine types are compute engine's first generation general purpose machine types this machine type offers up to 96 vcpus and 624 gigabytes of memory and again as i mentioned earlier this is the only machine type that offers both gpu support and tpu support the n1 type is available as both pre-defined machine types and custom machine types and the n1 series offers a larger sustained use discount than n2 machine types speaking of which the n2 machine types are the second generation general purpose machine types and these offer flexible sizing between 2 to 80 vcpus and half a gigabyte of memory to 640 gigabytes of memory and these machine types also offer an overall performance improvement over the n1 machine types workloads that can take advantage of the higher clock frequency of the cpu are a good choice for n2 machine types and these workloads can get higher per thread performance while benefiting from all the flexibility that a general purpose machine type offers n2 machine types also offer the extended memory feature and this helps control per cpu software licensing costs now getting into the last n series machine type the n2d machine type is the largest general purpose machine type with up to 224 vcpus and 896 gigabytes of memory this machine type is available in predefined and custom machine types and this machine type as well has the extended memory feature which i discussed earlier that helps you avoid per cpu software licensing the n2d machine type supports the committed use and sustained use discounts now moving on from the general purpose machine type family i wanted to move into the compute optimized machine family now this series offers ultra high performance for compute intensive workloads such as high performance computing electronic design automation gaming and single threaded applications so for anything that is designed for compute intensive workloads this will definitely be your best choice now compute optimized machine types are as i said earlier ideal for compute intensive workloads and these machine types offer the highest performance per core on compute engine compute optimized types are only available as predefined machine types and so they are not available as custom machine types the c2 machine types offer a maximum of 60 vcpus and a maximum of 240 gigabytes of memory now although the c2 machine type works great for compute intensive workloads it does come with some caveats you cannot use regional persistent disks with compute optimized machine types and i will be getting into the details of persistent disks in a later lesson and they are only available in select zones and regions on select cpu platforms and so moving into the last family the memory optimized machine family is for ultra high memory workloads this family is designed for large in memory databases like sap hana as well as in memory analytics now the m series comes in two separate generations m1 and m2 the m1 offers a maximum of 160 vcpus and a maximum memory of 3844 gigabytes whereas the m2 offers a maximum of 416 vcpus and a whopping 11,776 gigabytes of maximum memory
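If you want to explore these families yourself, the machine types and their vCPU and memory counts can be listed per zone from the gcloud command line; the zone and filter here are just examples:

# list the e2 machine types in a zone with their vcpu and memory sizes
gcloud compute machine-types list \
    --zones=us-east1-b \
    --filter="name~'^e2'" \
    --format="table(name, guestCpus, memoryMb)"

# inspect a single machine type in detail
gcloud compute machine-types describe n2-standard-8 --zone=us-east1-b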
and as i said before these machine types are ideal for tasks that require intensive use of memory so they are suited for in-memory databases and in memory analytics data warehousing workloads genomics analysis and sql analysis services memory optimized machine types are only available as predefined machine types and the caveats here are that you cannot use regional persistent disks with memory optimized machine types and they're only available in specific zones now i wanted to take a moment to go back to the general purpose machine family so that i can dig into the shared core machine types which are spread amongst the e2 and n1 series these shared core machine types are used for burstable workloads are very cost effective and are great for non-resource intensive applications shared core machine types use context switching to share a physical core between vcpus for the purpose of multitasking different shared core machine types sustain different amounts of time on a physical core which allows google cloud to cut the price in general shared core instances can be more cost effective for running small non-resource intensive applications than standard high memory or high cpu machine types now when it comes to cpu bursting these shared core machine types offer bursting capabilities that allow instances to use additional physical cpu for short periods of time bursting happens automatically when your instance requires more physical cpu than originally allocated and during these spikes your instance will take advantage of available physical cpu in bursts the e2 shared core machine type is offered in micro small and medium while the n1 series is offered in the f1 micro and the g1 small and both of these series have a maximum of two vcpus with a maximum of four gigabytes of memory now i wanted to take a moment to touch on custom machine types and these are available for any general purpose machine so this is customer defined cpu and memory designed for custom workloads now if none of the general purpose predefined machine types cater to your needs you can create a custom machine type with a specific number of vcpus and amount of memory that you need for your instance these machine types are ideal for workloads that are not a good fit for the pre-defined machine types that are available they're also great for when you need more memory or more cpu but the predefined machine types don't quite fit exactly what you need for your workload just as a note it costs slightly more to use a custom machine type than a pre-defined machine type and there are limitations in the amount of memory and vcpu you can select and as i stated earlier when creating a custom machine type you can choose from the e2 n2 n2d and n1 machine types and so the last part i wanted to touch on are the gpus that are available and these are designed for graphics intensive workloads and again are only available for the n1 machine type and gpus come in five different flavors from nvidia shown here as the tesla k80 the tesla p4 the tesla t4 the tesla v100 and the tesla p100 and so these are all the families and machine types that are available for you in google cloud and will allow you to be a little bit more flexible with the type of workload that you need them for and so for the exam you won't have to memorize each machine type but you will need to know an overview of what each machine type does now i know there's been a lot of theory presented here in this lesson but i hope this is giving you a better understanding of all the available pre-defined machine types in google cloud
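As a rough command line sketch of the custom machine types and GPUs just described, you can pass custom CPU and memory flags or attach an accelerator when creating an instance; the sizes, zones, and GPU model below are only examples and GPU availability varies by zone:

# custom machine type on the n2 series with 6 vcpus and 24 gb of memory
gcloud compute instances create custom-vm \
    --zone=us-east1-b \
    --custom-vm-type=n2 \
    --custom-cpu=6 \
    --custom-memory=24GB

# gpu attached to an n1 machine type; gpu instances cannot live migrate,
# so the maintenance policy must be set to terminate
gcloud compute instances create gpu-vm \
    --zone=us-east1-c \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE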
and so that's pretty much all i wanted to cover in this lesson on compute engine machine types so you can now mark this lesson as complete and let's move on to the next one [Music] welcome back in this lesson i'm going to be reviewing managing your instances now how you manage your instances is a big topic in the exam and it's also very useful to know for your work as a cloud engineer in the environments you are responsible for knowing both the features that are available as well as the best practices will allow you to make better decisions with regards to your instances and allow you to keep your environment healthy this lesson will dive into the many features that are available in order to better manage your instances using the specific features within google cloud so with that being said let's dive in now i wanted to start off this lesson discussing the life cycle of an instance within google cloud every instance has a predefined life cycle from its starting provisioning state to its deletion an instance can transition through many instance states as part of its life cycle when you first create an instance compute engine provisions resources to start your instance next the instance moves into staging where it prepares for first boot and then it finally boots up and is considered running during its lifetime a running instance can be repeatedly stopped and restarted or suspended and resumed so now i wanted to take a few minutes to go through the instance life cycle in a bit of detail starting with the provisioning state now this is where resources are being allocated for the instance the instance is not yet running and the instance is being allocated its requested amount of cpu and memory along with its root disk any additional disks that are attached to it as well as some additional feature sets that are assigned to this instance and when it comes to cost while in the provisioning state there are no costs being incurred moving right along after finishing the provisioning state the life cycle continues with the staging state and this is where resources have been acquired and the instance is being prepared for first boot both internal and external ips are allocated and can be either static or ephemeral and the system image that was originally chosen for this instance is used to boot up the instance and this can be either a public image or a custom image costs in this state are still not incurred as the instance is still in the pre-boot state now once the instance has left staging it will move on to the running state and this is where the instance is booting up or running and should allow you to log into the instance using either ssh or rdp after a short waiting period for any startup scripts or boot maintenance tasks for the operating system now during the running state you can reset your instance and this is where you would wipe the memory contents of the vm instance and reset the virtual machine to its initial state resetting an instance causes an immediate hard reset of the vm and therefore the vm does not do a graceful shutdown for the guest operating system however the vm retains all persistent disk data and none of the instance properties change the instance remains in running state through the reset now as well in the running state a repair can happen due to the instance encountering an internal error or the underlying machine being unavailable due to maintenance during this time the instance is unusable and if the repair is successful the instance returns back to the running state
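For reference, these are roughly the gcloud commands that move an instance through the states described in this lesson; note that suspend and resume were still in preview when this course was recorded, so they may require a beta or newer SDK:

# stop and start an instance (terminated state and back to running)
gcloud compute instances stop instance-1 --zone=us-east1-b
gcloud compute instances start instance-1 --zone=us-east1-b

# suspend and resume (preserves guest memory and application state)
gcloud compute instances suspend instance-1 --zone=us-east1-b
gcloud compute instances resume instance-1 --zone=us-east1-b

# hard reset (wipes memory, keeps persistent disk data) and delete
gcloud compute instances reset instance-1 --zone=us-east1-b
gcloud compute instances delete instance-1 --zone=us-east1-b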
now paying attention to costs the running state is where the instance starts to incur them and these costs are related to the resources assigned to the instance like the cpu and memory any static ips and any disks that are attached to the instance and i will be going into a bit more detail in just a bit with regards to this state and finally we end the life cycle with the stopping suspended and terminated states now suspending an instance is like closing the lid of your laptop suspending the instance will preserve the guest operating system memory and application state of the instance which would otherwise be discarded and from this state you can choose either to resume it or to delete it when it comes to stopping either a user has made a request to stop the instance or there was a failure and this is a temporary status from which the instance will move to terminated touching on costs for just a second when suspending or stopping an instance you pay for resources that are still attached to the vm instance such as static ips and persistent disk data you do not pay the cost of a running vm instance ephemeral external ip addresses are released from the instance and a new one will be assigned when the instance is started now when it comes to stopping suspending or resetting an instance you can stop or suspend an instance if you no longer need it but want to keep the instance around for future use compute engine waits for the guest to finish shutting down and then transitions the instance to the terminated state so touching on the terminated state this is where a user either shuts down the instance or the instance encounters a failure you can choose to restart the instance or delete it as well as holding some reset options within the availability policy in this state you still pay for static ips and disks but like the suspended or stopped state you do not pay for the cpu and memory resources allocated to the instance and so this covers a high level overview of the instance lifecycle in google cloud and all of the states that make up this lifecycle now to get into some detail with regards to some feature sets for compute engine i wanted to revisit the states where those features apply now when creating your instance you have the option of using shielded vms for added security and when using them these protections are instantiated as the instance boots and enters the running state so what exactly is a shielded vm well shielded vms offer verifiable integrity of your compute engine vm instances so you can be sure that your instances haven't been compromised by boot or kernel level malware or rootkits and this is achieved through a four-step process which is covered by secure boot the virtual trusted platform module also known as vtpm measured boot which relies on the vtpm and integrity monitoring so i wanted to dig into this for just a sec to give you a bit more context now the boot process for shielded vms starts with secure boot and this helps ensure that the system only runs authentic software by verifying the digital signature of all boot components and stopping the boot process if signature verification fails so shielded vm instances run firmware that's signed and verified using google's certificate authority and on each and every boot any boot component that isn't properly signed or isn't signed at all is not allowed to run and so the first time you boot a vm instance measured boot creates the integrity policy baseline from the
first set of these measurements and then securely stores this data each time the vm instance boots after that these measurements are taken again and stored in secure memory until the next reboot having these two sets of measurements enables integrity monitoring which is the next step and allows it to determine if there have been changes to a vm instance’s boot sequence and this policy is loaded onto a virtualized trusted platform module again known as the vtpm for short which is a specialized computer chip that you can use to protect objects like keys and certificates that you use to authenticate access to your system with shielded vms vtpm enables measured boot by performing the measurements needed to create a known good boot baseline and this is called the integrity policy baseline the integrity policy baseline is used for comparison with measurements from subsequent vm boots to determine if anything has changed integrity monitoring relies on the measurements created by measured boot for both the integrity policy baseline and the most recent boot sequence integrity monitoring compares the most recent boot measurements to the integrity policy baseline and returns a pair of pass or failed results depending on whether they match or not one for the early boot sequence and one for the late boot sequence and so in summary this is how shielded vms help prevent data exfiltration so touching now on the running state when you start a vm instance using google provided public images a guest environment is automatically installed on the vm instance a guest environment is a set of scripts daemons and binaries that read the content of the metadata server to make a virtual machine run properly on compute engine a metadata server is a communication channel for transferring information from a client to the guest operating system vm instances created using google provided public images include a guest environment that is installed by default creating vm instances using a custom image will require you to manually install the guest environment this guest environment is available for both linux and windows systems and each supported operating system that is available on compute engine requires specific guest environment packages either google or the owner of the operating system builds these packages now when it comes to the linux guest environment it is either built by google or the owner of the operating system and there are some key components that are applicable to all builds which can be found in the link that i have included in the lesson text the base components of a linux guest environment is a python package that contains scripts daemons and packages for the supported linux distributions when it comes to windows a similar approach applies where a package is available with main scripts and binaries as a part of this guest environment now touching back on the metadata server compute engine provides a method for storing and retrieving metadata in the form of the metadata server this service provides a central point to set metadata in the form of key value pairs which is then provided to virtual machines at runtime and you can query this metadata server programmatically from within the instance and from the compute engine api this is great for use with startup and shutdown scripts or gaining more insight with your instance metadata can be assigned to projects as well as instances and project metadata propagates to all instances within the project while instance metadata only impacts that instance and you 
can access the metadata using the following urls with the curl command you see here on the screen so if you're looking for the metadata for a project you would use the first url that ends in project and for any instance metadata you can use the second url that ends in instance now please note that when you make a request to get information from the metadata server your request and the subsequent metadata response never leave the physical host running the virtual machine instance now once the instance has booted and has gone through the startup scripts you will then have the ability to log in to your instance using ssh or rdp now there are some different methods that you can use to connect and access both your linux instances and your windows instances that i will be going over now when it comes to linux instances we've already gone through accessing these types of instances in previous lessons and demos but just as a refresher you would typically connect to your vm instance via ssh access on port 22. please note that you will require a firewall rule as we have done in previous demos to allow this access and you can connect to your linux instances through the google cloud console or the cloud shell using the cloud sdk now i know that the use of ssh keys is the de facto standard when it comes to logging into linux instances but in most scenarios on google cloud google recommends using os login over using ssh keys the os login feature lets you use compute engine iam roles to manage ssh access to linux instances and then if you'd like you can add an extra layer of security by setting up os login with two-step verification and manage access at the organization level by setting up organizational policies os login simplifies ssh access management by linking your linux user account to your google identity and administrators can easily manage access to instances at either an instance or project level by setting iam permissions now if you're running your own directory service for managing access or are unable to set up os login you can manually manage ssh keys and local user accounts in metadata by manually creating ssh keys and editing the public ssh key metadata now when it comes to windows instances you would typically connect to your vm instance via rdp access on port 3389 and please note that you will also require a firewall rule as shown here to allow this access you can connect to your windows instances through the rdp protocol or through a powershell terminal now logging into windows requires setting a windows password which can be done either through the console or the gcloud command line tool and after setting your password you can then log in from the recommended rdp chrome extension or using a third-party rdp client and i will provide a link to this rdp chrome extension in the lesson text
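The exact commands shown on screen aren't reproduced here, but the metadata queries and the OS Login setting described above look roughly like this; the curl commands are run from inside the instance and use the standard v1 metadata endpoints:

# query project level metadata from inside a vm
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/"

# query instance level metadata from inside a vm
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/"

# enable os login for every instance in the project
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE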
now once the instance has booted up and your instance is ready to be logged into you always have the option of modifying your instance and you can do it manually by either modifying it on the fly or by taking the necessary steps to edit your instance like i showed you in a previous lesson by stopping it editing it and then restarting it although when it comes to google having to do maintenance on a vm or when you merely want to move your instance to a different zone in the same region this is all possible without shutting down your instance using a feature called live migration now when it comes to live migration compute engine migrates your running instances to another host in the same zone instead of requiring your vms to be rebooted this allows google to perform maintenance reliably without interrupting any of your vms when a vm is scheduled to be live migrated google provides a notification to the guest that a migration is coming soon live migration keeps your instances running while compute engine hosts go through regular infrastructure maintenance and upgrades replacement of failed hardware and system configuration changes when google migrates a running vm instance from one host to another it moves the complete instance state from the source to the destination in a way that is transparent to the guest os and anyone communicating with it google also gives you the option of doing live migration manually from one zone to another within the same region either using the console or running the command you see here gcloud compute instances move with the name of the vm the zone flag set to the zone it's currently in and the destination zone flag set to the zone you want to move it to and just as a note with some caveats instances with gpus attached cannot be live migrated and you can't configure a preemptible instance to live migrate and so the instance lifecycle is full of different options and understanding them can help you better coordinate moving editing and repairing vm instances no matter where they may lie in this life cycle now i hope this lesson has given you the necessary theory to help you better use the discussed feature sets and has given you some ideas on how to better manage your instances now there is a lot more to know than what i've shown you here to manage your instances but the topics shown here are what show up in the exam and are some really great starting points to begin managing your instances and so that's pretty much all i wanted to cover when it comes to managing instances so you can now mark this lesson as complete and join me in the next one where i will cement the theory in this lesson with a hands-on demo [Music] welcome back in this demonstration i'm going to be cementing some of the theory that we learned in the last lesson with regards to the different login methods for windows and linux instances knowing how to implement these methods is extremely useful both for the exam and for managing multiple instances in different environments now there's a lot to cover here so with that being said let's dive in so as you can see i am logged in here under tony bowtie ace gmail.com as well i am in the project of bowtie inc and so the first thing that i want to do is create both a linux instance and a windows instance and this is to demonstrate the different options you have for logging into an instance and so in order for me to do that i need to head on over to compute engine so i'm going to go over to the navigation menu and i'm going to scroll down to compute engine and so just as a note before creating your instances please make sure that you have a default vpc created if you've forgotten how to create a default vpc please go back to the networking services section and watch the vpc lesson for a refresher and so i'm going to go ahead and create my first instance and i'm going to start with the windows instance so i'm going to simply click on create and for the name of this instance you can simply call this windows dash instance and i'm not going to add any labels and for the region you should select us east1 and you can keep the zone as the default for us east 1b and scrolling down to
the machine configuration for the machine type i’m going to keep it as is as it is a windows instance and i’m going to need a little bit more power scrolling down to boot disk we need to change this from debian over to windows so i’m going to simply click on the change button and under operating system i’m going to click on the drop down and select windows server for the version i’m going to select the latest version of windows server which is the windows server 2019 data center and you can keep the boot disk type and the size as its default and simply head on down and click on select and we’re going to leave everything else as the default and simply click on create and success our windows instance has been created and so the first thing that you want to do is you want to set a windows password for this instance and so i’m going to head on over to the rdp button and i’m going to click on the drop-down and here i’m going to select set windows password and here i’m going to get a pop-up to set a new windows password the username has been propagated for me as tony bowties i’m going to leave it as is and i’m going to click on set and i’m going to be prompted with a new windows password that has been set for me so i’m going to copy this and i’m going to paste it into my notepad so be sure to record it somewhere either write it down or copy and paste it into a text editor of your choice i’m going to click on close and so now for me to log into this i need to make sure of a couple things the first thing is i need to make sure that i have a firewall rule open for port 3389 the second is i need to make sure that i have an rdp client and so in order to satisfy my first constraint i’m going to head on over to the navigation menu and go down to vpc network here i’m going to select firewall and as expected the rdp firewall rule has been already created due to the fact that upon creation of the default vpc network this default firewall rule is always created and so now that i’ve gotten that out of the way i’m going to head back on over to compute engine and what i’m going to do is i’m going to record the external ip so that i’ll be able to log into it now i’m going to be logging into this instance from both a windows client and a mac client so starting with windows i’m going to head on over to my windows virtual machine and because i know windows has a default rdp client already built in i’m going to simply bring it up by hitting the windows key and typing remote desktop connection i’m going to click on that i’m going to paste in the public ip for the instance that i just recorded and i’m going to click on connect you should get a pop-up asking for your credentials i’m going to type in my username as tony bowtie ace as well i’m going to paste in the password and i’m going to click on ok i’m prompted to accept the security certificate and i’m going to select yes and success i’m now connected to my windows server instance and it’s going to run all its necessary startup scripts you may get a couple of prompts that come up asking you if you want to connect to your network absolutely i’m going to close down server manager just for now and another thing that i wanted to note is that when you create a windows instance there will automatically be provisioned a google cloud shell with the sdk pre-installed and so you’ll be able to run all your regular commands right from this shell without having to install it and this is due to the guest environment that was automatically installed on the vm instance upon 
creation and this is a perfect example of some of the scripts that are installed with the guest environment i’m going to go ahead and close out of this and i’m going to go ahead and close out of my instance hit ok and so being here in windows i wanted to show you an alternate way of logging into your instance through powershell so for those of you who are quite versed in windows and use powershell in your day-to-day there is an easy way to log into your instance using powershell now in order for me to do that i need to open another firewall rule covering tcp port 5986 so i’m going to head on over back to the google cloud console i’m going to head over to the navigation menu and i’m going to scroll down to vpc network i’m going to go into firewall and i’m going to create a new firewall rule and under name i’m going to name this as allow powershell i’m going to use the same for the description i’m going to scroll down to targets and i’m going to select all instances in the network and under source ip ranges for this demonstration i’m going to use 0.0.0.0 forward slash 0. and again this should not be used in a production environment but is used merely for this demo i’m going to leave everything else as is and i’m going to go down to protocols and ports i’m going to click on tcp and i’m going to type in 5986 for the port and i’m going to click on create i’m going to give it a second just to create and it took a couple seconds but our firewall rule is now created and so now i’m gonna head over to my windows vm and i’m gonna open up a powershell command prompt and hit the windows key and type in powershell and so in order for me to not get constantly asked about my username and password i’m going to use a variable that will keep my password for me and so every time i connect to my windows instance i won’t need to type it in all the time and so the command for that is dollar sign credentials equals get dash credential i’m going to hit enter and i’m going to get a prompt to type in my username and password so i’m going to simply type that in now along with my password and hit ok and if you don’t get a prompt with any errors then chances are that you’ve been successful at entering your credentials and so now in order to connect to the instance you’re going to need the public ip address again so i’m going to head on over back to the console i’m going to head on over to the navigation menu and back to compute engine here i’m going to record the external ip and i’m going to head on over back to my windows virtual machine and so you’re going to enter this command which i will include in the lesson text and you’ll also be able to find it in the github repository beside computer name you’re going to put in your public ip address of your windows instance and make sure at the end you have your credentials variable i’m going to simply click enter and success i’m now connected to my windows instance in google cloud so as you can see here on the left is the public ip of my windows instance and so these are the various ways that you can connect to your windows instance from a windows machine and so now for me to connect to my windows instance on a mac i’m going to head on over there now and like i said before i need to satisfy the constraint of having an rdp client unfortunately mac does not come with an rdp client and so the recommended tool to use is the chrome extension but i personally like microsoft’s rdp for mac application and so i’m going to go ahead and do a walkthrough of the installation so i’m 
going to start off by opening up safari and i’m going to paste in this url which i will include in the lesson text and microsoft has made available a microsoft remote desktop app available in the app store i’m going to go ahead and view it in the app store and i’m going to simply click on get and then install and once you’ve entered your credentials and you’ve downloaded and installed it you can simply click on open i’m going to click on not now and continue and i’m going to close all these other windows for better viewing i’m going to click on add pc i’m going to paste in the public ip address of my windows instance and under user account i’m going to add my user account type in my username paste in my password you can add a friendly name here i’m going to type in windows dash gc for google cloud and i’m going to click on add and then once you’ve pasted in all the credentials and your information you can then click on add and i should be able to connect to my windows instance by double clicking on this window it’s asking me for my certificates i’m going to hit continue and success i’m connected to my windows instance and so this is how you would connect to a windows instance from a windows machine as well as from a mac as well there are a couple of other options that i wanted to show you over here on the drop down beside rdp i can download an rdp file which will contain the public ip address of the windows instance along with your username if i need to reset my password i can view the gcloud command to do it or i can set a new windows password if i forgotten my old one and so that’s everything i had to show you with regards to connecting to a windows instance and so since this demo was getting kind of long i decided to split it up into two parts and so this is the end of part one of this demo and this would be a great opportunity to get up and have a stretch grab yourself a tea or a coffee and whenever you’re ready you can join me in part two where we will be starting immediately from the end of part 1 so you can complete this video and i’ll see you in part 2. 
[Music] welcome back this is part 2 of the connecting to your instances demo and we will be starting exactly where we left off in part one so with that being said let’s dive in and so now that we’ve created our windows instance and went through all the methods of how to connect to it let’s go ahead and create a linux instance i’m going to go up to the top menu here and click on create instance and i’m going to name this instance linux instance i’m not going to give it any labels under region i’m going to select the us east one region and the zone i’m going to leave it as its set default as us east 1b the machine configuration i’m going to leave it as is under boot disk i’m going to leave this as is with the debian distribution and i’m going to go ahead and click on create okay and our linux instance has been created and in order for me to connect to it i am going to ssh into it but first i need to satisfy the constraint of having a firewall rule with tcp port 22 open so i’m going to head on over to the navigation menu and i’m going to scroll down to vpc network i’m going to head on over to firewall and as expected the allow ssh firewall rule has been created alongside the default vpc network and so since i’ve satisfied that constraint i can head back on over to compute engine and so here i have a few different options that i can select from for logging into my linux instance i can open in a browser window if i decided i wanted to put it on a custom port i can use this option here if i provided a private ssh key to connect to this linux instance i can use this option here i have the option of viewing the gcloud command in order to connect to it and i’ve been presented with a pop-up with the command to use within the gcloud command line in order to connect to my instance i can run it now in cloud shell but i’m going to simply close it and so whether you are on a mac a windows machine or a linux machine you can simply click on ssh and it will open a new browser window connecting you to your instance now when you connect to your linux instance for the first time compute engine generates an ssh key pair for you this key pair by default is added to your project or instance metadata and this will give you the freedom of not having to worry about managing keys now if your account is configured to use os login compute engine stores the generated key pair with your user account now when connecting to your linux instance in most scenarios google recommends using os login this feature lets you use iam roles to manage ssh access to linux instances and this relieves the complexity of having to manage multiple key pairs and is the recommended way to manage many users across multiple instances or projects and so i’m going to go ahead now and show you how to configure os login for your linux instance and the way to do this will be very similar on all platforms so i’m going to go ahead and go back to my mac vm and i’m going to open up my terminal make this bigger for better viewing and i’m going to start by running the gcloud init command in order to make sure i’m using the right user and for the sake of this demonstration i’m going to re-initialize this configuration so i’m going to click on one hit enter number two for tony bowtie ace and i’m going to use project bow tie ink so 1 and i’m not going to configure a default compute region in zone and so if i run the gcloud config list command i can see that the account that i’m using is tony bowties gmail.com in project bowtie inc and so because os
login requires a key pair i’m going to have to generate that myself so i’m going to go ahead and clear the screen and i’m going to use the command ssh keygen and this is the command to create a public and private key pair i’m going to use the default path to save my key and i’m going to enter a passphrase i’m going to enter it again and i recommend that you write down your passphrase so that you don’t forget it as when you lose it you will be unable to use your key pair and so if i change directory to dot ssh and do an ls for list i can see that i now have my public and private key pair the private key lying in id underscore rsa and the public key lying in id underscore rsa.pub and so another constraint that i have is i need to enable os login for my linux instance so i’m going to go ahead and go back to the console and i’m going to go ahead and go into my linux instance i’m going to click on edit and if you scroll down you will come to some fields marked as custom metadata and under key you will type in enable dash os login and under value you will type in all caps true now i wanted to take a moment here to discuss this feature here under ssh keys for block project wide ssh keys now project wide public ssh keys are meant to give users access to all of the linux instances in a project that allow project project-wide public ssh keys so if an instance blocks project-wide public ssh keys as you see here a user can’t use their project-wide public ssh key to connect to the instance unless the same public ssh key is also added to the instance metadata this allows only users whose public ssh key is stored in instance level metadata to access the instance and so this is an important feature to note for the exam and so we’re going to leave this feature checked off for now and then you can go to the bottom and click on save now if i wanted to enable os login for all instances in my project i can simply go over to the menu on the left and click on metadata and add the metadata here with the same values so under key i type in enable dash os login and under value i type in in all caps true but i don’t want to enable it for all my instances only for that one specific instance so with regards to project-wide public keys these keys can be managed through metadata and should only be used as a last resort if you cannot use the other tools such as ssh from the console or os login these are where the keys are stored and so you can always find them here when looking for them here as you can see there are a couple of keys for tony bowtie ace that i have used for previous instances and so i’m going to go back to metadata just to make sure that my key value pair for os login has not been saved and it is not and i’m going to head back on over to my instances and so now that my constraint has been fulfilled where i’ve enabled the os login feature by adding the unnecessary metadata i’m going to head on over back to my mac vm i’m going to go ahead and clear the screen so now i’m going to go ahead and log into my instance using os login by using the command gcloud compute os dash login ssh dash keys add and then the flag key dash file and then the path for my public key which is dot ssh forward slash id underscore rsa.pub i’m gonna hit enter and so my key has been successfully stored with my user account i’m gonna go ahead and make this a little bigger for better viewing and so in order to log into my instance i’m going to need my username which is right up here under username i’m going to copy that and i’m just going 
to clear my screen for a second here for better viewing and so in order for me to ssh into my instance i’m going to type in the command ssh minus i i’m going to have to provide my private key which is in dot ssh forward slash id underscore rsa and then my username that i had recorded earlier at and then i’m going to need my public ip address of my linux instance so i’m going to head back over to the console for just a sec i’m going to copy the ip address head back over to my mac vm paste it in and hit enter it’s asking if i want to continue yes i do enter the passphrase for my key and success i am connected and so there is one caveat that i wanted to show you with regards to permissions for os login so i’m going to head back over to the console and i’m going to go up to the navigation menu and head over to i am an admin now as you can see here tony bowties gmail.com has the role of owner and therefore i don’t need any granular specific permissions i have the access to do absolutely anything now in case i was a different user and i didn’t hold the role of owner i would be looking for specific permissions that would be under compute os login and this would give me permissions as a standard user now if i wanted super user access or root access i would need to be given the compute os admin login role and as you can see it would allow me administrator user privileges so when using os login and the member is not an owner one of these two roles are needed so i’m going to exit out of here i’m going to hit cancel and so that about covers everything that i wanted to show you with regards to all the different methods that you can use for connecting to vm instances for both windows and linux instances now i know this may have been a refresher for some but for others knowing all the different methods of connecting to instances can come in very useful especially when coordinating many instances in bigger environments i want to congratulate you on making it to the end of this demo and gaining a bit more knowledge on this crucial part of managing your instances so before you go be sure to delete any resources that you’ve created and again congrats on the great job so you can now mark this as complete and i’ll see you in the next one welcome back in this demonstration i’ll be discussing metadata and how it can pertain to a project as well as an instance as well i’m going to touch on startup and shutdown scripts and it’s real world use cases in the last lesson we touched the tip of the iceberg when it came to metadata and wanted to go a bit deeper on this topic as i personally feel that it holds so much value and give you some ideas on how you can use it i’m also going to combine the metadata using variables in a startup script and i’m going to bring to life something that’s dynamic in nature so with that being said let’s dive in so i am currently logged in as tony at bowtie ace gmail.com under the project of bow tie inc and so in order to get right into the metadata i’m going to head on over to my navigation menu and go straight to compute engine and over here on the left hand menu you will see metadata and you can drill down into there now as i explained in a previous lesson metadata can be assigned to both projects and instances while instance metadata only impacts a specific instance so here i can add and store metadata which will be used on a project-wide basis as well as mentioned earlier metadata is stored in key value pairs and can be added at any time now this is a way to add custom metadata but 
there is a default set of metadata entries that every instance has access to and again this applies for both project and instance metadata so here i have the option of setting my custom metadata for the entire project and so i’m going to dive into where to store custom metadata on an instance and so in order for me to show you this i’m going to first head over to vm instances and create my instance and so just as a note before creating your instance make sure that you have the default vpc created and so because i like to double check things i’m going to head over to the navigation menu i’m going to scroll down to vpc network and as expected i have the default vpc already created and so this means i can go ahead and create my instance so i’m going to head back on over to compute engine and i’m going to create my instance and i’m going to name this instance bowtie dash web server i’m not going to add any labels and under the region i’m going to select us east one and you can keep the zone as the default as us east 1b under machine type i want to keep things cost effective so i’m going to select the e2 micro i’m going to scroll down and under identity and api access i want to set access for each api and scroll down to compute engine i want to select it and i want to select on read write and i’m going to leave the rest as is and scrolling down to the bottom i want to click on management security disks networking and sold tenancy and under here you will find the option to add any custom metadata and you can provide it right here under metadata as a key value pair but we’re not going to add any metadata right now so i’m just going to scroll down to the bottom i’m going to leave everything else as is and simply click on create and it should take a few moments for my instance to be created okay and now that my instance is up i want to go ahead and start querying the metadata now just as a note metadata must be queried from the instance itself and can’t be done from another instance or even from the cloud sdk on your computer so i’m going to go ahead and log into the instance using ssh okay and now that i’m logged into my instance i want to start querying the metadata now normally you would use tools like wget or curl to make these queries in this demo i will use curl and for those who don’t know curl is a command line tool to transfer data to or from a server using supported protocols like http ftp scp and many more this tool is fantastic for automation since it’s designed to work without any user interaction and so i’m going to paste in the url that i am going to use to query the instance metadata and this is the default url that you would use to query any metadata on any instance getting a little deeper into it a trailing slash shown here shows that the instance value is actually a directory and will have other values that append to this url whether they are other directories or just endpoint values now when you query for metadata you must provide the following header in all of your requests metadata dash flavor colon google and should be put in quotations if you don’t provide this header the metadata server will deny your request so i’m going to go ahead and hit enter and as you can see i’ve been brought up a lot of different values that i can choose from in order to retrieve different types of metadata and as stated before anything with a trailing slash is actually a directory and will have other values underneath it so if i wanted to query the network interfaces and because it’s a directory i 
need to make sure that i add the trailing slash at the end and as you can see here i have the network interface of 0 and i’m going to go ahead and query that and here i will have access to all the information about the network interface on this instance so i’m going to go ahead and query the network on this interface and as expected the default network is displayed i’m going to quickly go ahead and clear my screen and i’m going to go ahead and query some more metadata this time i’m going to do the name of the server and as expected bowtie dash web server showed up and because it’s an endpoint i don’t need the trailing slash at the end i’m going to go ahead and do one more this time i’m going to choose machine type and again as expected the e2 micro machine type is displayed and so just as a note for those who haven’t noticed any time that you query metadata it will show up to the left of your command prompt now what i’ve shown you here is what you can do with instance metadata and so how about if you wanted to query any project metadata well instead of instance at the end you would use project with the trailing slash i’m going to simply click on enter and as you can see here project doesn’t give me a whole lot of options but it does give me some important values like project id so i’m going to simply query that right now and as expected bowtie inc is displayed and so this is a great example of how to query any default metadata for instances and for projects now you’re probably wondering how do i query my custom metadata well once custom metadata has been set you can then query it from the attributes directory in the attributes directory can be found in both the instance and project metadata so i’m going to go ahead and show you that now but first i wanted to add some custom metadata and this can be set in either the console the gcloud command line tool or using the api and so i’m going to run the command here gcloud compute instances add dash metadata the name of your instance and when you’re adding custom metadata you would add the flag dash dash metadata with the key value pair which in this example is environment equals dev and then i’m also going to add the zone of the instance which is us east 1a and i’m going to hit enter and because i had a typo there i’m going to go ahead and try that again using us east 1b i’m going to hit on enter and success and so to verify that this command has worked i’m going to go ahead and query the instance and i’m going to go under attributes i’m going to hit on enter and as you can see here the environment endpoint has been populated so i’m going to query that and as expected dev is displaying as the environment value now if i wanted to double check that in the console i can go over to the console i can drill down into bowtie web server and if i scroll down to the bottom under custom metadata you can see the key value pair here has m as the key and dev being the value and so these are the many different ways that you can query metadata for any instances or projects now i wanted to take a quick moment to switch gears and talk about startup and shutdown scripts now compute engine lets you create and run your own startup and shutdown scripts on your vm instance and this allows you to perform automation that can perform actions when starting up such as installing software performing updates or any other tasks that are defined in the script and when shutting down you can allow instances time to clean up on perform tasks such as exporting logs to cloud 
storage or bigquery or syncing with other systems and so i wanted to go ahead and show you how this would work while combining metadata into the script so i’m going to go ahead and drill down into bow tie web server i’m going to click on edit and i’m going to scroll down here to custom metadata i’m going to click on add item and under key i’m going to type in startup dash script and under value i’m going to paste in my script i’m going to just enlarge this here for a second and i will be providing the script in the github repository now just to break it down this is a bash script i’m pulling in a variable called name which will query the instance name as well i have a variable called zone which will query the instance zone i’m going to be installing an apache web server and it’s going to display on a web browser both the server name and the zone that it’s in and so in order for me to see this web page i also need to open up some firewall rules and so an easy way to do this would be to scroll up to firewalls and simply click on allow http and allow https traffic this will tag the instance with some network tags as http server and https server and create two separate firewall rules that will allow traffic for port 80 and port 443 so i’m going to leave everything else as is i’m going to scroll down to the bottom and click on save okay and it took a few seconds there but it did finish saving i’m going to go ahead and go up to the top and click on reset and this will perform a hard reset on the instance and will allow the startup script to take effect so i’m going to click on reset it’s going to ask me if i really want to do this and for the purposes of this demonstration i’m going to click on reset please note you should never do this in production as it doesn’t do a clean shutdown on the operating system but as this is an instance with nothing on it i’m going to simply click on reset now i’m going to head on back to the main console for my vm instances and i’m going to record my external ip i’m going to open up a new browser i’m going to zoom in for better viewing and i’m going to paste in my ip address and hit enter and as you can see here i’ve used my startup script to display not only this web page but i was able to bring in metadata that i pulled using variables and was able to display it here in the browser and so before i end this demonstration i wanted to show you another way of using a startup script but being able to pull it in from cloud storage so i’m going to go back to the navigation menu and i’m going to scroll down to storage here i will create a new bucket and for now find a globally unique name to name your bucket and i’m going to call my bucket bowtie web server site and i’m going to leave the rest as its default and i’m going to simply click on create and if you have a globally unique name for your bucket you will be prompted with this page without any errors and i’m going to go ahead and upload the script and you can find this script in the github repository so i’m going to go into my repo and i’m going to look for bow tie start up final sh i’m going to open it and now that i have the script uploaded i’m going to drill into this file so i can get some more information that i need for the instance and what i need from here is to copy the uri so i’m going to copy this to my clipboard and i’m going to head back on over to compute engine i’m going to drill down into my instance i’m going to click on edit at the top and i’m going to scroll down to where it says custom metadata 
and here i’m going to remove the startup script metadata and i’m going to add a new item and i’m going to be adding startup dash script dash url and in the value i’m going to paste in the uri that i had just copied over and this way on startup my instance will use this startup script that’s in cloud storage so i’m going to scroll down to the bottom click on save and now i’m going to click on reset i’m going to reset here i’m going to go back to the main page for my vm instances and i can see that my external ip hasn’t changed so i’m going to go back to my open web browser and i’m going to click on refresh and success and as you can see here i’ve taken a whole bunch of different variables including the machine name the environment variable the zone as well as the project and i’ve displayed it here in a simple website and although you may not find this website specifically useful in your production environment this is just an idea to get creative using default and custom metadata along with a startup script i’ve seen in some environments where people have multiple web servers and create a web page to display all the specific web servers in their different environments along with their ips their data and their configurations and so just as a recap we’ve gone through the default and custom metadata and how to query it in an instance we also went through startup scripts and how to apply them both locally and using cloud storage and so i hope you have enjoyed having fun with metadata and using them in startup scripts such as this one i also hope you find some fascinating use cases in your current environments and so before you go just a quick reminder to delete any resources that you’ve created to not incur any added costs and so that’s pretty much all i wanted to cover with this demonstration so you can now mark this as complete and let’s move on to the next one [Music] welcome back and in this lesson i’m going to be discussing compute engine billing now when it comes to pricing with regards to compute engine i’ve only gone over the fact that instances are charged by the second after the first minute but i never got into the depths of billing and the various ways to save money when using compute engine in this lesson i will be unveiling how both costs and discounts are broken down in google cloud as it refers to the resource based billing model and the various savings that can be had when using compute engine so with that being said let’s dive in now each vcpu and each gigabyte of memory on compute engine is built separately rather than as part of a single machine type you are still creating instances using pre-defined machine types but your bill shows them as individual cpus and memory used per hour and this is what google refers to as resource-based billing which i will get into in just a bit the billing model applies to all vcpus gpus and memory resources and are charged a minimum of one minute for example if you run your virtual machine for 30 seconds you will be billed for one minute of usage after one minute instances are charged in one second increments instance up time is another determining factor for cost and is measured as the number of seconds between when you start an instance and when you stop an instance in other words when your instance is in the terminated state if an instance is idle but still has a state of running it will be charged for instance uptime but again you will not be charged if your instance is in a terminated state now getting into reservations these are designed 
to reserve the vm instances you need so after you create a reservation the reservation ensures that those resources are always available for you to use during the creation process you can choose how a reservation is to be used for example you can choose for a reservation to be automatically applied to any new or existing instances that match the reservation’s properties which is the default behavior or you can specify that reservation to be consumed by a specific instance in all cases a vm instance can only use a reservation if its properties exactly match the properties of the reservation after you create a reservation you begin paying for the reserved resources immediately and they remain available for your project to use indefinitely until the reservation is deleted reservations are great to ensure that your project has resources for future increases in demand including planned or unplanned spikes backup and disaster recovery or for a buffer when you’re planning growth when you no longer need a reservation you can simply delete the reservation to stop incurring charges each reservation like normal vms are charged based on existing on-demand rates which include sustained use discounts and are eligible for committed use discounts which i will be getting into in just a bit now purchasing reservations do come with some caveats reservations apply only to compute engine data proc and google kubernetes engine as well reservations don’t apply to shared core machine types preemptable vms sole tenant nodes cloud sql and data flow now as i explained before each vcpu and each gigabyte of memory on compute engine is built separately rather than as a part of a single machine type and is billed as individual cpus and memory used per hour resource-based pricing allows compute engine to apply sustained use discounts to all of your pre-defined machine type usage in a region collectively rather than to individual machine types and this way vcpu and memory usage for each machine type can receive any one of the following discounts sustained use discounts committed use discounts and preemptable vms and i’d like to take a moment to dive into a bit of detail on each of these discount types starting with sustained use discounts now sustained use discounts are automatic discounts for running specific compute engine resources a significant portion of the billing month for example when you run one of these resources for more than 25 percent of a month compute engine automatically gives you a discount for every incremental minute that you use for that instance now the following tables show the discounts applied for the specific resources described here now for the table on the left for general purpose n2 and n2d predefined and custom machine types and for compute optimized machine types you can receive a discount of up to 20 percent the table on the right shows that for general purpose n1 predefined and custom machine types as well as sole tenant nodes and gpus you can get a discount of up to 30 percent sustained use discounts are applied automatically to usage within a project separately for each region so there is no action required on your part to enable these discounts now some notes that i wanted to cover here is that sustained use discounts automatically apply to vms created by both google kubernetes engine and compute engine as well they do not apply to vms created using the app engine flexible environment as well as data flow and the e-2 machine types sustained use discounts are applied on incremental use 
after you reach certain usage thresholds this means that you pay only for the number of minutes that you use an instance and compute engine automatically gives you the best price google truly believes that there’s no reason to run an instance for longer than you need it now sustained use discounts are applied on incremental use after you reach certain usage thresholds this means that you pay only for the number of minutes that you use an instance and compute engine automatically gives you the best price now consider a scenario where you have two instances or sole tenant nodes in the same region that have different machine types and run at different times of the month compute engine breaks down the number of vcpus and amount of memory used across all instances that use predefined machine types and combines the resources to qualify for the largest sustained usage discounts possible now in this example assume you run the following two instances in the us east one region during a month for the first half you run an n1 standard four instance with four vcpus and 15 gigabytes of memory for the second half of the month you run a larger and one standard 16 instance with 16 vcpus and 60 gigabytes of memory in this scenario compute engine reorganizes these machine types into individual vcpu and memory resources and combines their usage to create the following resources for vcpus so because four vcpus were being used for the whole month the discount here would be thirty percent the additional twelve vcpus were added on week two in the month and so for those 12 vcpus they would receive a 10 discount and this is how discounts are applied when it comes to sustained use discounts now moving on to the next discount type is committed use discounts so compute engine lets you purchase committed use contracts in return for deeply discounted prices for vm usage so when you purchase a committed use contract you purchase compute resource which is comprised of vcpus memory gpus and local ssds and you purchase these resources at a discounted price in return for committing to paying for those resources for one year or three years committed use discounts are ideal for workloads with predictable resource needs so if you know exactly what you’re going to use committed use discounts would be a great option for this and the discount is up to 57 for most resources like machine types or gpus when it comes to memory optimized machine types the discount is up to 70 percent now when you purchase a committed use contract you can purchase it for a single project and applies to a single project by default or you can purchase multiple contracts which you can share across many projects by enabling shared discounts once purchased your billed monthly for the resources you purchased for the duration of the term you selected whether you use the services or not if you have multiple projects that share the same cloud billing account you can enable committed use discount sharing so that all of your projects within that cloud billing account share all of your committed use discount contracts your sustained use discounts are also pooled at the same time now some caveats when it comes to committed use discounts shared core machines are excluded on this as well you can purchase commitments only on a per region basis if a reservation is attached to a committed use discount the reservation can’t be deleted for the duration of the commitment so please be aware now to purchase a commitment for gpus or local ssds you must purchase a general 
purpose and one commitment and lastly after you create a commitment you cannot cancel it you must pay the agreed upon monthly amount for the duration of the commitment now committed use discount recommendations give you opportunities to optimize your compute costs by analyzing your vm spending trends with and without a committed use discount contract by comparing these numbers you can see how much you can save each month with a committed use contract and this can be found under the recommendations tab on the home page in the console and so i wanted to move on to the last discount type which are preemptable vms now preemptable vms are up to eighty percent cheaper than regular instances pricing is fixed and you never have to worry about variable pricing these prices can be found on the link to instance pricing that i have included in the lesson text a preemptable vm is an instance that you can create and run at a much lower price than normal instances however compute engine might stop or preempt these instances if it requires access to those resources for other tasks as preemptable instances our access compute engine capacity so their availability varies with usage now generally compute engine avoids preempting instances but compute engine does not use an instant cpu usage or other behavior to determine whether or not to preempt it now a crucial characteristic to know about preemptable vms is that compute engine always stops them after they run for 24 hours and this is something to be aware of for the exam preemptable instances are finite compute engine resources so they might not always be available and if you happen to accidentally spin up a preemptable vm and you want to shut it down there is no charge if it’s running for less than 10 minutes now another thing to note is that preemptable instances can’t live migrate to a regular vm instance or be set to automatically restart when there is a maintenance event due to the limitations preemptable instances are not covered by any service level agreement and when it comes to the google cloud free tier credits for compute engine this does not apply to preemptable instances so you’re probably asking when is a great time to use preemptable vms well if your apps are fault tolerant and can withstand possible instance preemptions then preemptable instances can reduce your compute engine costs significantly for example batch processing jobs can run on preemptable instances if some of those instances stop during processing the job slows down but does not completely stop preemptable instances create your batch processing tasks without placing any additional workload on your existing instances and without requiring for you to pay full price for additional normal instances and since containers are naturally stateless and fault tolerant this makes containers an amazing fit for preemptable vms so running preemptable vms for google kubernetes engine is another fantastic use case now it’s really critical that you have an understanding for each different discount type and when is a good time to use each as you may be presented different cost-effective solutions in the exam and understanding these discount types will prepare you to answer them understanding the theory behind this resource-based pricing model all the available discount types along with the types of workloads that are good for each will guarantee that you will become familiar with what types of questions are being asked in the exam and will also make you a better cloud engineer as you will be 
able to spot where you can save money and be able to make the appropriate changes and so that’s pretty much all i wanted to cover when it comes to compute engine billing and its discount types so you can now mark this lesson as complete and let’s move on to the next one welcome back in this lesson i’m going to be covering the fundamentals as it pertains to storage these concepts are needed to know in order to fully understand the different google cloud storage options that i will be diving into later as well the exam expects that you know the different types of storage that’s available for all the various services and so before i get into the different types of storage i wanted to cover the underlying theory behind it so with that being said let’s dive in so i wanted to start off by going through the three types of storage and how data is presented to a user or to the server there is block storage file storage and object storage these types of storage tie into the available services that are available in google cloud and they offer different options for different types of workloads and i will be going over each of these in a bit of depth and so the first one i wanted to touch on is block storage now block storage is sometimes referred to as block level storage and is a technology that is used to store data files on storage systems or cloud-based storage environments block storage is the fastest available storage type and it is also efficient and reliable with block storage files are split into evenly sized blocks of data each with its own unique identifier it is presented to the operating system as structureless raw data in the form of a logical volume or a hard drive and the operating system structures it with a file system like ext3 or ext4 on linux and ntfs for windows it would then mount this volume or drive as the root volume in linux or a c or d drive in windows block storage is usually delivered on physical media in the case of google cloud it is delivered as either spinning hard drives or solid state drives so in google cloud you’re presented with block storage that consists of either persistent disks or local ssd which can both be mountable and bootable block storage volumes can then be used as your boot volumes for compute instances in google cloud installed with your operating system of choice and structured so that your operating system database or application will then be able to consume it now moving on to the second type of storage is file storage now file storage is also referred to as file level or file based storage and is normally storage that is presented to users and applications as a traditional network file system in other words the user or application receives data through directory trees folders and files file storage also allows you to do the same this functions similarly to a local hard drive however a structure has already been applied and cannot be adjusted after the fact this type of structure only has the capabilities of being mountable but not bootable you cannot install an operating system on file storage as i said before the structure has already been put in place for you and is ready for you or your application to consume due to this structure the service that is serving the file system has some underlying software that can handle access rights file sharing file locking and other controls related to file storage in google cloud this service that serves this type of storage is known as cloud file store and is usually presented over the network to users in 
your vpc network using the nfs protocol or in this case nfs version 3. but i’ll be diving into that a little bit later and the last storage type that i wanted to cover is object storage now object storage also referred to as object-based storage is a general term that refers to the way in which we organize and work with units of storage called objects and this is a storage type that is a flat collection of unstructured data and this type of storage holds no structure like the other two types of storage and is made up of three characteristics the first one is the data itself and this could be anything from movies songs and even photos of men in fancy bow ties the data could also be binary data as well the second characteristic is the metadata and this is usually related to any contextual information about what the data is or anything that is relevant to the data and the third characteristic is a globally unique identifier and this way it’s possible to find the data without having to know the physical location of the data and this is what allows object storage to be infinitely scalable as it doesn’t matter where the object is stored this type of storage can be found in google cloud and is known as cloud storage cloud storage is flat storage with a logical container called a bucket that you put objects into now although this type of storage is not bootable using an open source tool called fuse this storage type can be mounted in google cloud and i will be covering that a little bit later in the cloud storage lesson but in most cases object store is designed as the type of storage that is not bootable or mountable and because of the characteristics of this storage it allows object storage again to be infinitely scalable and so these are the three main types of storage that you will need to know and understand as each has its use cases so if you’re looking for high performance storage you will always look to block storage to satisfy your needs if you’re looking to share files across multiple systems or have multiple applications that need access to the same files and directories then file storage might be your best bet if you’re looking to store terabytes of pictures for a web application and you don’t want to worry about scaling object storage will allow you to read and write an infinite amount of pictures that will meet your requirements so now that we’ve covered these storage types let’s take a few moments to discuss storage performance terms now when discussing storage performance there are some key terms to understand that when used together define the performance of your storage first there is io which stands for input output and is a single read write request and can be measured in block size and this block size can vary anywhere from one kilobyte to four megabytes and beyond depending on your workload now q depth when it comes to storage is the number of pending input output requests waiting to be performed on a disk io requests become queued when reads or writes are requested faster than they can be processed by the disk when io requests are queued the total amount of time it takes to read or write data to disk becomes significantly higher this is where performance degradation can occur and queue depth must be adjusted accordingly now the next term is a common touch point when it comes to discussing storage performance on gcp and on the exam which is iops and this is a metric that stands for input output operations per second this value indicates how many different input or output 
operations a device or group of devices can perform in one second more value in the iops signifies the capability of executing more operations per second and again this is a common touch point that i will be diving into a little bit later now next up is throughput and this is the speed at which the data is transferred in a second and is most commonly measured in megabytes per second this is going to be another common topic that comes up frequently when discussing storage on gcp as well latency is the measurement of delay between the time data is requested when the data starts being returned and is measured in milliseconds so the time each io request will take to complete results in being your average latency and the last two terms i wanted to bring up is sequential and random access sequential would be a large single file like a video and random access would be loading an application or an operating system so lots of little files that are all over the place it’s obvious that accessing data randomly is much slower and less efficient than accessing it sequentially and this can also affect performance now why i bring up all these terms is not about calculating the average throughput but to give you a holistic view on storage performance as all these characteristics play a part in defining the performance of your storage there is not one specific characteristic that is responsible for disk performance but all have a role in achieving the highest performance possible for your selected storage now i know this is a lot of theory to take in but this will all start to make more sense when we dive into other parts of the course where we will discuss disk performance with all these characteristics as it relates to compute engine and other services that use storage it is crucial to know the storage types as well as the performance characteristics as it will bring clarity to questions in the exam and also give you a better sense on how to increase your storage performance in your work environment and so that’s pretty much all i wanted to cover when it comes to storage types and storage performance as it pertains to storage as a whole so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i’m going to be covering persistent disks and local ssds i’m going to be getting into the detail with the most commonly used storage types for instances which are both persistent disks and local ssds this lesson will sift through all the different types of persistent disks and local ssds along with the performance of each knowing what type of disk to use for your instance and how to increase disk performance shows up on the exam and so i want to make sure to cover it in detail and leave no stone unturned so with that being said let’s dive in now persistent disks and local ssds are the two available types of block storage devices available in google cloud and the determining factor of what you will use for your particular scenario will depend on your use case and the specific characteristics that you require from each storage medium now by default each compute engine instance has a single boot persistent disk that contains the operating system when you require additional storage space you can add one or more additional persistent disks or local ssds to your instance and i will be going through these storage options along with their characteristics now as you can see here persistent disks and local ssds come in a slew of different types as well with 
persistent disks they are available in both zonal and regional options so starting off with persistent disks you have three different types you can choose from as well you have the flexibility of choosing from two different geographic options when it comes to the redundancy of your persistent disks and i will be covering the zonal and regional options in detail in just a bit now persistent disks are durable network storage devices that your instances can access like physical disks in a computer so these are not physically attached disks but network disks that are connected over google’s internal network persistent disks are independent of your instance and can persist after your instance has been terminated and this can be done by turning on this flag upon creation you can even detach your disk and move it to other instances when you need to scaling persistent disks can be done automatically and on the fly by using the disk resize feature and this gives you the flexibility to resize your current persistent disks with no downtime and even add additional disks to your instance for additional performance and storage persistent disks are also encrypted by default and google also gives you the option of using your own custom keys each persistent disk can be up to 64 terabytes in size and most instances can have up to 128 persistent disks and up to 257 terabytes of total persistent disk space attached and just as a note share core machine types are limited to 16 persistent disks and 3 terabytes of total persistent disk space and so now that i’ve gone through the details of persistent disks i wanted to dive into the two geographic options that’s available for persistent disks first starting with zonal now zonal persistent disks are disks that are available in one zone in one region these disks are the most commonly used persistent disks for general day-to-day usage and used for those whose workloads are not sensitive to specific zone outages they are redundant within the zone you’ve created them in but cannot survive an outage of that zone and may be subjected to data loss if that specific zone is affected and this is where snapshots should be a part of your high availability strategy when using zonal persistent disks snapshots are incremental and can be taken even if you snapshot disks that are attached to running instances and i’ll be going into detail about snapshots in a later lesson zonal persistent disks can also be used with any machine type including pre-defined shared core and custom machine types now when it comes to regional persistent disks they have storage qualities that are similar to zonal persistent disks however regional persistent disks provide durable storage and replication of data between two zones in the same region if you are designing systems that require high availability on compute engine you should use regional persistent disks combined with snapshots for durability regional persistent disks are also designed to work with regional managed instance groups in the unlikely event of a zonal outage you can usually fail over your workload running on regional persistent disks to another zone by simply using the force attached flag regional persistent disks are slower than zonal persistent disks and should be taken into consideration when write performance is less critical than data redundancy across multiple zones now noting a couple of caveats here when it comes to disk limits regional persistent disks are similar to zonal persistent disks however regional standard 
persistent disks have a 200 gigabyte size minimum and may be a major factor when it comes to cost so please be aware as well you can’t use regional persistent disks with memory optimized machine types or compute optimized machine types now these two geographic options are available for all three persistent disk types whose characteristics i will dive into now starting off with the standard persistent disk type also known in google cloud as pd standard now these persistent disks are backed by standard hard disk drives and these are your standard spinning hard disk drives and allows google cloud to give a cost effective solution for your specific needs standard persistent disks are great for large data processing workloads that primarily use sequential ios now as explained earlier sequential access would be accessing larger files and would require less work by the hard drive thus decreasing latency as there are physical moving parts in this hard drive this would allow the disc to do the least amount of work as possible and therefore making it the most efficient as possible and therefore sequential ios are best suited for this type of persistent disk and again this is the lowest price persistent disks out of all the persistent disk types now stepping into the performance of standard persistent disks for just a second please remember that iops and throughput performance depends on disk size instance vcpu count and i o block size among other factors and so this table here along with the subsequent tables you will see later are average speeds that google has deemed optimum for these specific disk types they cover the maximum sustained iops as well as the maximum sustained throughput along with the granular breakdown of each here you can see the differences between both the zonal and regional standard pd and as you can see here in the table the zonal standard pd and the regional standard pd are pretty much the same when it comes to most of these metrics but when you look closely at the read iops per instance this is where they differ where the zonal standard pd has a higher read iops per instance than the regional standard pd and this is because the regional standard pd is accessing two different disks in two separate zones and so the latency will be higher the same thing goes for right throughput per instance and so this would be a decision between high availability versus speed moving on to the next type of persistent disk is the balanced persistent disk in google cloud known as pd balance this disk type is the alternative to the ssd persistent disks that balance both performance and cost as this disk type has the same maximum iops as the ssd persistent disk type but holds a lower iops per gigabyte and so this disk is designed for general purpose use the price for this disk also falls in between the standard and the ssd persistent disks so this is basically your middle of the road disk when you’re trying to decide between price and speed moving straight into performance i put the standard pd metric here so that you can see a side-by-side comparison between the balance pd and the standard pd and as you can see here when it comes to the metrics under the maximum sustained iops the balance pd is significantly higher than the standard pd in both the zonal and regional options as well looking at the maximum sustained throughput the read write throughput per gigabyte is a little over two times faster and the right throughput per instance is three times faster so quite a bit of jump from the standard 
pd to the balance pd and moving on to the last persistent disk type is the ssd persistent disk type also known in google cloud as a pd ssd and these are the fastest persistent disks that are available and are great for enterprise applications and high performance databases that demand lower latency and more iops so this would be great for transactional databases or applications that require demanding and near real-time performance the pd ssds have a single digit millisecond latency and because of this comes at a higher cost and therefore is the highest price persistent disk moving on to the performance of this persistent disk this disk type is five times faster when it comes to read iops per gigabyte than the balance pd as well as five times faster for the right iops per gigabyte and so the table here on the left shows the performance for the pd ssd and the table on the right shows the performance of both the standard pd and the balance pd and so here you can see the difference moving from the standard pd over to the ssd pd the read write throughput per instance stays the same from the standard pd all the way up to the ssd pd but where the ssd outperforms all the other ones is through the read write throughput per gigabyte it’s one and a half times faster than the balance pd and four times faster than the standard pd and again you will also notice a drop in performance from the zonal option to the regional option and so this is the end of part one of this lesson as it started to get a little bit long and so whenever you’re ready you can join me in part two where i will be starting immediately from the end of part one so you can complete this video and i will see you in the next [Music] welcome back this is part two of the persistent disks and local ssds lesson and we will be starting exactly where we left off in part one so with that being said let’s dive in and so now that i’ve covered all the persistent disk types i wanted to move into discussing the characteristics of the local ssd local ssds are physically attached to the server that hosts your vm instance local ssds have higher throughput and lower latency than any of the available persistent disk options and again this is because it’s physically attached and the data doesn’t have to travel over the network now the crucial thing to know about local ssds is that the data you store on a local ssd persists only until the instance is stopped or deleted once the instance is stopped or deleted your data will be gone and there is no chance of getting it back now each local ssd is 375 gigabytes in size but you can attach a maximum of 24 local ssd partitions for a total of 9 terabytes per instance local ssds are designed to offer very high iops and very low latency and this is great for when you need a fast scratch disk or a cache and you don’t want to use instance memory local ssds are also available in two flavors scuzzy and mvme now for those of you who are unaware scuzzy is an older protocol and made specifically for hard drives it also holds the limitation of having one queue for commands nvme on the other hand also known as non-volatile memory express is a newer protocol and is designed for the specific use of flash memory and designed to have up to 64 000 qs as well each of those queues in turn can have up to 64 000 commands running at the same time and thus making nvme infinitely faster now although nvme comes with these incredible speeds it does come at a cost and so when it comes to the caveats of local ssd although compute engine 
Now, a few caveats on local SSDs: although Compute Engine automatically encrypts your data when it's written to local SSD storage, you can't use customer-supplied encryption keys with local SSDs, and local SSDs are only available for the N1, N2, and compute-optimized machine types. Moving on to performance, throughput is the same between SCSI and NVMe, but read and write IOPS per instance is where NVMe comes out on top: a whopping 2,400,000 read IOPS per instance, and 1,200,000 write IOPS per instance versus 800,000 for SCSI.

Before I end this lesson, I want to cover a few points on performance scaling as it pertains to block storage on Compute Engine. Persistent disk performance scales with the size of the disk and with the number of vCPUs on your VM instance, and it scales linearly until it reaches either the limits of the volume or the limits of the Compute Engine instance, whichever is lower. It may seem odd that disk performance scales with CPU count, but remember that persistent disks aren't physically attached to your VM; they are located independently, so I/O on a persistent disk is a network operation, and it takes CPU to do that I/O, which means smaller instances run out of CPU before they can drive disk I/O at higher rates. To get better performance you can increase a disk's IOPS by resizing it, but once the disk reaches its maximum size you'll have to increase the number of vCPUs on the instance to push disk performance further. Google's recommendation is one available vCPU for every 2,000 IOPS of expected traffic. To sum it up: performance scales until it reaches either the limits of the disk or the limits of the VM instance the disk is attached to, and the instance limits are determined by its machine type and vCPU count. If you want to get more granular on disk performance, I've included a few links in the lesson text, but for most general purposes, and for the exam, remember that persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that instance has. That's pretty much all I wanted to cover when it comes to persistent disks and local SSDs, so you can mark this lesson as complete and let's move on to the next one.

Welcome back. In this demo I'm going to cover how to manage and interact with your disks on Compute Engine, to give you both experience and understanding of working with persistent disks. We'll start by creating an instance, then create a separate persistent disk and attach it to the instance, interact with the disk, resize it, and afterwards delete it, doing all of this with both the console and the command line. So with that being said, let's dive in. Here I am in the console, logged in as tonybowties@gmail.com in project Bowtie Inc. The first thing we need to do is create an instance to attach our disk to, but first I always like to make sure I have a VPC to deploy my instance into, with its corresponding default firewall rules, so I'm going to head over to the navigation menu and go down to VPC Network, and as
expected my default vpc has been created and just to make sure that i have all my necessary firewall rules i’m going to drill down into the vpc and head on over to firewall rules i’m going to click on firewall rules and the necessary firewall rule that i need for ssh is created and so i can go ahead and create my instance so i’m going to go back up to the navigation menu and i’m going to go over to compute engine so i’m going to go ahead and click on create and i’m going to name this instance bowtie dash instance and for the sake of this demo i’ll add in a label here the key is going to be environment and the value will be testing i’m going to go down to the bottom click on save with regards to the region i’m going to select us east 1 and i’m going to keep the zone as the default for us east 1b and under machine type to keep things cost effective i’m going to use an e2 micro shared core machine and i’m going to scroll down to service account and under service account you want to select the set access for each api you want to scroll down to compute engine and here you want to select read write and this will give us the necessary permissions in order to interact with our disk that we will be creating later so i’m going to scroll down to the bottom here and i’m going to leave everything else set at its default and just before creating the instance please do remember you can always click on the command line link where you can get the gcloud command to create this instance through the command line i’m going to close this up and i’m going to simply click on create i’m just going to wait a few seconds here for my instance to come up okay and my instance is up and so now what we want to do is we want to create our new disk so i’m going to go over here to the left hand menu and i’m going to click on disks and as you can see here the disk for the instance that i had just created has 10 gigabytes in us east 1b and we want to leave that alone and we want to create our new disk so i’m going to go up to the top here and simply click on create disk and so for the name of the disk i’m going to call this disk new pd for persistent disk and i’m going to give it the same description i’m going to keep the type as standard persistent disk and for the region i want to select us east one i’m going to keep the zone as its default in us east 1b and as the disk is in us east 1b i’ll be able to attach it to my instance and so just as a note here there is a selection where you can replicate this disk within the region if i click that off i’ve now changed this from a zonal persistent disk to a regional persistent disk and over here in zones it’ll give me the option to select any two zones that i prefer and so if you’re looking at creating some regional persistent disks these are the steps you would need to take in order to get it done in the console now in order to save on costs i’m going to keep this as a zonal persistent disk so i’m going to click on cancel i’m going to uncheck the option and make sure your region is still set at us east 1 and your zone is selected as us east 1b we’re going to leave the snapshot schedule alone and i’ll be diving into snapshot schedules in a later lesson i’m going to scroll down here to source type i’m going to keep it as blank disk and the size here is set at 500 gigabytes and we want to set it to 100 gigabytes but before we do that i wanted to bring your attention to the estimated performance here you can see the sustain random iops limits as well as the throughput limit and so 
depending on the size of the disk you choose, these limits change accordingly. If I change the size to 100, my sustained random read IOPS limit drops from 375 IOPS to 75 IOPS, a great demonstration that the larger the disk, the better the performance, and a handy way to figure out what your performance will be before you create the disk. I've also been prompted with a note saying that because my disk is under 200 gigabytes I will have reduced performance, which is fine for this demo. I'm going to keep the encryption as the Google-managed key, and under labels I'll add the key environment with the value testing. Now that I've entered all my options I'll simply click on create, and after a few seconds my new disk has been created. You can just as easily create this disk through the command line, and I'll supply that command in the lesson text; I merely wanted to go through the console setup so you're aware of all the different options.

So now that I've created my disk and my instance, I want to log into the instance and attach the new disk. I'm going to go back to VM instances, SSH into bowtie-instance, give it a few seconds to connect, zoom in for better viewing, and clear my screen. The first thing I want to do is list all the block devices available to me on this instance, and the Linux command for that is lsblk. As you can see, my boot disk has been mounted and is available to me. Next I want to attach the new disk we just created; I could just as easily have done this in the console, but I wanted to give you an idea of what it looks like from the command line. I'm going to paste in the command to attach the disk, gcloud compute instances attach-disk, with the name of the instance, bowtie-instance, the --disk flag with the disk name new-pd, and the --zone flag with us-east1-b. I'll hit enter, and no errors came up, so I'm assuming it worked.
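For reference, here is that attach step as a single command, using the names from this demo (bowtie-instance, new-pd, us-east1-b), followed by the verification I describe next.

# Attach the blank persistent disk to the running instance (names from the demo).
gcloud compute instances attach-disk bowtie-instance \
    --disk=new-pd \
    --zone=us-east1-b

# Then, from an SSH session on the instance, confirm the new block device appears.
lsblk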
Just to double-check, I'm going to run lsblk again, and success: my block device, sdb, has been attached to my instance and is available to me with a size of 100 gigabytes. Now I want to look at the state this block device is in, and the command for that is sudo file -s followed by the path of the block device, /dev/sdb. Hitting enter shows "data", which means it is just a raw device, so before I can interact with it I need to format the drive with a file system the operating system can work with. The command to format the drive is sudo mkfs, which is make file system; I'm going to use ext4 as the file system, with the -F flag and the path of the new disk. No errors, so to verify I run sudo file -s again, and because the disk now has a file system I'm given its details, whereas before it was simply raw data.

Now that we've formatted the disk, we need to mount it, and for that we need a mount point. I'll clear the screen and run sudo mkdir to create a new mount point, which I'll call /newpd, and then mount the disk with sudo mount, the path of the block device /dev/sdb, and the mount point /newpd. No errors, and to verify, lsblk shows that sdb has now been mounted as /newpd, so I can interact with the disk. The first thing I want to do is change directories into the mount point and run ls. Just as a note for those wondering, the lost+found directory is found on every Linux file system of this kind; it's where orphaned or corrupted files and bits of data from the file system get placed, so it's not something you normally interact with, but it's always good to know. Now I'll create a file here by running sudo nano file-of-bowties.txt, where file-of-bowties.txt is the file I'm creating and nano is my text editor. In this file I'm going to type "bow ties are so classy", because after all they are, then hit Ctrl+O to save, enter to confirm, and Ctrl+X to exit. Another ls shows that the file has been created, and by running df -k I can see the file system here as well.

And that's the end of part one of this demo; it was getting a bit long, so I decided to break it up. This would be a great opportunity to get up, have a stretch, and grab a coffee or tea, and whenever you're ready, join me in part two, which starts immediately from the end of part one. Welcome back, this is part two of the demo, and we're going to continue exactly where we left off, so with that being said, let's dive in.
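Before continuing, here's a compact recap of the commands from part one as I would type them; the device path /dev/sdb and the mount point /newpd are the ones used in this demo.

# Check the device, format it with ext4, mount it, and verify.
sudo file -s /dev/sdb              # shows "data" while the disk is still unformatted
sudo mkfs.ext4 -F /dev/sdb         # create an ext4 file system on the new disk
sudo mkdir /newpd                  # create a mount point
sudo mount /dev/sdb /newpd         # mount the disk
lsblk                              # sdb should now show /newpd as its mount point
df -k                              # the new file system appears here as well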
What I want to do now is reboot the instance to demonstrate the mounting of this device, using the command sudo reboot. That disconnects my SSH session, so I'll click on close and wait about a minute for the reboot. Okay, it's been about a minute, so I'll SSH back into my instance, quickly clear the screen, and run lsblk. What I wanted to demonstrate here is that although I mounted the new device, it did not stay mounted through the reboot. That's because Linux has a configuration file that lists which partitions get mounted automatically at startup, and I need to edit it to make sure this device is mounted every time the instance reboots. That file is /etc/fstab, and I need to add the unique identifier for the partition on device sdb. To get it, I run sudo blkid with the path of the block device, /dev/sdb, and here is the identifier, also known as the UUID, that I need to append to fstab. I'll copy the UUID, run sudo nano /etc/fstab, and, below the UUIDs of the other partitions, append a new line at the end: UUID= followed by the UUID I copied, the mount point /newpd, the file system type ext4, and defaults,nofail. Ctrl+O to save, enter to confirm, Ctrl+X to exit. Now I'll mount the device by running sudo mount -a, which mounts every partition listed in the fstab file, and when I run lsblk I can see that my block device sdb is once again mounted on /newpd. I know this may be a refresher for some, but it's a perfect demonstration of the tasks involved in creating and attaching a new disk to an instance, a common job for anyone working on Linux instances in the cloud; this can definitely be scripted, but I wanted to show you the steps needed to get a new disk into a usable state. So to recap: we created a new disk, attached it, created a file system, mounted the disk, and edited the configuration file so the device mounts whenever the instance starts up.
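Here's a sketch of that persistence step; the UUID below is obviously a placeholder, and the mount point and options match what I used above.

# Find the UUID of the new disk, then add it to /etc/fstab so it mounts on every boot.
sudo blkid /dev/sdb

# Append a line like this to /etc/fstab (placeholder UUID shown):
# UUID=abcd1234-ef56-7890-abcd-1234567890ab /newpd ext4 defaults,nofail
sudo nano /etc/fstab

# Mount everything listed in /etc/fstab and confirm the mount came back.
sudo mount -a
lsblk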
So now that we've done all that, I want to demonstrate resizing this disk from 100 gigabytes to 150 gigabytes. Just to show you where it lives in the console: under Disks I can drill down into new-pd, click on edit at the top, adjust the disk size, and click save; not much to it, but I did want to show you how to do this from the command line as well. So I'll go back to the tab for my instance, quickly clear the screen, and paste in the command gcloud compute disks resize with the name of the disk, new-pd, the new size in gigabytes using the --size flag, 150, and the --zone flag, us-east1-b. It asks me to confirm because this is not reversible, and please remember that when you resize a disk you can only make it bigger, never smaller. I'll hit y to continue, and after a few seconds it was successful. If I run df -k, though, I still only see 100 gigabytes available, and that's because I've made the disk larger but haven't allocated those new raw blocks to the file system; I need to extend the file system so it can see the unallocated blocks available to it. So I'll quickly clear my screen and run sudo resize2fs with the block device, and it succeeds, reporting the old and new block counts. Now df -k shows the 150 gigabytes available to me. And just to demonstrate that the file I created survives resizing, mounting, and remounting, I'll change directories into /newpd, clear my screen, and run ls, and file-of-bowties.txt is still there: a great example of how the data on a persistent disk persists through the lifetime of the disk, even across mounting, unmounting, rebooting, and resizing.

As you can see, we've done a lot of work here, so as a recap: we created a new disk, attached it to an instance, formatted it with an ext4 file system, mounted it, wrote a file to it, added its unique identifier to the configuration file so it mounts on startup, and then resized the disk and extended the file system. That's the end of the demo; congratulations on making it to the end, and I hope it has been extremely useful. Before you go, I want to quickly walk through deleting the resources you've created. The first thing to do is delete the disk created for this demo, but before I can delete it I need to detach it from the instance, and the easiest way is through the command line, so I'll quickly clear my screen and paste in gcloud compute instances detach-disk with the instance name bowtie-instance, the --disk flag with the disk name new-pd, and the zone. Hitting enter, it's been successfully detached, so now I can head back to the console, delete the new-pd disk, confirm the prompt, and after a moment it disappears from the Disks page. Then I'll go back over to VM instances and delete the instance as well. There's no need to delete your default VPC unless you'd like to recreate it; don't worry, you will not be charged for keeping it, and we'll be using it in the next demo.
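For reference, here's the resize-and-clean-up sequence as commands, again using this demo's names; the resize2fs step assumes the file system sits directly on /dev/sdb with no partition table, as in this demo.

# Grow the disk, then grow the ext4 file system to use the new space.
gcloud compute disks resize new-pd --size=150GB --zone=us-east1-b
sudo resize2fs /dev/sdb
df -k    # the extra space is now visible

# Clean-up: detach the data disk, delete it, then delete the instance.
gcloud compute instances detach-disk bowtie-instance --disk=new-pd --zone=us-east1-b
gcloud compute disks delete new-pd --zone=us-east1-b
gcloud compute instances delete bowtie-instance --zone=us-east1-b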
And that's pretty much all I wanted to cover when it comes to managing disks with Compute Engine, so you can now mark this as complete and let's move on to the next one.

Welcome back. In this lesson I'll be discussing persistent disk snapshots. Snapshots are a great way to back up data from running or stopped instances and protect against unexpected data loss, and they belong in the backup plan for any instance, wherever it's located, so as cloud engineers and architects this is a great tool for maximizing uptime. Diving right in: snapshots let you back up and restore the data on your persistent disks, and you can create them even while the disk is attached to a running instance. Snapshots are global resources, so any snapshot is accessible by any resource within the same project, and you can share snapshots across projects as well. They support both zonal and regional persistent disks. Snapshots are also incremental and automatically compressed, so you can take regular snapshots of a persistent disk faster and at a much lower cost than if you regularly created a full image of the disk.

When you create a snapshot you have the option of choosing a storage location. Snapshots are stored in Cloud Storage, in either a multi-regional location or a regional bucket; a multi-regional location provides higher availability but drives up costs, and the location of a snapshot affects both its availability and the networking costs you can incur when creating it or restoring it to a new disk. If you don't specify a storage location, Google Cloud uses the default, which is the Cloud Storage multi-regional location closest to the region of your source disk. If you store a snapshot in the same region as the source disk, there is no network charge when you access it from that region; access it from a different region and you will incur a network cost. Compute Engine stores multiple copies of each snapshot across multiple locations, and you cannot change the storage location of an existing snapshot. Once a snapshot has been taken, though, it can be used to create a new disk in any region and zone, regardless of where the snapshot is stored.

As I mentioned, snapshots are incremental, and it's worth dwelling on that for a minute. The first successful snapshot of a persistent disk is a full snapshot containing all the data on the disk. The second snapshot only contains data that is new or modified since the first; data that hasn't changed isn't copied again, and snapshot 2 instead holds references to snapshot 1 for any unchanged data. Snapshot 3 likewise contains only data that is new or changed since snapshot 2, with references to blocks in snapshots 1 and 2 for anything unchanged, and this repeats for all subsequent snapshots of the persistent disk, each one based on the last successful snapshot taken.

So what happens when you delete a snapshot, given that they depend on each other? When you delete a snapshot, Compute Engine immediately marks it as deleted in the system. If it has no dependent snapshots, it is deleted outright; if it does, some steps happen behind the scenes. In the diagram, snapshot 2 is deleted: the next snapshot in the chain no longer references the one marked for deletion, so snapshot 1 becomes the reference for snapshot 3, and any data that is still required for restoring other snapshots is moved into the next snapshot, increasing its size; here, blocks that were unique to snapshot 2 are moved into snapshot 3 and snapshot 3 grows. Any data that is not required for restoring other snapshots is deleted, so blocks already present in snapshot 3 are dropped from snapshot 2 and the total size of all snapshots goes down. Because subsequent snapshots may rely on data stored in an earlier one, be aware that deleting a snapshot does not necessarily delete all the data in it; if you need to be sure your data is gone, delete all the snapshots. And if a disk has a snapshot schedule, you must detach the schedule from the disk before you can delete the schedule, which also stops any further snapshot activity on that disk.

On the topic of scheduled snapshots: by far the best way to back up your data on Compute Engine is to use scheduled snapshots, so you never have to create snapshots manually or rely on other tools to kick them off, which is why snapshot schedules are considered best practice for backing up Compute Engine persistent disks. A snapshot schedule must be created in the same region as the persistent disk it protects. There are two ways to set one up: create a schedule and attach it to an existing persistent disk, or create a new persistent disk with a schedule. You also have the option of setting a snapshot retention policy that defines how long to keep your snapshots, and if you use one it must be configured as part of the snapshot schedule. When you create a schedule you can also set a source disk deletion rule, which controls what happens to your snapshots if the source disk is deleted. A few caveats here: a persistent disk can only have one snapshot schedule attached to it at a time; you cannot delete a schedule while it is attached to a disk, so detach it from all disks first; and you cannot edit a schedule after creating it, so to update a snapshot schedule you must delete it and create a new one.
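As a rough sketch of what that looks like from the gcloud CLI (the schedule name, disk name, region, and timings here are illustrative, not from a specific demo):

# Create a daily snapshot schedule with a 14-day retention policy,
# keeping automatic snapshots if the source disk is ever deleted.
gcloud compute resource-policies create snapshot-schedule daily-schedule \
    --region=us-east1 \
    --daily-schedule \
    --start-time=06:00 \
    --max-retention-days=14 \
    --on-source-disk-delete=keep-auto-snapshots \
    --storage-location=us-east1

# Attach the schedule to an existing zonal persistent disk.
gcloud compute disks add-resource-policies my-disk \
    --resource-policies=daily-schedule \
    --zone=us-east1-b

Now, before I end this lesson, I wanted to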
touch on managing snapshots for just a minute so when managing snapshots there’s a few things to remember in order to use snapshots to manage your data efficiently you can snapshot your disks at most once every 10 minutes you are unable to snapshot your disks at intervals less than 10 minutes so please keep that in mind when creating your schedules also you should create snapshots on a regular schedule to minimize data loss if there was an unexpected failure if you have existing snapshots of a persistent disk the system automatically uses them as a baseline for any subsequent snapshots that you create from that same disk so in order to improve performance you can eliminate excessive snapshots by creating an image and reusing it using this method would not only be ideal for storage and management of snapshots but also help to reduce costs and if you schedule regular snapshots for your persistent disks you can reduce the time that it takes to complete each snapshot by creating them during off-peak hours when possible and lastly for those of you who use windows for most situations you can use the volume shadow copy service to take snapshots of persistent disks that are attached to windows instances you can create vss snapshots without having to stop the instance or detach the persistent disk and so that’s pretty much all i wanted to cover when it comes to the theory of persistent disk snapshots their schedules and how to manage them in the next lesson i’ll be doing a hands-on demo demonstrating snapshots and putting this theory into practice and get a feel for how snapshots work and how they can be applied to persistent disks so you can now mark this lesson as complete and whenever you’re ready join me in the console [Music] welcome back in this demonstration we’re going to dive into snapshots and snapshot schedules this demo will give you the hands-on knowledge you need to create and delete snapshots along with how to manage snapshot schedules we’re going to start the demo off by creating an instance we’re going to interact with it and then take a snapshot of the disk we’re going to then create another instance from the snapshot and then create some snapshot schedules for both of these instances by using both the console and the command line so there’s a lot to do here so with that being said let’s dive in and so i’m currently logged in as tony bowties gmail.com as well i’m in project bowtie inc so the first thing that we need to do to kick off this demo is to create an instance but first as always i like to make sure that i have a vpc to deploy my instance into with its corresponding default firewall rules and so i’m going to head on over to the navigation menu and scroll down to vpc network and because i didn’t delete my default vpc from the last demo i still have it here i’m just going to drill down and make sure that i have my firewall rules i’m gonna go over to firewall rules and as expected the ssh firewall rule that i need has already been created and so now that i have everything in order i’m gonna go back over to the navigation menu and head on over to compute engine to create my instance now i figure for this demo i’d switch it up a little bit and create the instance by the command line so i’m going to head on over to cloud shell i’m going to open that up and it took a minute to provision and so what i’m going to do now is i’m going to open it up in a new tab i’m going to zoom in for better viewing and i’m going to paste in my command to create my instance and this gcloud command 
to create these instances will be available in the GitHub repository, where you'll find all the instructions and commands under "Managing Snapshots in Compute Engine". I'll hit enter, and you may get a prompt to authorize this API call, so I'll click on authorize, and success, our instance has been created and is up and running. Now I want to SSH into the instance, so from here I'll run gcloud compute ssh with the --zone flag set to us-east1-b and the instance name bowtie-instance. It prompts me to continue, I say yes, enter my passphrase, enter it again while it updates my metadata, and I'm in. I'll quickly clear my screen, and the first thing I want to do is verify the name of my instance, so I'll type hostname, and as expected bowtie-instance shows up. Next I want to create a text file, so I'll run sudo nano file-of-bowties.txt, which opens my nano text editor; you can enter any message you like, and for me it's going to be "more bow ties needed", because you can never have enough bow ties. Ctrl+O to save, enter to confirm the file name, Ctrl+X to exit, and running ls -al to list my files confirms that file-of-bowties.txt has been created.

Now that I've created my instance and written a file to disk, I'm going to head over to the console and take a snapshot of this disk; since my session was transferred to another tab, I can close the terminal and head over to the left-hand menu, to Disks. There are two ways to create the snapshot: from Disks you can choose the disk you want, in my case the bowtie-instance disk, open its actions menu, and select create snapshot, which takes you straight to the snapshot form; or, as I'll do for this demo, go over to Snapshots in the left-hand menu and click on create snapshot. For the name I'll type bowtie-snapshot and use the same for the description. Moving down to source disk, the only one I can select is the bowtie-instance disk, which is the one I want anyway, so I'll click on that. For the location, to cut down on costs we don't need multi-regional, so I'll select regional; clicking on the location shows I could put the snapshot somewhere else entirely, like Tokyo, but I want to keep it in the same region, so I'll go back and select us-east1, where the source disk lives. I'll add a label with the key environment and the value testing, leave the encryption type as Google-managed, and simply click on create, which creates a snapshot of the boot disk on bowtie-instance. That took about a minute, and just as a note, bigger disks will take a little longer to snapshot.
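The same snapshot can also be taken from the command line; here's a sketch using this demo's names (the boot disk created with the instance shares the instance's name):

# Snapshot the instance's boot disk, storing the snapshot regionally in us-east1.
gcloud compute disks snapshot bowtie-instance \
    --snapshot-names=bowtie-snapshot \
    --zone=us-east1-b \
    --storage-location=us-east1

# List snapshots to confirm it was created.
gcloud compute snapshots list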
Now that I've created my snapshot, I'm going to go back up to VM instances and create a new instance from that snapshot. I'm going to name this instance bowtie-instance-2, add a label with the key environment and the value testing, and hit save. The region is going to be us-east1, you can leave the zone at its default of us-east1-b, and under machine type select the e2-micro. Now go down to boot disk and click the change button; here, instead of using a public image, I'm going to select the snapshots tab, and in the snapshot drop-down I'll see my bowtie-snapshot, so I'll select it, leave the rest as default, and click select. I'll leave everything else at its defaults, click on create, and give it a minute for bowtie-instance-2 to come up. Okay, it's up, so I'll SSH into this instance and zoom in for better viewing. Even though I know the instance is named bowtie-instance-2, I'm still going to run the hostname command, and as expected the name pops up. What I'm really curious about, though, is whether my data came across: running ls -al I can see my file-of-bowties.txt, and if I cat the file I can see the text I put into it. It was only one text file, but it verifies that my snapshot worked, and since there will be times when a snapshot gets corrupted, doing the occasional spot check on your snapshots is good common practice.

Now I want to create a snapshot schedule for both of these instances, so I'm going to go back to the console, head down to Snapshots in the left-hand menu, and over to snapshot schedules, where you can see I have none, so let's create one by clicking on create snapshot schedule. As mentioned in the last lesson, we need to create the schedule first before we can attach it to a disk. I'm going to name this snapshot schedule bowtie-disk-schedule, use the same for the description, select us-east1 as the region, and keep the snapshot location as regional under us-east1. Scrolling down to the schedule options, you can leave the schedule frequency as daily, and just as a note, the start time is measured in UTC, so please remember that when you create a schedule for your own time zone. I'm going to set the start time to 06:00, which is 1 a.m. Eastern, since backups are best done when there's the least amount of activity. I'll keep "auto-delete snapshots after 14 days" and the deletion rule of keep snapshots; I could also enable the Volume Shadow Copy Service for Windows, but since we're running Linux I don't need it. And since we've labeled everything else, I might as well give this a label too, with the key environment and the value testing. Once everything is filled out, simply click on create, and after a minute the schedule was created. Now that I have my snapshot schedule, I need to attach it to a disk, so I'll head over to Disks in the left-hand menu, drill down into the bowtie-instance disk, click on edit at the top, and under snapshot schedule click the drop-down, where I'll find bowtie-disk-schedule.
I'm going to select it and click on save. Now that the snapshot schedule is attached to the disk for bowtie-instance, I want to create a schedule for my other instance, and this time, instead of the console, I'm going to do it through the command line. So I'll go up to my open shell, quickly clear the screen, and run gcloud compute resource-policies create snapshot-schedule with the schedule name bowtie-disk-schedule-2, the region, the maximum retention days, the retention policy, and the schedule, followed by the storage location; as I said before, you'll find these commands in the GitHub repository. I'll go ahead and hit enter, and I deliberately left this error in to show you that I needed the proper permissions to create the snapshot schedule, a great reminder to always check that you have the right role for the task at hand. I have two options: switch users from the service account I'm currently authenticated as to tony bowtie, or head over to my instance and edit its service account permissions. The easiest way is to switch users, so I'll run gcloud auth login; remember, this is not something you would normally have to do, I merely wanted to show that specific permissions are required when creating certain resources. Having quickly gone through the authentication process, I'll clear my screen and run the command again, and as expected the snapshot schedule is created with no errors. Now that the schedule exists, I can attach it to the disk by running gcloud compute disks add-resource-policies with the disk name bowtie-instance-2 and the resource policy, which is the snapshot schedule named bowtie-disk-schedule-2, in the zone us-east1-b. I'll hit enter, and success. Just to verify that the schedule has been attached, I'll go back to the console, head to the main Disks page, drill down into bowtie-instance-2, and there it is: the snapshot schedule has been attached.

I want to congratulate you on making it to the end of this demo, and I hope it has been useful, as taking snapshots is a common engineering task that can save you from data loss once set in place. As a recap: you created an instance, created a file on it, took a snapshot of its disk and used that snapshot to create another instance, verified the snapshot, and then created snapshot schedules for both boot disks using the console and the command line. Well done on another great job. Before you go, let's take a moment to clean up the resources we've used so we don't accumulate any costs. The first thing to do is detach the snapshot schedules from the disks; since we're already in bowtie-instance-2, I'll click on edit, select no schedule under snapshot schedule, hit save, and do the same for my other disk. Then I'll head back over to Snapshots, delete the snapshot, go over to snapshot schedules, and select all the snapshot schedules.
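If you prefer to verify and clean up from the command line instead of the console, a sketch with this demo's names might look like this; the describe output's resourcePolicies field shows the attached schedule.

# Verify the schedule is attached to the disk.
gcloud compute disks describe bowtie-instance-2 \
    --zone=us-east1-b \
    --format="value(resourcePolicies)"

# Clean-up: detach the schedule, delete the snapshot, then delete the schedules.
gcloud compute disks remove-resource-policies bowtie-instance-2 \
    --resource-policies=bowtie-disk-schedule-2 \
    --zone=us-east1-b
gcloud compute snapshots delete bowtie-snapshot
gcloud compute resource-policies delete bowtie-disk-schedule --region=us-east1
gcloud compute resource-policies delete bowtie-disk-schedule-2 --region=us-east1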
I'm going to click on delete, and now that everything is cleaned up with regards to snapshots and snapshot schedules, I can go over to VM instances, select them all, and simply click on delete. And that's pretty much all I wanted to cover in this demo when it comes to snapshots and snapshot schedules, so you can mark this as complete and let's move on to the next one.

Welcome back. In this lesson we're going to switch gears and take an automated approach to deployment by diving into Google's tool for infrastructure as code, Deployment Manager. Deployment Manager lets you deploy, update, and tear down resources in Google Cloud using YAML configurations along with Jinja and Python templates; it can automate the deployment of any resource available in Google Cloud in a fast, easy, and repeatable way, for consistency and efficiency. In this lesson we'll explore the architecture of Deployment Manager, dive into the components that give it its flexibility, and look at the features that make it an easy solution for deploying complex environments. So with that being said, let's dive in.

Breaking down the components I mentioned, the first is the configuration. A configuration defines the structure of your deployment, and you must specify one to create a deployment. It describes all the resources you want for a single deployment and is written in YAML syntax, listing each resource to create along with its respective properties. A configuration must contain a resources section followed by the list of resources, and each resource must contain three components: a name, a type, and properties; without these three components a deployment will not instantiate, so I want to go over each of them in a bit of depth. The name is a user-defined string that identifies the resource and can be anything you choose, from instance-1 or my-vm to bowtie-instance, or even larks-instance-dont-touch; the syntax is shown here, and it must not contain any spaces or invalid characters. The next component is the type, and there are a couple to choose from: a type can represent a single API resource, known as a base type, or a set of resources, known as a composite type, and either can be used in your deployment. The resource in this diagram uses the base type compute.v1.instance, and there are many other API resources that can be used, such as compute.v1.disk, appengine.v1, and bigquery.v2; the syntax is api.version.resource. A composite type contains one or more templates that are preconfigured to work together and that expand to a set of base types when deployed; composite types are essentially hosted templates that you add to Deployment Manager, with the syntax gcp-types/provider:resource. To give you an example of what this looks like, here is the creation of a reserved IP address using the Compute Engine v1 API, and you can use composite types with other APIs in the same way, such as gcp-types/appengine-v1:apps or gcp-types/bigquery-v2:datasets.
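To make the name/type/properties structure concrete, here's a minimal sketch of a configuration and a preview deployment. This is not the course's bowtie-deploy.yaml; the file name, resource name, and the MY_PROJECT placeholder are mine, and the property URLs simply follow the pattern used by the compute.v1.instance base type.

# Write a minimal one-resource configuration (replace MY_PROJECT with your project ID).
cat > minimal-deploy.yaml <<'EOF'
resources:
- name: minimal-vm                 # name: a user-defined string
  type: compute.v1.instance        # type: a base type (api.version.resource)
  properties:                      # properties: must match what the resource type expects
    zone: us-east1-b
    machineType: https://www.googleapis.com/compute/v1/projects/MY_PROJECT/zones/us-east1-b/machineTypes/e2-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/MY_PROJECT/global/networks/default
EOF

# Preview it first; nothing is actually created until the deployment is updated.
gcloud deployment-manager deployments create minimal-deployment \
    --config=minimal-deploy.yaml --preview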
The last component in a configuration is the properties, which are the parameters for the resource type: everything you see in this example, including the zone, the machine type, and the type of disk along with its parameters, pretty much everything that gives detail to the resource type. Just as a note, the properties must match what the resource type expects: if you enter a zone that doesn't exist, or a Compute Engine machine type that isn't offered in that zone, Deployment Manager won't be able to parse the configuration and the deployment will fail, so make sure your properties match the resource.

A configuration can also contain templates, which are essentially parts of the configuration file that have been abstracted into individual building blocks. A template is a separate file that is imported and used as a type in a configuration; you can use as many templates as you want, and they let you split your configuration into pieces that you can use and reuse across different deployments. Templates can be as generalized or as specific as you need, and they let you take advantage of features like template properties, environment variables, and modules to create dynamic configurations. Templates can be written in either Jinja 2.10 or Python 3: the example on the left is written in Jinja, which is very similar to YAML syntax, so if you're familiar with YAML this may be the better fit for you, while the example on the right is written in Python, which is pretty amazing because you can programmatically generate parts of your templates. One advantage of templates is the ability to define custom template properties, which are arbitrary variables you define in template files; any configuration or template file that uses the template in question can provide a value for the property without changing the template directly, letting you vary the value for each unique configuration without updating the underlying template. Just as a note, Deployment Manager also creates predefined environment variables you can use in your deployment; in this example, the project variable resolves to the project ID of this specific project.

Combining all these components gives you a deployment, which is a collection of resources deployed and managed together using a configuration; you can then deploy, update, or delete it by merely changing some code or clicking a button. When you deploy, you provide a valid configuration in the request to create the deployment, and a deployment can contain resources across any number of Google Cloud services; when you create a deployment, Deployment Manager creates all of the described resources. Deploying a configuration must be done through the command line and cannot be done through the console, using the syntax shown here, where bowtie-deploy is the name of the deployment and the file after --config is your configuration file. Google Cloud also offers predefined templates that
you can use to deploy from the gcp marketplace and can be found right in the console of deployment manager this way all the configuration and template creation is handled for you and you just deploy the solution through the console now after you’ve created a deployment you can update it whenever you need to you can update a deployment by adding or removing resources from a deployment or updating the properties of existing resources in a deployment a single update can contain any combination of these changes so you can make changes to the properties of existing resources and add new resources in the same request you update your deployment by first making changes to your configuration file or you can create a configuration file with the changes you want you will then have the option to pick the policies to use for your updates or you can use the default policies and finally you then make the update request to deployment manager and so once you’ve launched your deployment each deployment has a corresponding manifest as the example shown here a manifest is a read-only property that describes all the resources in your deployment and is automatically created with each new deployment manifests cannot be modified after they have been created as well it’s not the same as a configuration file but is created based on the configuration file and so when you delete a deployment all resources that are part of the deployment are also deleted if you want to delete specific resources from your deployment and keep the rest delete those resources from your configuration file and update the deployment instead and so as you can see here deployment manager gives you a slew of different options to deploy update or delete resources simultaneously in google cloud now like most services in gcp there are always some best practices to follow note that there are many more best practices to add to this and can be found in the documentation which i will be providing the link to in the lesson text but i did want to point out some important ones to remember so the first one i wanted to bring up is to break your configurations up into logical units so for example you should create separate configurations for networking services security services and compute services so this way each team will be able to easily take care of their own domain without having to sift through a massive template containing the code to the entire environment another best practice to follow is to use references and references should be used for values that are not defined until a resource is created such as resources self-link ip address or system generated id without references deployment manager creates all resources in parallel so there’s no guarantee that dependent resources are created in the correct order using references would enforce the order in which resources are created the next one is to preview your deployments using the preview flag so you should always preview your deployments to assess how making an update will affect your deployment deployment manager does not actually deploy resources when you preview a configuration but runs a mock deployment of those resources instead this gives you the opportunity to see the changes to your deployment before committing to it you also want to consider automating the creation of projects as well as automating the creation of resources contained within the projects and this enables you to adopt an infrastructure as code approach for project provisioning this will allow you to provide a series of 
predefined project environments that can be quickly and easily provisioned it will also allow you to use version control to manage your base project configuration and it will also allow you to deploy reproducible and consistent project configurations and lastly using a version control system as part of the development process for your deployments is a great best practice to follow as it allows you to fall back to a previous known good configuration it provides an audit trail for changes as well it uses the configuration as part of a continuous deployment system now as you’ve seen here in this lesson deployment manager can be a powerful tool in your tool belt when it comes to implementing infrastructure as code and it has endless possibilities that you can explore on your own it can also provide a massive push towards devops practices and head down the path of continuous automation through continuous integration continuous delivery and continuous deployment and so that’s pretty much all i wanted to cover when it comes to deployment manager and so whenever you’re ready join me in the next one where we will go hands-on in a demonstration to deploy a configuration in deployment manager so you can now mark this lesson as complete and whenever you’re ready join me in the console [Music] welcome back in this demonstration we’re gonna go hands-on with deployment manager and deploy a small web server we’re gonna first use the google cloud editor to copy in our code and we’re gonna then do a dry run and then finally deploy our code we’re gonna then do a walkthrough of deployment manager in the console and go through the manifest as well as some of the other features we’re then going to verify all the deployed resources and we get to do an easy cleanup in the end by hitting the delete button and taking care of removing any resources that were created so there’s quite a bit to go through here and so with that being said let’s dive in and so as you can see here i am logged in as tonybowties gmail.com in the project of bowtie inc now since we’re going to be doing most of our work in code the first thing that we want to do is go to the google cloud editor so i’m going to go up here to the top and open up cloud shell and i’m going to then click on the button open editor i’m going to make this full screen for better viewing and so in order to get the terminal in the same viewing pane as the editor i’m going to simply go up to the top menu and click on terminal and select new terminal now for better viewing and this is totally optional for you i’m going to change the color theme into a dark mode and so i’m going to go up to the menu click on file go down to settings and go over to color theme and i’m going to select dark visual studio and for those of you who are working in visual studio code this may look very familiar to you and i’m also going to increase the font size by again going back up to file over to settings and then over to open preferences here under workspace and then scroll down to terminal and if you scroll down to integrated font size i’m going to adjust the font size to 20 for better viewing and my cloud shell font size is a little bit easier to see and so once you’ve done that you can then close the preferences tab and we’re now ready to create files in our editor okay so next up i want to create a folder for all my files to live in so i’m going to go up to the menu here i’m going to select on file and select new folder and i’m going to rename this folder as templates and hit ok and so now 
that we have the folder that all of our files are going to live in the next step is to open up the github repository in your text editor and have your files ready to copy over and so just as a note for those who are fluent in how to use git you can use this new feature in the cloud shell editor to clone the course repo without having to recreate the files so i’m going to go over my text editor and make sure that you’ve recently done a git pull we’re going to open up the files under compute engine deployment manager and you’ll see templates with a set of three files and i’ve already conveniently opened them up i’m going to go up to bow tie deploy.yaml and this is going to be the configuration file that i’m going to be copying over and once i finish copying all these files over i’ll be going through this in a little bit of detail just so you can understand the format of this configuration and so i’m going to select all of this i’m going to copy this head back on over to the editor and here i’m going to select file new file so i’m going to rename this as bow tie dash deploy dot yaml hit okay and i’m going to paste in my code and so this configuration file is showing that i’m going to be importing two templates by the name of bowtie.webserver.jinja as well as bowtie.network.jinja so i’m going to have a template for my web server and a template for the network and under resources as you can see this code here will create my bow tie dash web server the type is going to be the template the properties will have the zone the machine type as well as a reference for the network as well underneath the bowtie web server is the bowtie network and again this is pulling from type bowtie.network.jinja so this is a another template file and under the properties we have the region of us east one and so we’re going to copy over these two templates bowtie web server and bowtie network as we need both of these templates in order to complete this deployment and so i’m going to go ahead and do that now head back on over to my code editor i’m going to go to bowtie web server i’m going to copy everything here back to my editor and i’m going to create the new file called bowtie web server it’s going to be dot jinja hit enter i’m going to paste the code in and just to do a quick run through of the template the instance name is going to be bow tie dash website the type is compute.v1.instance and as you can see here we are using a bunch of different properties here under zone we have property zone which is going to reference back to the yaml template here under zone you will see us east 1b and so this way if i have to create another web server i can enter whatever zone i like here in the configuration file and leave the bow tie dash web server template just the way it is under machine type i have variables set for both the zone and machine type under disks i’m going to have the device name as an environment variable and it’s going to be a persistent disk and the source image is going to be debian9 i also put in some metadata here that will bring up the web server and lastly i have a network tag of http server as well as the configuration for the network interface the network referring to bowtie dash network and a sub network called public which i will be showing to you in just a moment and as well the access configs of the type one to one nat and this will give the instance a public ip address and so now that we’ve gone through that template we need to create one last template which is the bowtie dash network so i’m 
going to head back on over to my code editor and open up bowtie network select the code copy it back over to the cloud editor and i'm going to create a new file call this bowtie network dot jinja hit enter paste in my code and to quickly walk you through this we're going to be creating a new custom network called bow tie dash network the type is going to be compute.v1.network as the vpc uses the compute engine api it's going to be a custom network so the value of auto create sub networks is going to be false the name is going to be public here we have the custom ip cidr range and you could also use this as a variable but for this demo i decided to just leave it under network i have a reference to the bowtie network the value for private google access is false and the region variable is fulfilled through the configuration file moving right along i have two firewall rules here one for ssh access and the other for web server access one opening up port 22 to the world and the other port 80 as well the web server access firewall rule has a target tag of http server referencing back to the network tag of the bowtie web server instance okay and so now we've finished creating the configuration file along with the templates so i'm going to head back on up to the menu click on file and select save all and since we've finished creating all of our files the next thing to do is to execute a mock deploy using the bowtie deploy configuration but first i know that we haven't used deployment manager before and so i need to go in and turn on the api and so i'm just going to go up here to the top to the search bar and i'm going to type in deployment and you should see deployment manager as the first result and bring this down a little bit and as expected the deployment manager api has not been enabled yet so i'm going to click on enable and after a few moments we should be good to go okay and as you can see here deployment manager is pretty empty as most of it is done through the command line but if you're looking to deploy a marketplace solution you can do that right here at the top and this will bring you right to the marketplace and will allow you to deploy from a large selection of pre-configured templates but i don't want to do that and so i'm just going to bring this up a little bit and i'm going to head on over to the terminal and i'm going to run the command ls and you should be able to see the templates folder i'm going to change my directory into the templates folder do another ls and here are all my files and so before we do a mock deploy of this configuration we want to make sure that we're deploying to the correct project i can see here that i am currently in bow tie inc but if you are ever unsure about the project that you're in you can always run the gcloud config list command in order to confirm so i'm going to quickly clear my screen and i'm going to run the command gcloud config list it's going to prompt me to authorize this api call and i'm going to authorize and as expected my project is set to deploy in project bowtie inc and so now that i've verified it i'm going to quickly clear my screen again and i'm going to paste in my command gcloud deployment dash manager deployments create bowtie deploy which is the name of the deployment along with the configuration file flag dash dash config and then the name of the configuration file which is bowtie deploy.yaml and the preview flag as we're only doing a mock deploy and so if there are any errors i'll be able to see them up front
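just as a hedged recap of that workflow typed out rather than spoken here is roughly what the three deployment manager commands used in this demo look like with the deployment name and configuration file name treated as placeholders for whatever you called yours:

```sh
# preview (mock deploy) the configuration without creating any resources
gcloud deployment-manager deployments create bowtie-deploy \
    --config bowtie-deploy.yaml \
    --preview

# once the preview looks good, complete the deployment from that preview
gcloud deployment-manager deployments update bowtie-deploy

# tear everything down when you are finished with the demo
gcloud deployment-manager deployments delete bowtie-deploy
```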
before i actually deploy all the resources so i’m going to go ahead and hit enter and in just a minute we’ll find out exactly what happens and as you can see here the mock deployment was a success and there are no errors and if i do a quick refresh up here in the console i’ll be able to see my deployment which i can drill down into and here i will see my manifest file with my manifest name and i can view the config as well as my templates that it imported the layout as well as the expanded config so if i click on view of the config it’ll show me here in the right hand panel exactly what this deployment has used for the config and i can do the same thing with my template files so i’m going to open up my network template and i can quickly go through that if i’d like as well i also have the option to download it and if i really want to get granular i can go over here to the left hand pane i can select on vm instance and it’ll show me all the resource properties everything from the disks to the machine type to the metadata the network interfaces the zone that it’s in and the network tag same thing if i go over here to the network and again because this is a custom network the value for the autocreate subnetworks is false i can check on the public sub network as well as the firewall rules and so because this is a preview it has not actually deployed anything now taking a look at compute engine instances in a new tab you can see here that i have no instances deployed and so the same goes for any of the other resources and so what we want to do now is we want to deploy this deployment and we can do that one of two ways we can simply click on the button here that says deploy or we can run the command in the command line and so i’m looking to show you how to do it in the command line so i’m going to move down to the command line i’m going to quickly clear my screen i’m going to paste in the code which is gcloud deployment dash manager deployments update bowtie deploy now you’re probably wondering why update and this is because the configuration has been deployed even though it’s a preview deployment manager still sees it as a deployment and has created what google cloud calls a shell and so by using update you can fully deploy the configuration using your last preview to perform that update and this will deploy your resources exactly how you see it in the manifest and so anytime i make an adjustment to either the configuration or the templates i can simply run the update command instead of doing the whole deployment again so i want to get this deployed now and so i’m going to hit enter and i’ll be back in a minute once it’s deployed all the resources and success my deployment is successful and as you can see here there are no errors and all the resources are in a completed state so i’m going to select my bow tie website in my manifest and i’ll have access to the resource with a link up here at the top that will bring me to the instance as well i can ssh into the instance and i have all the same options that i have in the compute engine console and so in order to verify that all my resources have been deployed i’m going to go back over to the tab that i already have open and as you can see my instance has been deployed and i want to check to see if my network has been deployed so i’m going to go up to the navigation menu and i’m going to head on down to vpc network and as you can see here bowtie network has been deployed with its two corresponding firewall rules i’m going to drill down into bowtie 
network and check out the firewall rules and as you can see here ssh access and web server access have been created with its corresponding protocols and ports and so now that i know that all my resources have been deployed i want to head back on over to compute engine to see if my instance has been configured properly so i’m going to click on ssh to see if i can ssh into the instance and success with ssh so i know that this is working properly and so i’m going to close this tab down and i also want to see whether or not my web server has been configured properly with the metadata that i provided it and so i can directly open up the webpage by simply clicking on this link and success my you look dapper today why thank you tony bowtie and so as you can see the web server has been configured properly using the metadata that i provided so i wanted to congratulate you on making it to the end of this demo and hope it has been extremely useful and gave you an understanding of how infrastructure is code is used in google cloud using their native tools i hope this also triggered some possible use cases for you that will allow you to automate more resources and configurations in your environment and allow you to start innovating on fantastic new ways for cicd for those of you who are familiar with infrastructure as code this may have been a refresher but will give you some insight for questions on the exam that cover deployment manager and just as a quick note for those of you who are looking to learn more about infrastructure as code i have put a few links in the lesson text going into depth on deployment manager and another tool that google recommends called terraform and so now before you go we want to clean up all the resources that we’ve deployed to reduce any incurred costs and because deployment manager makes it easy we can do it in one simple step so i’m going to head back on over to my open tab where i have my console open to deployment manager and i’m going to head on over to the delete button and simply click on delete now deployment manager gives me the option of deleting all the resources it created or simply deleting the manifest but keeping the resources untouched and so you want to select delete bowtie deploy with all of its resources and simply click on delete all and this will initiate the teardown of all the resources that have been deployed from the bowtie deploy configuration and this will take a few minutes to tear down but if you ever have a larger configuration to deploy just as a note it may take a little bit longer to both deploy and to tear down and so just as a recap you’ve created a configuration file and two templates in the cloud shell editor you then deployed your configuration using deployment manager through the command line in cloud shell you then verified each individual resource that was deployed and verified the configuration of each resource congratulations again on a job well done and so that’s pretty much all i wanted to cover in this demo when it comes to deploying resources using deployment manager so you can now mark this as complete and let’s move on to the next one [Music] welcome back and in this lesson we’re going to learn about google cloud load balancing and how it’s used to distribute traffic within the google cloud platform google cloud load balancing is essential when using it with instance groups kubernetes clusters and is pretty much the defacto when it comes to balancing traffic coming in as well as within your gcp environment knowing the 
differences between the types of load balancers and which one to use for specific scenarios is crucial for the exam as you will be tested on it and so there's a lot to cover here so with that being said let's dive in now i wanted to start off with some basics with regards to what load balancing is and so when it comes to the load balancer itself a load balancer distributes user traffic across multiple instances of your application so by spreading the load you reduce the risk of your applications experiencing performance issues a load balancer is a single point of entry with either one or multiple back ends and within gcp these back ends could consist of either instance groups or negs and i'll be getting into negs in just a little bit load balancers on gcp are fully distributed and software defined so there is no actual hardware load balancer involved in load balancing on gcp it is completely software defined and so there's no need to worry about any hardware or any pre-warming time as this is all done through software now depending on which load balancer you choose google cloud gives you the option of having either a global load balancer or a regional load balancer the load balancers are meant to serve content as close as possible to the users so that they don't experience increased latency which gives the users a better experience as well as reducing latency on your applications when dealing with load balancers in between services google cloud also offers auto scaling with health checks in their load balancers to make sure that your traffic is always routed to healthy instances and by using auto scaling you're able to scale up the amount of instances you need in order to handle the load automatically now as there are many different load balancers to choose from it helps to know what specific aspects you're looking for and how you want your traffic distributed and so google has broken them down for us into these three categories the first category is global versus regional global load balancing is great for when your back ends are distributed across multiple regions and your users need access to the same applications and content using a single anycast ip address as well when you're looking for ipv6 termination global load balancing will take care of that now when it comes to regional load balancing this is if you're looking at serving your back ends in a single region and handling only ipv4 traffic now once you've determined whether or not you need global versus regional load balancing the second category to dive into is external versus internal external load balancers are designed to distribute traffic coming into your network from the internet and internal load balancers are designed to distribute traffic within your network and finally the last category that will help you decide on what type of load balancer you need is the traffic type and shown here are all the traffic types that are covered these being http https tcp and udp
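as a quick hedged aside one easy way to see these categories on an existing project is to list the forwarding rules since each one shows whether it is global or regional and whether its load balancing scheme is external or internal:

```sh
# list forwarding rules with their scope and scheme; an empty region column
# means the rule (and so the load balancer) is global
gcloud compute forwarding-rules list \
    --format="table(name, region, IPAddress, loadBalancingScheme)"
```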
and so now that we've covered the different types of load balancing that's available on google cloud i wanted to dive into some more depth on the load balancers themselves here you can see that there are five load balancers available and i will be going through each one of these in detail now before diving into the load balancers themselves i wanted to introduce you to a concept used in gcp for all load balancers called backend services how a load balancer knows exactly what to do is defined by a backend service and this is how cloud load balancing knows how to distribute the traffic the backend service configuration contains a set of values such as the protocol used to connect to back ends various distribution and session settings health checks and timeouts these settings provide fine grained control over how your load balancer behaves an external http or https load balancer must have at least one backend service and can have multiple backend services the back ends of a backend service can be either instance groups or network endpoint groups also known as negs but not a combination of both and so just as a note you'll hear me refer to negs over the course of this lesson and so a network endpoint group also known as a neg is a configuration object that specifies a group of back-end endpoints or services and a common use case for this configuration is deploying services into containers now moving on to the values themselves i wanted to first start with health checks and google cloud uses the overall health state of each back end to determine its eligibility for receiving new requests or connections back ends that respond successfully for the configured number of times are considered healthy back ends that fail to respond successfully for a separate number of times are considered unhealthy and when a back end is considered unhealthy traffic will not be routed to it next up is session affinity and session affinity sends all requests from the same client to the same back end if the back end is healthy and has capacity service timeout is the next value and this is the amount of time that the load balancer waits for a backend to return a full response to a request next up is traffic distribution and this comprises three different values the first one is the balancing mode and this defines how the load balancer measures back-end readiness for new requests or connections the second one is target capacity and this defines a target maximum number of connections a target maximum rate or a target maximum cpu utilization and the third value for traffic distribution is the capacity scaler and this adjusts the overall available capacity without modifying the target capacity and the last value for backend services are the back ends themselves and a back end is a group of endpoints that receive traffic from a google cloud load balancer and there are several types of back ends but the one that we are concentrating on for this section and for the exam is the instance group now backend services are not critical to know for the exam but i wanted to introduce you to this concept to add a bit more context for when you are creating load balancers in any environment and it will help you understand other concepts in this lesson
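to make those backend service values a little more concrete here is a hedged gcloud sketch that wires most of them together using hypothetical names rather than anything from this course:

```sh
# health check the backend service uses to decide which back ends are healthy
gcloud compute health-checks create http demo-health-check --port=80

# backend service with a protocol, health check, session affinity and timeout
gcloud compute backend-services create demo-backend-service \
    --protocol=HTTP \
    --health-checks=demo-health-check \
    --session-affinity=CLIENT_IP \
    --timeout=30s \
    --global

# attach an instance group back end with a balancing mode, target capacity
# and capacity scaler
gcloud compute backend-services add-backend demo-backend-service \
    --instance-group=demo-mig \
    --instance-group-zone=us-east1-b \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8 \
    --capacity-scaler=1.0 \
    --global
```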
and so this is the end of part one of this lesson it was getting a bit long so i decided to break it up this would be a great opportunity for you to get up and have a stretch get yourself a coffee or tea and whenever you're ready join me in part two where we will be starting immediately from the end of part one so you can now complete this video and i will see you in part two this is part two of the cloud load balancers lesson and we'll be starting exactly where we left off in part one so with that being said let's dive in now before jumping right into the first load balancer that i wanted to introduce which is the http and https load balancer there's a couple of different concepts that i wanted to introduce and these are the methods of how an http and https load balancer distributes traffic using forwarding rules and these are cross region load balancing and content based load balancing now touching on cross region load balancing when you configure an external http or https load balancer in the premium tier it uses a global external ip address and can intelligently route requests from users to the closest backend instance group or neg based on proximity for example if you set up instance groups in north america and europe and attach them to a load balancer's backend service user requests around the world are automatically sent to the vms closest to the users assuming that the vms pass health checks and have enough capacity if the closest vms are all unhealthy or if the closest instance group is at capacity and another instance group is not at capacity the load balancer automatically sends requests to the next closest region that has available capacity and so here in this diagram a user in switzerland hits the load balancer by going to bowtieinc.co and because there are vms that are able to serve that traffic in europe west 6 traffic is routed to that region and so now getting into content based load balancing http and https load balancing supports content based load balancing using url maps to select a backend service based on the requested host name request path or both for example you can use a set of instance groups or negs to handle your video content another set to handle static content as well as another set to handle any images you can also use http or https load balancing with cloud storage buckets and after you have your load balancer set up you can add cloud storage buckets to it
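as a hedged illustration of content based load balancing here is roughly what a url map with path based rules could look like from the command line with all names being hypothetical:

```sh
# default service handles everything that doesn't match a path rule
gcloud compute url-maps create demo-url-map \
    --default-service=demo-web-backend

# send /video/* and /static/* to their own backend services
gcloud compute url-maps add-path-matcher demo-url-map \
    --path-matcher-name=content-matcher \
    --new-hosts="*" \
    --default-service=demo-web-backend \
    --path-rules="/video/*=demo-video-backend,/static/*=demo-static-backend"
```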
now moving right along when it comes to the http and https load balancer this is a global proxy based layer 7 load balancer which operates at the application layer and so just as a note here compared with all the other load balancers that are available in gcp the http and https load balancer is the only layer 7 load balancer all the other load balancers in gcp are layer 4 and work at the network layer and so this load balancer enables you to serve your applications worldwide behind a single external unicast ip address external http and https load balancing distributes http and https traffic to back ends hosted on compute engine and gke external http and https load balancing is implemented on google front ends or gfes as shown here in the diagram gfes are distributed globally and operate together using google's global network and control plane in the premium tier gfes offer cross-regional load balancing directing traffic to the closest healthy backend that has capacity and terminating http and https traffic as close as possible to your users with the standard tier the load balancing is handled regionally and this load balancer is available to be used both externally and internally which makes this load balancer global external and internal this load balancer also gives support for https and ssl which covers tls for encryption in transit as well this load balancer accepts all traffic whether it is ipv4 or ipv6 traffic and just know that ipv6 traffic will terminate at the load balancer and then it will forward traffic as ipv4 so it doesn't really matter which type of traffic you're sending the load balancer will still send the traffic to the back end using ipv4 this traffic is distributed by location or by content as shown in the previous diagram forwarding rules are in place to distribute defined targets to each target pool for the instance groups again defined targets could be content based and therefore as shown in the previous diagram video content could go to one target whereas static content could go to another target url maps direct your requests based on rules so you can create a bunch of rules depending on what type of traffic you want to direct and put them in maps for requests ssl certificates are needed for https and these can be either google managed or self-managed and so just as a quick note here the ports used for http are 80 and 8080 and for https the port that is used is 443 now moving on the next load balancer is the ssl proxy and ssl proxy load balancing is a reverse proxy load balancer that distributes ssl traffic coming from the internet to your vm instances when using ssl proxy load balancing for your ssl traffic user ssl connections are terminated at the load balancing layer and then proxied to the closest available backend instances by using either ssl or tcp with the premium tier ssl proxy load balancing can be configured as a global load balancing service with the standard tier the ssl proxy load balancer handles load balancing regionally this load balancer also distributes traffic by location only ssl proxy load balancing lets you use a single ip address for all users worldwide and is a layer 4 load balancer which works at the network layer this load balancer has support for tcp with ssl offload and this is something specific to remember for the exam this is not like the http or https load balancer where we can use specific rules or specific configurations in order to direct traffic the ssl proxy load balancer supports both ipv4 and ipv6 but again it does terminate at the load balancer and forwards the traffic to the back end as ipv4 traffic and forwarding rules are in place to distribute each defined target to its proper target pool and encryption is supported by configuring backend services to accept all the traffic over ssl now just as a note it can also be used for other protocols that use ssl such as websockets and imap over ssl and it carries a number of open ports to support them
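purely as a hedged sketch of the moving parts behind an ssl proxy load balancer and not a command by command recipe the pieces fit together something like this with hypothetical names and a global backend service that is assumed to already exist with the ssl protocol:

```sh
# certificate the proxy uses to terminate ssl connections from clients
gcloud compute ssl-certificates create demo-ssl-cert \
    --certificate=cert.pem --private-key=key.pem

# the ssl proxy itself, pointing at an existing global backend service
gcloud compute target-ssl-proxies create demo-ssl-proxy \
    --backend-service=demo-ssl-backend \
    --ssl-certificates=demo-ssl-cert

# a global forwarding rule that sends port 443 traffic to the proxy
gcloud compute forwarding-rules create demo-ssl-rule \
    --global \
    --target-ssl-proxy=demo-ssl-proxy \
    --ports=443
```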
moving on to the next load balancer which is the tcp proxy the tcp proxy load balancer is a reverse proxy load balancer that distributes tcp traffic coming from the internet to your vm instances when using tcp proxy load balancing traffic coming over a tcp connection is terminated at the load balancing layer and then forwarded to the closest available backend using tcp or ssl so this is where the load balancer will determine which instances are at capacity and send traffic to those instances that are not like ssl proxy load balancing tcp proxy load balancing lets you use a single ip address for all users worldwide the tcp proxy load balancer automatically routes traffic to the back ends that are closest to the user this is a layer 4 load balancer and again can serve traffic both globally and externally tcp proxy distributes traffic by location only and is intended specifically for non-http traffic although you can decide if you want to use ssl between the proxy and your back end and you can do this by selecting a certificate on the back end again this type of load balancer supports ipv4 and ipv6 traffic and ipv6 traffic will terminate at the load balancer which forwards that traffic to the back end as ipv4 traffic now tcp proxy load balancing is intended for tcp traffic and supports many well-known ports such as port 25 for simple mail transfer protocol or smtp next up we have the network load balancer now the tcp udp network load balancer is a regional pass-through load balancer a network load balancer distributes tcp or udp traffic among instances in the same region network load balancers are not proxies and therefore responses from the back end vms go directly to the clients not back through the load balancer the term known for this is direct server return as shown here in the diagram this is a layer 4 regional load balancer and an external load balancer as well that serves regional locations it supports either tcp or udp but not both although it can load balance udp tcp and ssl traffic on the ports that are not supported by the tcp proxy and ssl proxy ssl traffic can still be decrypted by your back end instead of the load balancer itself traffic is also distributed by incoming protocol data this being the protocol scheme and scope there is no tls offloading or proxying and forwarding rules are in place to distribute defined targets to their target pools and this is for tcp and udp only now with other protocols they use target instances as opposed to instance groups lastly a network load balancer can also only support self-managed ssl certificates as opposed to the google managed certificates and so the last load balancer to introduce is the internal load balancer now an internal tcp or udp load balancer is a layer 4 regional load balancer that enables you to distribute traffic behind an internal load balancing ip address that is accessible only to your internal vm instances internal tcp and udp load balancing distributes traffic among vm instances in the same region this load balancer supports tcp or udp traffic but not both and as i said before this type of load balancer is used to balance traffic within gcp across instances this load balancer cannot be used for balancing internet traffic as it is internal only traffic is automatically sent to the back end as it does not terminate client connections and for forwarding rules this load balancer has specific requirements where you need to specify at least one and up to five ports by number or you must specify all in order to forward traffic on all ports now again like the network load balancer you can use either tcp or udp but not both
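and as one last hedged sketch for this lesson an internal tcp load balancer boils down to a regional backend service plus an internal forwarding rule that only vms inside the vpc can reach again with hypothetical names and a health check that is assumed to already exist:

```sh
# regional backend service using the internal load balancing scheme
gcloud compute backend-services create demo-internal-backend \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=demo-health-check \
    --region=us-east1

# internal forwarding rule that hands port 80 traffic to that backend service
gcloud compute forwarding-rules create demo-internal-rule \
    --load-balancing-scheme=INTERNAL \
    --ports=80 \
    --network=default \
    --subnet=default \
    --region=us-east1 \
    --backend-service=demo-internal-backend
```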
and so that's pretty much all i had to cover with this lesson on load balancing please remember that for the exam you will need to know the differences between them all in my experience there are a few questions that come up on the exam where you will need to know which load balancer to use and so a good idea might be to dive into the console and have a look at the options as well as going back through this lesson as a refresher to understand each use case this is also a crucial component in any environment especially when serving applications to the internet for any three-tier web application or kubernetes cluster and so that pretty much sums up this lesson on load balancing so you can now mark this lesson as complete and let's move on to the next one welcome back in this lesson i will be going into depth on instance groups along with instance templates instance groups are a great way to set up a group of identical servers and used in conjunction with instance groups instance templates handle the instance properties needed to deploy the instance groups into your environment this lesson will dive into the details of the features the use cases and how instance groups and instance templates work together to create a highly scalable and performant environment now there's a lot to cover here so with that being said let's dive in now an instance group is a collection of vm instances that you can manage as a single entity compute engine offers two kinds of vm instance groups managed and unmanaged managed instance groups or migs let you operate applications on multiple identical vms you can make your workload scalable and highly available by taking advantage of automated mig services like auto scaling auto healing regional and zonal deployments and automatic updating and i'll be getting into these services in just a sec now when it comes to unmanaged instance groups they also let you load balance across a fleet of vms but this is something that you need to manage yourself and i'll be going deeper into unmanaged instance groups a bit later right now i wanted to take some time to go through the features and use cases of migs in a bit more detail for some more context starting off with the use cases now migs are great for stateless serving workloads such as website front ends web servers and website applications as the application does not preserve its state and saves no data to persistent storage all user and session data stays with the client and this makes scaling up and down quick and easy migs are also great for stateless batch workloads and these are high performance or high throughput compute workloads such as image processing from a queue and lastly you can build highly available stateful workloads using stateful managed instance groups or stateful migs stateful workloads include applications with stateful data or configuration such as databases legacy monolith type applications and long running batch computations with checkpointing you can improve uptime and resiliency of these types of applications with auto healing controlled updates and multi-zone deployments while preserving each instance's unique state including instance names persistent disks and metadata now that i've covered the type of workloads that are used with migs i wanted to dive into the features starting with auto healing now when it comes to auto healing managed instance groups maintain high availability of your applications by proactively keeping your instances in a running state a mig automatically recreates an instance that is not running and managed instance groups also take care of application-based auto healing and this improves application availability by relying on a health check that detects things like freezing crashing or overloading if a health check determines that an application has failed on a vm the mig auto healer automatically recreates that vm instance the health checks used to monitor migs are similar to the health checks used for load balancing with a few little differences load balancing health checks help direct traffic away from unresponsive instances and towards healthy ones these health checks cannot recreate instances whereas mig health checks proactively signal to delete and recreate instances that become unhealthy moving on to the managed instance groups regional or multi-zone feature you have the option of creating regional migs or zonal migs and regional migs provide higher availability compared to zonal migs as you'll see in the rough sketch below and in the details that follow
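here is that rough sketch a hedged gcloud version of a regional mig built from a hypothetical template with an auto healing health check and a simple autoscaling policy attached none of these names match the demo later in this section:

```sh
# health check used for auto healing rather than load balancing
gcloud compute health-checks create http demo-app-health-check \
    --port=80 --check-interval=30s --unhealthy-threshold=3

# regional mig spread across the zones of us-east1 with auto healing enabled
gcloud compute instance-groups managed create demo-mig \
    --template=demo-template \
    --size=3 \
    --region=us-east1 \
    --health-check=demo-app-health-check \
    --initial-delay=300

# optional autoscaling policy based on cpu utilization
gcloud compute instance-groups managed set-autoscaling demo-mig \
    --region=us-east1 \
    --min-num-replicas=3 \
    --max-num-replicas=6 \
    --target-cpu-utilization=0.6
```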
instances in a regional mig are spread across multiple zones in a single region google recommends regional migs over zonal migs as you can manage twice as many migs as zonal migs so you can manage 2 000 migs instead of 1000 you can also spread your application load across multiple zones instead of a single zone or managing multiple zonal migs across different zones and this protects against zonal failures and unforeseen scenarios where an entire group of instances in a single zone malfunctions in the case of a zonal failure or if a group of instances in a zone stops responding a regional mig continues supporting your instances by continuing to serve traffic to the instances in the remaining zones now cloud low balancing can use instance groups to serve traffic so you can add instance groups to a target pool or to a back end an instance group is a type of back end and the instances in the instance group respond to traffic from the load balancer the back end service in turn knows which instances it can use and how much traffic they can handle and how much traffic they are currently handling in addition the back-end service monitors health checking and does not send new connections to unhealthy instances now when your applications require additional compute resources migs support auto scaling that dynamically add or remove instances from the mig in response to an increase or decrease in load you can turn on auto scaling and configure an auto scaling policy to specify how you want the group to scale not only will auto scaling scale up to meet the load demands but will also shrink and remove instances as the load decreases to reduce your costs auto scaling policies include scaling based on cpu utilization load balancing capacity and cloud monitoring metrics and so when it comes to auto updating you can easily and safely deploy new versions of software to instances in a mig the rollout of an update happens automatically based on your specifications you can also control the speed and scope of the deployments in order to minimize disruptions to your application you can optionally perform rolling updates as well as partial rollouts for canary testing and for those who don’t know rolling updates allow updates to take place with zero downtime by incrementally updating instances with new ones as well canary testing is a way to reduce risk and validate new software by releasing software to a small percentage of users with canary testing you can deliver to certain groups of users at a time and this is also referred to as stage rollouts and this is a best practice in devops and software development now there are a few more things that i wanted to point out that relate to migs you can reduce the cost of your workload by using preemptable vm instances in your instance group and when they are deleted auto healing will bring the instances back when preemptable capacity becomes available again you can also deploy containers to instances in managed instance groups when you specify a container image in an instance template and is used to create a mig each vm is created with the container optimized os that includes docker and your container starts automatically on each vm in the group and finally when creating migs you must define the vpc network that it will reside in although when you don’t define the network google cloud will attempt to use the default network now moving on into unmanaged instance groups for just a minute unmanaged instance groups can contain heterogeneous instances and these are instances 
that are of mixed sizes of cpu ram as well as instance types and you can add and remove these instances from the group whenever you choose there’s a major downside to this though unmanaged instance groups do not offer auto scaling auto healing rolling update support multi-zone support or the use of instance templates and are not a good fit for deploying highly available and scalable workloads you should only use unmanaged instance groups if you need to apply load balancing to groups of these mixed types of instances or if you need to manage the instances yourself so unmanaged instance groups are designed for very special use cases where you will need to mix instance types in almost all cases you will be using managed instance groups as they were intended to capture the benefits of all the features they have to offer now in order to launch an instance group into any environment you will need another resource to do this and this is where instance templates come into play an instance template is a resource that you can use to create vm instances and managed instance groups instance templates define the machine type boot disk image or container image as well as labels and other instance properties you can then use an instance template to create a mig or vm instance instance templates are an easy way to save a vm instances configuration so you can use it later to recreate vms or groups of vms an instance template is a global resource that is not bound to a zone or region although you can restrict a template to a zone by calling out specific zonal resources now there is something to note for when you are ever using migs if you want to create a group of identical instances you must use an instance template to create a mig and is something you should always keep in the front of mind when using migs these two resources both instance templates and managed instance groups go hand in hand now some other things to note is that instance templates are designed to create instances with identical configurations so you cannot update an existing instance template or change an instance template after you create it if you need to make changes to the configuration create a new instance template you can create a template based on an existing instance template or based on an existing instance to use an existing vm to make a template you can save the configuration using the gcloud command gcloud instance dash templates create or to use the console you can simply go to the instance templates page click on the template that you want to update and click on create similar the last thing that i wanted to point out is that you can use custom or public images in your instance templates and so that’s pretty much all i had to cover when it comes to instance groups and instance templates managed instance groups are great for when you’re looking at high availability as a priority and letting migs do all the work of keeping your environment up and running and so you can now mark this lesson as complete and whenever you’re ready join me in the next one where we go hands-on with instance groups instance templates and load balancers in a demo welcome back in this demo we’re going to put everything that we’ve learned together in a hands-on demo called managing bow ties we’re going to create an instance template and next we’re going to use it to create an instance group we’re then going to create a low balancer with a new back end and create some health checks along the way we’re then going to verify that all instances are working 
by browsing to the load balancer ip and verifying the website application we’re then going to stress test one of the instances to simulate a scale out using auto scaling and then we’re going to simulate scaling the instance group back in now there’s quite a bit to do here so with that being said let’s dive in so here i am logged in as tony bowties at gmail.com under project bowtie inc and so the first thing that you want to do is you want to make sure that you have a default vpc network already created and so just to double check i’m going to go over to the navigation menu i’m going to scroll down to vpc network and yes i do have a default vpc network so i’m going to go ahead and start creating my resources and so now what i want to do is i want to create my instance template and so in order to do that i’m going to go back up to the navigation menu i’m going to go down to compute engine and go up to instance templates as you can see i currently have no instance templates and yours should look the same and so you can go ahead and click on create instance template and so just as a note there are no monthly costs associated with instance templates but this estimate here on the right is to show you the cost of each instance you will be creating with this template okay so getting right into it i’m going to name this instance template bowtie template and since we’re spinning up a lot of vms you want to be conscious on costs and so under series you’re going to click on the drop down and you’re going to select n1 and under machine type you’re going to select f1 micro and this is the smallest instance type as well as the cheapest within google cloud you can go ahead and scroll down right to the bottom here under firewall you want to check off allow http traffic next you want to select management security disks networking and sold tenancy you scroll down a little bit and under startup script you’re going to paste in the script that’s available in the repo and you will find a link to this script and the repo in the lesson text and so you can leave all the other options as its default and simply click on create it’s going to take a couple minutes here okay and the instance template is ready and so the next step that you want to do is create an instance group and as i said in a previous lesson in order to create an instance group you need an instance template hence why we made the instance template first okay and our instance template has been created and so now that you’ve created your instance template you can head on over to instance groups here in the left hand menu and as expected there are no instance groups and so you can go ahead and click on the big blue button and create an instance group you’re going to make sure that new managed instance group stateless is selected and here you have the option of choosing a stateful instance group as well as an unmanaged instance group and so we’re going to keep things stateless and so for the name of the instance group you can simply call this bowtie group i’m going to use the same name in the description and under location you want to check off multiple zones in under region you want to select us east one and if you click on configure zones you can see here that you can select all the different zones that’s available in that region that you choose to have your instances in and so i’m going to keep it under all three zones i’m going to scroll down here a little bit and under instance template you should see bow tie template you can select that you can 
scroll down a little bit more and here under minimum number of instances you want to set the minimum number of instances to 3 and under maximum number of instances you want to set that to 6 and so this is going to be double the amount of the minimum number of instances so when you’re scaled out you should have a maximum of 6 instances and when you’re scaled in or you have very low traffic you should only have three instances so you can scroll down some more and under auto healing you want to select the health check and you’re going to go ahead and create a new health check under name you can call this healthy bow ties i’m going to use the same for the description and i’m going to leave the rest as its default and go down and click on save and continue i’m going to scroll down some more and i’m going to leave the rest as is and simply click on create and it’s going to take a couple minutes here and so i’m going to pause the video and i’ll be back in a flash okay and my instance group has been created and so to get a better look at it i’m going to click on bow tie group and i can see here that three instances have been created if i go up to vm instances you can see here that i have three instances but under instance groups because i have health check enabled it shows that my instances are unhealthy and this is because i still need to create a firewall rule that will allow google’s health check probes to reach my vm instances and so you’re going to go ahead and create that firewall rule so you can bring the health check status up to healthy so i’m going to go over to the navigation menu and scroll down to vpc network and go over to firewall here under firewall as expected you have the default firewall rules from the default created vpc network and so i’m going to go up to create firewall and you can name this firewall rule allow health check i’m going to use the same for the description i’m going to scroll down here a little bit and under targets i’m going to select all instances in the network source filter i’m going to leave as i p ranges and so here under source i p ranges i want to enter in the ip addresses for the google cloud health check probes and you can find these in the documentation and i will also be supplying them in the instructions and there are two sets of ip addresses that need to be entered and just as a note you don’t need to know this for the exam but it’s always a good to know if you’re ever adding health checks to any of your instances i’m going to scroll down a little bit to protocols and ports and under tcp i’m going to check it off and put in port 80. 
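for reference here is a hedged command line version of the same firewall rule the two source ranges are the health check probe ranges that google documents and the rule name simply mirrors the one being created in the console:

```sh
# allow google's health check probes to reach port 80 on all instances
gcloud compute firewall-rules create allow-health-check \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22
```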
that’s pretty much all you have to do here so whenever you entered all that information in you can simply click on create and so now i have a firewall rule that will allow health checks to be done and so it may take a minute or two but if i head back on over to my compute engine instances and go over to my instance groups i’ll be able to see that all my instances are now healthy and so whenever you’re creating instance groups and you’re applying health checks this firewall rule is necessary so please be aware okay so now that we’ve created our instance templates we’ve created our instance groups and we created a firewall rule in order to satisfy health checks we can now move on to the next step which is creating the load balancer so i’m going to go back up to the navigation menu and i’m going to scroll down to network services and over to load balancing and as expected there are no load balancers created and so whenever you’re ready you can click on the big blue button and create a new low balancer here you have the option of creating an http or https load balancer along with a tcp load balancer or a udp load balancer and because we’re serving external traffic on port 80 we’re going to use the http load balancer so you can click on start configuration and i’m being prompted to decide between internet facing or internal only and you’re going to be accepting traffic from the internet to your load bouncer so make sure that from internet to my vms is checked off and simply click continue and so next you will be prompted with a page with a bunch of configurations that you can enter and so we’ll get to that in just a second but first we need to name our load balancer and so i’m going to call this bowtie dash lb for low balancer and so next step for your load balancer is you need to configure a back end so you can click on back end configuration and here you have the option of selecting from back-end services or back-end buckets so you’re going to go ahead and click on back-end services and create a back-end service and here you will be prompted with a bunch of fields to fill out in order to create your back-end service and you can go ahead and name the backend service as bowtie backend service back-end type is going to be instance group and you can leave the protocol named port and timeout as is as we’re going to be using http under instance group in new back-end if you select the drop-down you should see your available bow tie group instance group select that scroll down a little bit and under port numbers you can enter in port 80 and you can leave all the other options as default and simply click on done and so if you’re ever interested you can always add a cache using cloud cdn now i know we haven’t gone through cloud cdn in this course but just know that this is google’s content delivery network and it uses google’s global edge network to serve content closer to users and this accelerates your websites and your applications and delivers a better user experience for your user okay and moving on here under health check if i click on the drop down you should see healthy bow ties you can select that for your health check and so just as a note here under advanced configurations you can set your session affinity your connection draining timeout as well as request and response headers and so we don’t need any of that for this demo and so i’m going to go ahead and collapse this and once you’ve finished filling in all the fields you can simply click on create okay and so you should now have your back 
and so the only thing that's left to configure is the front end so you can go up and click on front-end configuration and you can name your front end bowtie front-end service we're gonna keep the protocol as http and here is where you would select the network service tier choosing either premium or standard and if you remember from the load balancing lesson in order to use this as a global load balancer i need to use the premium tier okay and we're going to keep this as ipv4 with an ephemeral ip address on port 80 so once you've finished configuring the front end you can simply click on done and you can go and click on review and finalize and this will give you a summary of your configuration and so i'm happy with the way everything's configured and if you are as well you can simply click on create and this may take a minute or two but it will create your load balancer along with your back end and your front end so again i'm going to pause the video here for just a minute and i'll be back before you can say cat in the hat okay and my load balancer has been created and to get a little bit more detail i'm going to drill down into it and i can see here the details of my load balancer along with my monitoring and any caching but i don't have any caching enabled and therefore nothing is showing so going back to the details i can see here that i have a new ip address for my load balancer and i'll be getting into that in just a minute i'm going to go back here and i'm going to check out my back ends click on bow tie back end service and here i can see the requests per second as well as my configuration and if you do see this caution symbol here showing that some of your instances are unhealthy it's only because the load balancer needs time to do a full health check on all the instances in the instance group and so this will take some time okay and so i'm going to go back over and check out my front end and there's nothing to drill down into with the front end service but it does show me my scope the address the protocol the network tier and the load balancer itself so this is the end of part one of this demo it was getting a bit long so i decided to break it up this would be a great opportunity for you to get up have a stretch get yourself a coffee or tea and whenever you're ready part two will be starting immediately from the end of part one so you can now mark this as complete and i'll see you in part two this is part two of the managing bow ties demo and we will be starting exactly where we left off in part one so with that being said let's dive in and so before you move forward you want to make sure that all your instances are considered healthy by your load balancer and as i can see here all my instances in my instance group are considered healthy by the load balancer and so just to verify this i'm going to go ahead and copy the ip address and you can open up a new tab in your browser and simply paste it in and success as you can see here managing the production of many bow ties can be automated but managing the wearer of them definitely cannot another fine message from the people at bow tie inc now although this is a simple web page i used a couple of variables just to show you the load balancing that happens in the background and traffic will be load balanced between all of the instances in the instance group so if you click on refresh then you should see the machine name and the data center change so every time i click refresh the traffic will be routed to a different instance in a different zone
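if you'd rather watch this from cloud shell than the browser a quick hedged loop against the load balancer ip does the same thing assuming the page prints the machine name the way the demo page does:

```sh
# replace the placeholder with your load balancer's external ip address
LB_IP=203.0.113.10

# fetch the page a few times; the reported machine name should change
for i in $(seq 1 10); do
  curl -s "http://${LB_IP}/" | grep -i "machine"
done
```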
and so this is a simple simulation of how traffic is load balanced between the different instances in their different zones okay so now that we've verified the website application i'm going to close down this tab and so now that we've created our instance template we've created our instance group and we've created our load balancer with the back end and front end service and it looks like everything seems to be working together nicely we're going to go ahead and simulate a scale out using auto scaling and so in order to simulate this we're going to do a stress test on one of the instances so i'm going to head back on over to the navigation menu scroll down to compute engine and here you can ssh into any one of these instances and run the stress test from there so i'm going to pick the one here at the top and so whenever you're logged in you can simply paste in the command that i've included in the instructions that will run the stress test and so this is a stress test application called stress that was included in the startup script and this will put stress on the server itself and trigger a scale out to handle the load and it'll do this for 30 seconds so you can go ahead and hit enter and head back over to the console and in about a minute or two you should see some new instances that will be created by your instance group in order to handle the load okay and after a couple of minutes it's showing here that instances are being created and it will be scaling out to the maximum amount of instances that i've set it to which is six i'm going to drill down into this and yes a scale out is happening and some new instances are being created to handle the load so i'm going to give it just a minute here okay and as you can see here all the instances have been created they've been added to the instance group and all of them are marked as healthy and so just to verify that all the instances are working i'm going to go ahead and open up a new tab i'm going to plug in the ip address of my load balancer and i'm going to simply cycle through all these instances to make sure that all of them are working and it looks like i have no issues and so now that you've simulated a scale out i wanted to go ahead and run a scale in and so i'm first going to close up these tabs now with regards to scaling there is a 10 minute stabilization period that cannot be adjusted and this is a feature built into google cloud now because i respect your time as a student i'm going to show you a workaround to trigger a scale in sooner strictly for this demo and i also wanted to caution that this should never be done in a production or production-like environment you should always wait for the scaling to happen on its own and never force it this method is being used strictly for learning purposes to save you some time and so i'm going to go ahead to the top menu and click on rolling restart and replace and this will bring up a new page where you will have the option to either restart or replace any instances in your instance group and so for your purposes under operation make sure that you have restart checked off and this will restart all of your instances and only bring up the ones that are needed so i'm going to go ahead and click on restart i'm going to go back to my instance group console and i'm just going to give this a few minutes to cook and i'll be right back in a flash okay so it looks like the instance group has scaled in and we are now down to three
instances the minimum that we configured for our instance group and so that pretty much covers the managing bow ties demo so i wanted to congratulate you on making it through this demo and i hope that this has been extremely useful in excelling your knowledge on managing instance templates managed instance groups and creating load balancers with back-end and front-end services now this was a jam-packed demo and there was a lot to pack in with everything you’ve learned from the last few lessons and so just as a recap you created an instance template with your startup script you then created a new instance group with a health check to go with it configuring auto scaling for a minimum of three instances you then created a firewall rule so that the health check probes were able to connect to the application and you then created a load balancer with its back end and front-end service and verified that the website application was indeed up and running you then ran a stress test to allow a simulation of a scale out of your instance group and then simulated a scale in of your instance group great job and so now that we’ve completed this demo you want to make sure that you’re not accumulating any unnecessary costs and so i’m going to go ahead and walk you through the breakdown of deleting all these resources so first you’re going to go ahead and delete the load balancer go back up to the navigation menu and scroll down to network services and go over to load balancing so i’m going to go ahead and check off bow tie lb and simply go up to the top and click on delete it’s going to ask me if i’m sure i want to do this i’m also going to select bow tie back end service and i can delete my load balancer and my back end service all at once i’m going to go ahead and delete load balancer and the selected resources and this should clear up within a few seconds okay and our load balancer has been deleted i’m going to just go up here to the back end make sure everything’s good yeah we’re all clean same thing with front end and so now you can move on to instance groups so i’m going to head back up to the navigation menu go down a compute engine and go up to instance groups and here you can just simply check off bow tie group and simply click on delete you’re going to be prompted with a notification to make sure you want to delete bow tie group yes i want to delete and again this should take about a minute okay it actually took a couple minutes but my instance group has been deleted and so now i’m going to go over to instance templates and i’m going to delete my template and check off bow tie template and simply click delete you’re going to get a prompt to make sure you want to delete your instance template yes you want to delete and success you’ve now deleted all your resources although there is one more resource that you will not be billed for but since we’re cleaning everything up we might as well clean that up as well and this is the firewall rule that we created and go over to the navigation menu and scroll down to vpc network i’m going to go to firewall here on the left hand menu and here i’m going to check off the allow health check firewall rule and simply click on delete i’m going to get a prompt to make sure that i want to delete it yes you want to delete i’m going to quickly hit refresh and yes we’ve deleted it and so this concludes the end of this demo so you can now mark this as complete and i’ll see you in the next one welcome back in this next section we will be focusing on google cloud’s premier 
container orchestration service called kubernetes but before we can dive right into kubernetes and the benefits that it gives to containers you’ll need an understanding as to what containers are and what value containers provide in this lesson i will be covering the difference between virtual machines and containers what containers are how they work and the value proposition they bring so with that being said let’s dive in now for those of you who didn’t know container technology gets its name from the shipping industry products get placed into standardized shipping containers which are designed to fit into the ship that accommodates the container’s standard size instead of having various sizes of packaging now by standardizing this process and keeping the items together the container can be moved as a unit and it costs less to do it this way as well the standardization allows for consistency when packing and moving the containers placing them on ships and docks as well as storage no matter where the container is it always stays the same size and the contents stay isolated from all the other containers that they are stacked with and so now before we get into the details of containers i wanted to cover how we got here and why so a great way to discuss containers is through their comparison to virtual machines now as we discussed in a previous lesson when it comes to vms the systems are virtualized through a hypervisor that sits on top of the underlying host infrastructure the underlying hardware is virtualized so that multiple operating system instances can run on the hardware each vm runs its own operating system and has access to virtualized resources representing the underlying hardware due to this process vms come with the cost of large overhead in cpu memory and disk as well can be very large due to the fact that each vm needs its own individual operating system there also lacks standardization between each vm making them unique due to the os configuration the software installed and the software libraries thus not making it very portable to be able to run in any environment now when dealing with containers things are run very differently the underlying host infrastructure is still there but instead of just using a hypervisor and abstracting the underlying hardware containerization takes it one step further and abstracts the operating system thus leaving the application with all of its dependencies in a neatly packaged standardized container this is done by installing the operating system on top of the host infrastructure and then a separate layer on top of the host operating system called the container engine now instead of having their own operating system the containers share the operating system kernel with other containers while operating independently running just the application code and the dependencies needed to run that application this allows each container to consume very little memory or disk making containers very lightweight efficient and portable containerized applications can start in seconds and many more instances of the application can fit onto the machine compared to a vm environment this container can now be brought over to other environments running docker and able to run without having the worries of running into issues of compatibility now although there are a few different container engines out there the one that has received the most popularity is docker and this is the engine that we will be referring to for the remainder of this course now a docker image is 
a collection or stack of layers that are created from sequential instructions in a dockerfile so each line in the dockerfile is run line by line and a unique read-only layer is written to the image what makes docker images unique is that each time you add another instruction in the dockerfile a new layer is created now going through a practical example here shown on the right is a dockerfile and we will be able to map each line of code to a layer shown on the docker image on the left the line marked from shows the base image that the image will be using the example shown here shows that the ubuntu image version 12.04 will be used next the run instruction is used which will perform a general update install apache2 and output a message to be displayed that is written to the index.html file next up is the working directories and these are the environment variables set by using an env instruction and this will help run the apache runtime the next layer is the expose instruction and this is used to expose the container’s port on 8080 and lastly the command layer is an instruction that is executing the apache web server from its executable path and so this is a great example of how a dockerfile is broken down line by line to create the layers of this image and so just as a note here each docker image starts with a base image as well each line in a dockerfile creates a new layer that is added to the image and finally all the layers in a docker image are read only and cannot be changed unless the dockerfile is adjusted to reflect that change so now how do we get from a docker image to a container well a running docker container is actually an instantiation of an image so containers using the same image are identical to each other in terms of their application code and runtime dependencies so i could use the same image for multiple copies of the same container that have different tasks what makes each individual container different is that running containers include a writable layer on top of the read-only content runtime changes including any writes and updates to data and files are saved in this read-write layer so in this example when using the command docker run fashionista a docker container will be instantiated from the docker image and a read-write layer is always added on top of the read-only layers when a container is created writing any necessary files that are needed for the application and so just as a note here docker containers are always created from docker images and containers can use the same image yet will always have a different read-write layer no matter the number of containers running on a given host so now when your images have been built you need a place to store them and so this is where a container registry comes into play now a container registry is a single place for you to store and manage docker images now when you create your dockerfile and then build your image you want to store that image in a central image repository whether it be a private one or a public one a popular public container registry is docker hub and this is a common registry where many open source images can be found including those used for the base layer images like the ubuntu example that i showed you earlier and so once you have your images in a container registry you need to be able to run them as containers so in order to run these containers you need docker hosts and these can consist of any machine running the docker engine and this could be your laptop a server or you can run them in provided hosted cloud environments
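and so to make that walkthrough a little more concrete here is a rough hedged sketch of the kind of dockerfile just described along with the build and run commands the exact file contents message text and image name are assumptions for illustration only and since apache listens on port 80 by default the run command below maps host port 8080 to container port 80

```bash
# hedged reconstruction of the dockerfile walked through above (contents are illustrative)
cat > Dockerfile <<'EOF'
# base image layer
FROM ubuntu:12.04

# single run layer: update packages, install apache2, write a message to index.html
RUN apt-get update && apt-get install -y apache2 && \
    echo "hello from the bow tie web server" > /var/www/index.html

# environment variables that help the apache runtime
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

# document the container port from the lesson's example
EXPOSE 8080

# run the apache web server from its executable path
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
EOF

# build an image from the dockerfile, then run a container from that image
docker build -t bowtie-web:latest .
docker run -d -p 8080:80 bowtie-web:latest
```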
now this may have been a refresher for some but for those of you who are new to containers i hope this has given you a lot more clarity on what containers are what they do and the value that they bring to any environment and so that’s pretty much all i wanted to cover in this short lesson on an introduction to containers so you can now mark this lesson as complete and let’s move on to the next one welcome back so now that you’ve gotten familiar with what containers are and how they work i wanted to dive into google cloud’s platform as a service offering for containers called google kubernetes engine also known for short as gke now although the exam takes a more operational perspective with regards to gke knowing the foundation of kubernetes and its different topics is a must in order to understand the abstractions that gke layers on top of regular kubernetes in this lesson i will be getting into key topics with regards to kubernetes and we’ll be touching on the architecture components and how they all work together to achieve the desired state for your containerized workloads now there’s a lot to get into so with that being said let’s dive in now before i can get into gke i need to set the stage by explaining what kubernetes is put simply kubernetes is an orchestration platform for containers which was invented by google and eventually open sourced it is now maintained by the cncf short for the cloud native computing foundation and has achieved incredible widespread adoption kubernetes provides a platform to automate schedule and run containers on clusters of physical or virtual machines thus eliminating many of the manual processes involved in deploying and scaling containerized applications kubernetes manages the containers that run the applications and ensures that there is no downtime in a way that you the user can define for example if you define that when a container goes down another container needs to start kubernetes would take care of that for you automatically and seamlessly kubernetes provides you with the framework to run distributed systems resiliently it takes care of scaling and failover for your application provides deployment patterns and allows you to manage your applications with tons of flexibility reliability and power it works with a range of container tools including docker now although this adoption was widespread it did come with its various challenges this included scaling ci/cd load balancing availability auto scaling networking rollback on faulty deployments and so much more so google cloud has since developed a managed offering for kubernetes providing a managed environment for deploying managing and scaling your containerized applications using google infrastructure the gke environment consists of compute engine instances grouped together to form a cluster and it provides all the same benefits as on-premises kubernetes yet has abstracted away the complexity of having to worry about the hardware and to top it off it has the benefits of advanced cluster management features that google cloud provides with things like cloud load balancing and being able to spread traffic amongst clusters and nodes node pools to designate subsets of nodes within a cluster for additional flexibility automatic scaling of your cluster’s node instance count and automatic upgrades for your cluster’s node software it also allows you to maintain node health and availability with node auto repair and takes care of logging and monitoring with google cloud’s operations suite for visibility into your cluster
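and so as a quick hedged example of what standing up one of these managed clusters looks like from the command line here is a sketch using placeholder names and a placeholder zone the exact flags you use will depend on your own project setup

```bash
# create a small gke cluster (cluster name, zone, and node count are placeholders)
gcloud container clusters create bowtie-cluster \
    --zone us-central1-a \
    --num-nodes 3

# fetch credentials so kubectl can talk to this cluster's control plane
gcloud container clusters get-credentials bowtie-cluster --zone us-central1-a
```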
so as you can see here gke holds a lot of benefits when it comes to running kubernetes in google cloud so i wanted to take a moment now to dive into the cluster architecture and help familiarize you with all the components involved in a cluster so a cluster is the foundation of google kubernetes engine and kubernetes as a whole the kubernetes objects that represent your containerized applications all run on top of the cluster in gke a cluster consists of at least one control plane and multiple worker machines called nodes the control plane and node machines run the kubernetes cluster the control plane is responsible for coordinating the entire cluster and this can include scheduling workloads like containerized applications and managing the workload’s life cycle scaling and upgrades the control plane also manages network and storage resources for those workloads and most importantly it manages the state of the cluster and makes sure it is at the desired state now the nodes are the worker machines that run your containerized applications and other workloads the nodes are compute engine vm instances that gke creates on your behalf when you create a cluster each node is managed from the control plane which receives updates on each node’s self-reported status a node also runs the services necessary to support the docker containers that make up your cluster’s workloads these include the docker runtime and the kubernetes node agent known as the kubelet which communicates with the control plane and is responsible for starting and running docker containers scheduled on that node now diving deeper into the architecture there are components within the control plane and nodes that you should familiarize yourself with as these components are what ties the cluster together and helps manage the orchestration as well as the state now the control plane is the unified endpoint for your cluster the control plane’s components make global decisions about the cluster for example scheduling as well as detecting and responding to cluster events all interactions with the cluster are done via kubernetes api calls and the control plane runs the kubernetes api server process to handle those requests you can make kubernetes api calls directly via http or grpc or indirectly by running commands from the kubernetes command line client called kubectl and of course you can interact with the ui in the cloud console the api server process is the hub for all communications for the cluster moving on the next component is the kube-scheduler and this is a component that discovers and assigns newly created pods to a node for them to run on so any new pods that are created will automatically be assigned to their appropriate node by the kube-scheduler taking into consideration any constraints that are in place next up is the kube-controller-manager and this is the component that runs controller processes and is responsible for things like noticing and responding when nodes go down maintaining the correct number of pods populating the services and pods as well as creating default accounts and api access tokens for new namespaces it is these controllers that will basically look to make changes to the cluster when the current state does not meet the desired state now when it comes to the cloud controller manager this is what embeds cloud-specific control logic
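and so since every interaction with the cluster goes through the api server here are a few hedged kubectl examples of what that looks like in practice assuming kubectl is already pointed at a cluster for example via gcloud container clusters get-credentials

```bash
# show the api server endpoint the client is currently talking to
kubectl cluster-info

# each of these is simply a kubernetes api call handled by the api server
kubectl get nodes                          # list the worker machines in the cluster
kubectl get pods --namespace kube-system   # list system pods running on those nodes
```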
the cloud controller manager lets you link your cluster into any cloud provider’s api and separates out the components that interact with that cloud platform from components that just interact with your cluster the cloud controller manager only runs controllers that are specific to your cloud provider in this case google cloud and lastly we have etcd and this component is responsible for storing the state of the cluster etcd is a consistent and highly available key value store that only interacts with the api server it saves all the configuration data along with what nodes are part of the cluster and what pods they are running so now the control plane needs a way to interact with the nodes of the cluster and thus the nodes have components themselves for this communication to occur the first component is called the kubelet and this is an agent that runs on each node in the cluster and communicates with the control plane it is responsible for starting and running docker containers scheduled on that node it takes a set of pod specs that are provided to it and ensures that the containers described in those pod specs are running and healthy and i will be diving into pod specs in a later lesson next up is kube-proxy and this is the component that maintains network connectivity to the pods in a cluster and lastly the container runtime is the software that is responsible for running containers kubernetes supports container runtimes like docker and containerd and so these are the main components in a cluster covering the control plane and nodes with regards to communication within the cluster now before i end this lesson there is one more topic i wanted to touch on with regards to the architecture of a gke cluster and that is the abstraction that happens and what exactly gke manages with regards to kubernetes well gke manages all the control plane components the endpoint exposes the kubernetes api server that kubectl uses to communicate with your cluster control plane the endpoint ip is displayed in cloud console and this ip will allow you to interact with the cluster when you run the command gcloud container clusters get-credentials you see that the command gets the cluster endpoint as part of updating kubeconfig an ip address for the cluster is then exposed to interact with and gke is responsible for provisioning and managing all the infrastructure that is needed for the control plane gke also automates the kubernetes nodes by launching them as compute engine vms under the hood but still allows the user to change the machine type and access upgrade options by default google kubernetes engine clusters and node pools are upgraded automatically by google but you can also control when auto upgrades can and cannot occur by configuring maintenance windows and exclusions and just as a note a cluster’s control plane and nodes do not necessarily run the same version at all times and i will be digging more into that in a later lesson and so i know this is a lot of theory to take in but it is as i said before a necessity to understanding kubernetes and gke and as we go further along into kubernetes and get into demos i promise that this
will start to make a lot more sense and you will start becoming more comfortable with gke and the underlying components of kubernetes knowing kubernetes is a must when working in any cloud environment as it is a popular and growing technology that is not slowing down so knowing gke will put you in a really good position for your career as an engineer in google cloud as well will give you a leg up on diving into other cloud vendors implementation of kubernetes and so that’s pretty much all i wanted to cover when it comes to google kubernetes engine and kubernetes so you can now mark this lesson as complete and let’s move on to the next one welcome back in this lesson i will be covering cluster and node management in gke as it refers to choosing different cluster types for your workloads cluster versions node pools as well as upgrades and the many different options to choose from it is good to familiarize yourself with these options as they may be the deciding factor of having to keep your workloads highly available and your tolerance to risk within your environment so with that being said let’s dive in now in the last lesson we touched on nodes and how they are the workers for the kubernetes cluster so now that you are familiar with nodes i wanted to touch on a concept that builds on it called node pools now a node pool is a group of nodes within a cluster that all have the same configuration and using node config specification to achieve this a node pool can also contain one or multiple nodes when you first create a cluster the number and type of nodes that you specify becomes the default node pool as shown here in the diagram then you can add additional custom node pools of different sizes and types to your cluster all nodes in any given node pool are identical to one another now custom node pools are really useful when you need to schedule pods that require more resources than others such as more memory more disk space or even different machine types you can create upgrade and delete node pools individually without affecting the whole cluster and just as a note you cannot configure a single node in any node pool any configuration changes affect all nodes in the node pool and by default all new node pools run the latest stable version of kubernetes existing node pools can be manually upgraded or automatically upgraded you can also run multiple kubernetes node versions on each node pool in your cluster update each node pool independently and target different node pools for specific deployments in that node now with gke you can create a cluster tailored to your availability requirements and your budget the types of available clusters include zonal both single zone or multi-zonal and regional zonal clusters have a single control plane in a single zone depending on what kind of availability you want you can distribute your nodes for your zonal cluster in a single zone or in multiple zones now when you decide to deploy a single zone cluster it again has a single control plane running in one zone this control plane manages workloads on nodes running in the same zone a multi-zonal cluster on the other hand has a single replica of the control plane running in a single zone and has nodes running in multiple zones during an upgrade of the cluster or an outage of the zone where the control plane runs workloads still run however the cluster its nodes and its workloads cannot be configured until the control plane is available multi-zonal clusters are designed to balance availability and cost for 
consistent workloads and just as a note the same number of nodes will be deployed to each selected zone and may cost you more than budgeted so please be aware and of course when you’re looking to achieve high availability for your cluster regional clusters are always the way to go a regional cluster has multiple replicas of the control plane running in multiple zones within a given region nodes also run in each zone where a replica of the control plane runs because a regional cluster replicates the control plane and nodes it consumes more compute engine resources than a similar single zone or multi-zonal cluster the same number of nodes will be deployed to each selected zone and the default when selecting regional clusters is three zones now if you’re dealing with more sensitive workloads that require more strict guidelines private clusters give you the ability to isolate nodes from having inbound and outbound connectivity to the public internet this isolation is achieved as the nodes have internal ip addresses only if you want to provide outbound internet access for certain private nodes you can use cloudnat or manage your own nat gateway by default private google access is enabled in private clusters and their workloads with limited outbound access to google cloud apis and services over google’s private network in private clusters the control plane’s vpc network is connected to your clusters vpc network with vpc network peering your vpc network contains the cluster nodes and a separate google cloud vpc network contains your cluster’s control plane the control plane’s vpc network is located in a project controlled by google traffic between nodes and the control plane is routed entirely using internal ip addresses the control plane for a private cluster has a private endpoint in addition to a public endpoint the control plane for a non-private cluster only has a public endpoint the private endpoint is an internal ip address in the control plane’s vpc network the public endpoint is the external ip address of the control plane and you can control access to this endpoint using authorized networks or you can disable access to the public endpoint as shown here in the diagram you can disable the public endpoint and connect to your network using an internal ip address using cloud interconnect or cloud vpn and you always have the option of enabling or disabling this public endpoint now when you create a cluster you can choose the cluster specific kubernetes version or you can mix the versions for flexibility on features either way it is always recommended that you enable auto upgrade for the cluster and its nodes now when you have auto upgrade enabled you are given the choice to choose from what are called release channels when you enroll a new cluster in a release channel google automatically manages the version and upgrade cadence for the cluster and its node pools all channels offer supported releases of gke and are considered in general availability you can choose from three different release channels for automatic management of your cluster’s version and upgrade cadence as shown here the available release channels are rapid regular and stable release channels the rapid release channel gets the latest kubernetes release as early as possible and be able to use new gka features the moment that they go into general availability with the regular release channel you have access to gke and kubernetes features reasonably soon after they are released but on a version that has been qualified two to 
three months after releasing in the rapid release channel and finally we have the stable release channel where stability is prioritized over new functionality changes and new versions in this channel are rolled out last after being validated two to three months in the regular release channel and so if you’re looking for more direct management of your cluster’s version choose a static version when you enroll a cluster in a release channel that cluster is upgraded automatically when a new version is available in that channel now if you do not use a release channel or choose a cluster version the current default version is used the default version is selected based on usage and real world performance and is changed regularly while the default version is the most mature one other versions being made available are generally available versions that have passed internal testing and qualification changes to the default version are announced in a release note now if you know that you need to use a specific supported version of kubernetes for a given workload you can specify it when creating the cluster if you do not need to control the specific patch version you use consider enrolling your cluster in a release channel instead of managing its version directly now when it comes to upgrading the cluster please be aware that the control plane and nodes do not always run the same version at all times as well a control plane is always upgraded before its nodes when it comes to zonal clusters you cannot launch or edit workloads during that upgrade and with regional clusters each control plane is upgraded one by one as well with control planes auto upgrade is enabled by default and this is google cloud’s best practice now again if you choose you can do a manual upgrade but you cannot upgrade the control plane more than one minor version at a time so please be aware as well with any cluster upgrades maintenance windows and exclusions are available and so this way you can choose the best times for your upgrades and so like cluster upgrades by default a cluster’s nodes have auto upgrade enabled and it is recommended that you do not disable it again this is best practice by google cloud and again like the cluster upgrades a manual upgrade is available and maintenance windows and exclusions are available for all of these upgrades now when a node pool is upgraded gke upgrades one node at a time while a node is being upgraded gke stops scheduling new pods onto it and attempts to schedule its running pods onto other nodes the node is then recreated at the new version but using the same name as before this is similar to other events that recreate the node such as enabling or disabling a feature on the node pool and the upgrade is only complete when all nodes have been recreated and the cluster is in the desired state when a newly upgraded node registers with the control plane gke marks the node as schedulable upgrading a node pool may disrupt workloads running in that pool and so in order to avoid this you can create a new node pool with the desired version and migrate the workload then after migration you can delete the old node pool now surge upgrades let you control the number of nodes gke can upgrade at a time and control how disruptive upgrades are to your workloads you can change how many nodes gke attempts to upgrade at once by changing the surge upgrade parameters on a node pool surge upgrades reduce disruption to your workloads during cluster maintenance and also allow you to control the number of nodes upgraded in parallel
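and so as a hedged sketch of what this looks like from the command line here are a few examples using placeholder cluster node pool and zone names the last command sets the two surge settings that are explained next

```bash
# keep node auto upgrade enabled on a node pool (names and zone are placeholders)
gcloud container node-pools update default-pool \
    --cluster bowtie-cluster --zone us-central1-a \
    --enable-autoupgrade

# give gke a daily maintenance window starting at 04:00 utc for upgrades
gcloud container clusters update bowtie-cluster \
    --zone us-central1-a \
    --maintenance-window 04:00

# tune surge upgrade behavior: one extra node at a time, no nodes unavailable
gcloud container node-pools update default-pool \
    --cluster bowtie-cluster --zone us-central1-a \
    --max-surge-upgrade 1 --max-unavailable-upgrade 0
```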
surge upgrades also work with the cluster auto scaler to prevent changes to nodes that are being upgraded now surge upgrade behavior is determined by two settings max surge upgrade and max unavailable upgrade now with max surge upgrade this is the number of additional nodes that can be added to the no pool during an upgrade increasing max surge upgrade raises the number of nodes that can be upgraded simultaneously and when it comes to the max unavailable upgrade this is the number of nodes that can be simultaneously unavailable during an upgrade increasing max unavailable upgrade raises the number of nodes that can be upgraded in parallel so with max surge upgrade the higher the number the more parallel upgrades which will end up costing you more money with max unavailable upgrade the higher the number the more disruptive it is and so the more risk you are taking and so during upgrades gke brings down at most the sum of the max surge upgrade added with the max unavailable upgrade so as you can see here there are a slew of options when it comes to deciding on the type of cluster you want as well as the type of upgrades that are available along with when you want them to occur and so your deciding factor in the end will be the workload that you are running and your risk tolerance and this will play a big factor in keeping up time for your cluster as well as saving money in any type of environment and so that’s pretty much all i wanted to cover when it comes to gke cluster and node management so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i will be diving into some more theory within kubernetes and gke this time touching on objects and how objects are managed pods are only one type of object but there are many other parts that are involved in the management of these objects and this is what this lesson is set out to teach you now there’s quite a bit to cover here so with that being said let’s dive in now kubernetes objects are persistent entities in kubernetes kubernetes uses these entities to represent the state of your cluster for example it can describe things like what containerized applications are running and on which nodes and what resources are available to those applications a kubernetes object is a record of intent once you create the object kubernetes will constantly work to ensure that object exists by creating an object you’re effectively telling kubernetes what you want your cluster’s workload to look like and this is your cluster’s desired state and you’ve heard me speak about this many times before and this is what i was referring to now almost every kubernetes object includes two nested object fields that govern the object’s configuration the object spec and the object’s status for objects that have a spec you have to set this when you create the object providing a description of the characteristics you want the resource to have its desired state the status describes the current state of the object supplied and updated by kubernetes and its components the kubernetes control plane continually and actively manages every object’s actual state to match the desired state you supplied now each object in your cluster has a name that is unique for that type of resource every kubernetes object also has a uid that is unique across your whole cluster only one object of a given kind can have a given name at a time however if you delete the object you can make a new object with that same name every object created over 
the whole lifetime of a kubernetes cluster has a distinct uid these distinct uids are also known as uuids which we discussed earlier on in the course now when creating updating or deleting objects in kubernetes this is done through the use of a manifest file where you would specify the desired state of an object that kubernetes will maintain when you apply the manifest each configuration file can contain multiple manifests and it is common practice to do so when possible a manifest file is defined in the form of a yaml file or a json file and it is recommended to use yaml now in each yaml file for the kubernetes object that you want to create there are some required values that need to be set the first one is the api version and this defines which version of the kubernetes api you’re using to create this object the kind described in this example as a pod is the kind of object you want to create next up is the metadata and this is the data that helps uniquely identify the object including a string name a uid and an optional namespace and the last required value is the spec and this is what state you desire for the object and the spec in this example is a container by the name of bow tie dash web server and is to be built with the latest nginx web server image as well as having port 80 open on the container now when it comes to objects pods are the smallest most basic deployable objects in kubernetes a pod represents a single instance of a running process in your cluster pods contain one or more containers such as docker containers and when a pod runs multiple containers the containers are managed as a single entity and share the pod’s resources which include shared networking and shared storage for their containers generally one pod is meant to run a single instance of an application on your cluster which is self-contained and isolated now although a pod is meant to run a single instance of your application on your cluster it is not recommended to create individual pods directly instead you generally create a set of identical pods called replicas to run your application a set of replicated pods are created and managed by a controller such as a deployment controllers manage the life cycle of their pods as well as performing horizontal scaling changing the number of pods as necessary now although you might occasionally interact with pods directly to debug troubleshoot or inspect them it’s recommended that you use a controller to manage your pods and so once your pods are created they are then run on nodes in your cluster which we discussed earlier the pod will then remain on its node until its process is complete the pod is deleted the pod is evicted from the node due to lack of resources or the node fails if a node fails pods on the node are automatically scheduled for deletion
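and so to tie the required manifest fields back to the pod example described a moment ago here is a hedged sketch of what that manifest might look like the name and image follow the lesson’s example where given and the label is a placeholder

```bash
# minimal pod manifest sketch applied with kubectl (illustrative only)
kubectl apply -f - <<'EOF'
apiVersion: v1              # which version of the kubernetes api creates this object
kind: Pod                   # the kind of object being created
metadata:
  name: bowtie-web-server   # string name that helps uniquely identify the object
  labels:
    app: bowtie-web         # optional key value label (placeholder)
spec:
  containers:
  - name: bowtie-web-server
    image: nginx:latest     # latest nginx web server image
    ports:
    - containerPort: 80     # port 80 open on the container
EOF
```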
now a single gke cluster should be able to satisfy the needs of multiple users or groups of users and kubernetes namespaces help different projects teams or customers to share a kubernetes cluster you can think of a namespace as a virtual cluster inside of your kubernetes cluster and you can have multiple namespaces logically isolated from each other they can help you and your teams with organization and security now you can name your namespaces whatever you’d like but kubernetes starts with four initial namespaces the first one is the default namespace and this is for objects with no other namespace so when creating new objects without a namespace your object will automatically be assigned to this namespace kube-system is the next one and this is for objects created by kubernetes kube-public is created automatically and is readable by all users but is mostly reserved for cluster usage in case some resources should be visible and readable publicly throughout the whole cluster and finally kube-node-lease is the namespace for the lease objects associated with each node which improves the performance of the node heartbeats as the cluster scales and so like most resources in google cloud labels are key value pairs that help you organize your resources in this case kubernetes objects labels can be attached to objects at creation time and can be added or modified at any time each object can have a set of key value labels defined and each key must be unique for a given object and labels can be found under metadata in your manifest file and so the one thing to remember about pods is that they are ephemeral they are not designed to run forever and when a pod is terminated it cannot be brought back in general pods do not disappear until they are deleted by a user or by a controller pods do not heal or repair themselves for example if a pod is scheduled on a node which later fails the pod is deleted as well if a pod is evicted from a node for any reason the pod does not replace itself and so here is a diagram of a pod life cycle that shows the different phases of its running time to give you some better clarity on its ephemeral nature when first creating the pod the pod will start in pending and this is the pod’s initial phase where it is waiting for one or more of the containers to be set up and made ready to run this includes the time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network once the pod has completed the pending phase it moves on to be scheduled and once it is scheduled it will move into the running phase and this is the phase where the pod has been bound to a node and all of the containers have been created the running phase has at least one container in the pod running or in the process of starting or restarting and once the workload is complete the pod will move into the succeeded phase and this is where all the containers in the pod have terminated in success and will not be restarted now if all the containers in the pod have not terminated successfully the pod will move into the failed phase and this is where all the containers in the pod have terminated and at least one container has terminated in failure now there’s one more phase in the pod life cycle that i wanted to bring up which is the unknown phase and this is when the state of the pod could not be obtained this phase typically occurs due to an error in communicating with the node where the pod should be running so now when you’re creating pods using a deployment is a common way to do this a deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive deployments help ensure that one or more instances of your application are available to serve user requests deployments use a pod template which contains a specification for its pods the pod specification determines what each pod should look like for instance what applications should run inside its containers which volumes the pods should mount its labels and more and so when a deployment’s pod template is changed new pods are automatically created one at a time now i wanted to quickly bring up replica sets for just a moment you’ll hear about replica sets
and i wanted to make sure that i covered it replica sets ensures that a specified number of pod replicas are running at any given time however a deployment is a higher level concept that manages replica sets and provides updates to pods along with other features and so using deployments is recommended over using replica sets unless your workload requires it and i will be including a link to replica sets in the lesson text so speaking of workloads in kubernetes workloads are objects that set deployment rules four pods based on these rules kubernetes performs the deployment and updates the workload with the current state of the application workloads let you define the rules for application scheduling scaling and upgrading now deployments which we just discussed is a type of workload and as we’ve seen a deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive deployments are best used for stateless applications another type of workload is stateful sets and in contrast to deployments these are great for when your application needs to maintain its identity and store data so basically any application that requires some sort of persistent storage daemon sets is another common workload that ensures every node in the cluster runs a copy of that pod and this is for use cases where you’re collecting logs or monitoring node performance now jobs is a workload that launches one or more pods and ensures that a specified number of them successfully terminate jobs are best used to run a finite task to completion as opposed to managing an ongoing desired application state and cron jobs are similar to jobs however cron jobs runs to completion on a cron-based schedule and so the last workload that i wanted to cover are config maps and these store general configuration information and so after you upload a config map any workload can reference it as either an environment variable or a volume mount and so just as a note config maps are not meant to store sensitive data if you’re planning to do this please use secrets now i know this lesson has been extremely heavy in theory but these are fundamental concepts to know when dealing with kubernetes and gke as well as the objects that it supports so i recommend that if you need to go back and review this lesson if things aren’t making sense so that you can better understand it as these concepts all tie in together and will come up in the exam and so that’s pretty much all i wanted to cover in this lesson on pods and object management within gke so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i’m going to be diving into kubernetes services now services are a major networking component when it comes to working in kubernetes and can play a major factor when it comes to deciding on how you want to route your traffic within your kubernetes cluster as well in my experience services show up on the exam and so an understanding of how they work and the different types to use are essential to understanding the big picture of kubernetes this lesson will cover an overview on what services are what they do and the different types that are available along with their use cases now there’s a lot to cover here so with that being said let’s dive in now as i had discussed earlier kubernetes pods are ephemeral pods are created and destroyed to match the state of your cluster so these resources are never permanent a perfect example of this is by 
using a deployment object so you can create and destroy pods dynamically now when it comes to networking in kubernetes each pod gets its own ip address however in a deployment a pod that is running once destroyed will be recreated with a new ip address and there is no real way to keep track of these i p addresses for communication as they change very frequently and this is where services come into play now a service is an abstraction in the sense that it is not a process that listens on some network interface a service can be defined as a logical set of pods an abstraction on top of the pod which provides a single persistent ip address and dns name by which pods can be accessed it allows for routing external traffic into your kubernetes cluster and used inside your cluster for more intelligent routing with services it is also very easy to manage load balancing configuration for traffic between replicas it helps pods scale quickly and easily as the service will automatically handle the recreation of pods and their new ip addresses the main goal of services in kubernetes is to provide persistent access to its pods without the necessity to look for a pod’s ip each time when the pod is recreated and again services also allow for external access from users to the applications inside the cluster without having to know the ip address of the individual pod in order to reach that application now in order for a service to route traffic to the correct pod in the cluster there are some fields in the manifest file that will help determine the end points on where traffic should be routed shown here on the right is the deployment manifest for reference and on the left is the services manifest now as you can see here in the service manifest on the left the kind is clearly defined as service under metadata is the name of the service and this will be the dns name of the service when it is created so when it comes to the spec there is a field here called a selector and this is what defines what pods should be included in the service and it is the labels under the selector that define which pods and labels are what we discussed in the last lesson as arbitrary key value pairs so any pod with these matching labels is what will be added to the service as shown here in the deployment file this workload will be a part of the service and its labels match that of the selector in the services file for type this is the type of service that you will want to use in this example type cluster ip is used but depending on the use case you have a few different ones to choose from now at the bottom here is a list of port configurations protocol being the network protocol to use with the port port being the port that incoming traffic goes to and finally the target port which is the port on the pod that traffic should be sent to and this will make more sense as we go through the upcoming diagrams so touching on selectors and labels for a moment kubernetes has a very unique way of routing traffic and when it comes to services it’s not any different services select pods based on their labels now when a selector request is made to the service it selects all pods in the cluster matching the key value pair under the selector it chooses one of the pods if there are more than one with the same key value pair and forwards the network request to it and so here in this example you can see that the selector specified for the service has a key value pair of app inventory you can see the pod on node 1 on the left holds the label of app 
inventory as well which matches the key value pair of the selector and so traffic will get routed to that pod because of it if you look at the label for the pod in node 2 on the right the label does not match that of the selector and so it will not route traffic to that pod and so to sum it up the label on the pod matching the selector in the service determines where the network request will get routed to and so now i will be going through the many different service types that are available for routing network traffic within gke starting with cluster ip now a cluster ip service is the default kubernetes service it gives you a service inside your cluster that other apps inside your cluster can access the service is not exposed outside the cluster but can be addressed from within the cluster when you create a service of type cluster ip kubernetes creates a stable ip address that is accessible from nodes in the cluster clients in the cluster call the service by using the cluster ip address and the port value specified in the port field of the service manifest the request is forwarded to one of the member pods on the port specified in the target port field and just as a note this ip address is stable for the lifetime of the service so for this example a client calls the service at 10.176 on tcp port 80. the request is forwarded to one of the member pods on tcp port 80. note that the member pod must have a container that is listening on tcp port 80. if there is no container listening on port 80 clients will see a message like fail to connect or this site can’t be reached think of the case when you have a dns record that you don’t want to change and you want the name to resolve to the same ip address or you merely want a static ip address for your workload this would be a great use case for the use of the cluster ip service now although the service is not accessible by network requests outside of the cluster if you need to connect to the service you can still connect to it with the cloud sdk or cloud shell by using the exposed ip address of the cluster and so i wanted to take a moment to show you what a cluster ip manifest actually looks like and i will be going through the manifest for each service type for you to familiarize yourself with we first have the name of the service which is cluster ip dash service we then have the label used for the selector which is the key value pair of app inventory and then we have the service type which is cluster ip and we have the port number exposed internally in the cluster which is port 80 along with the target port that containers are listening on which again is port 80. 
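and so pulling the service manifest and the deployment manifest from the last couple of slides together here is a hedged sketch of both the service name selector and ports follow the lesson’s example while the image and replica count are placeholders

```bash
# a deployment whose pods carry the app: inventory label, plus the cluster ip service
# that selects those pods (illustrative sketch)
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory        # pods carry this label...
    spec:
      containers:
      - name: inventory
        image: nginx:latest   # placeholder image listening on port 80
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cluster-ip-service
spec:
  type: ClusterIP             # default service type, reachable only inside the cluster
  selector:
    app: inventory            # ...and the service selects pods by that same label
  ports:
  - protocol: TCP
    port: 80                  # port exposed internally in the cluster
    targetPort: 80            # port the containers are listening on
EOF
```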
and so the next service type we have is node port so when you create a service of type node port you specify a node port value the node port is a static port and is chosen from a pre-configured range between 30000 and 32767 you can specify your own value within this range but please note that any value outside of this range will not be accepted by kubernetes as well if you do not choose a value a random value within the specified range will be assigned once this port has been assigned to the service then the service is accessible by using the ip address of any node along with the node port value the service is then exposed on a port on every node in the cluster the service can then be accessed externally at the node ip along with the node port when using node port services you must make sure that the selected port is not already open on your nodes and so just as a note the node port type is an extension of the cluster ip type so a service of type node port naturally has a cluster ip address and so this method isn’t very secure as it opens up each node to external entry as well this method relies on knowing the ip addresses of the nodes which could change at any time and so going through the manifest of a node port type service we start off with the name of the service which is node port dash service the label used for the selector which uses the key value pair of app inventory the type which is node port and notice the case sensitivity here which you will find in most service types along with the port number exposed internally in the cluster which is port 80 and again the port that the containers are listening on which is the target port which is port 80 as well and lastly and most importantly we have the node port value which is marked as you saw in the diagram earlier as port 32002 the next service type we have up is load balancer and this service is exposed as a load balancer in the cluster load balancer services will create an internal kubernetes service that is connected to a cloud provider’s load balancer in this case google cloud this will create a static publicly addressable ip address and a dns name that can be used to access your cluster from an external source the load balancer type is an extension of the node port type so a service of type load balancer naturally has a cluster ip address if you want to directly expose a service this is the default method all traffic on the port you specify will be forwarded to the service there is no filtering or routing and it means you can send many different types of traffic to it like http https tcp or udp and more the downside here is that for each service you expose with a load balancer you pay for that load balancer and so you can really rack up your bill if you’re using multiple load balancers and shown here is the manifest for type load balancer it shows the name of the service load balancer dash service the label which is used for the selector which is the key value pair of app inventory the service type which is load balancer again notice the case sensitivity along with the port and the target port which are both port 80.
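and so here is a hedged sketch of the node port and load balancer manifests just walked through the nodePort value of 32002 and the app inventory selector come from the lesson’s example and everything else is a placeholder

```bash
# node port and load balancer service sketches applied together (illustrative only)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort            # note the case sensitivity on service types
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 80                # port exposed internally in the cluster
    targetPort: 80          # port the containers are listening on
    nodePort: 32002         # static port opened on every node (30000-32767)
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer-service
spec:
  type: LoadBalancer        # provisions a google cloud load balancer with a public ip
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
```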
and so this is the end of part one of this lesson it was getting a bit long so i decided to break it up this would be a great opportunity for you to get up and have a stretch get yourself a coffee or tea and whenever you’re ready part two will be starting immediately from the end of part one so go ahead and mark this as complete and i’ll see you in the next one [Music] welcome back this is part two of the kubernetes services lesson and we’re going to continue immediately from the end of part one so whenever you’re ready let’s dive in and so the next service type we have is multiport services now for some services there is the need to expose more than one port kubernetes lets you configure multiple port definitions on a service object so when using multiple ports for a service you must give all your ports names and if you have multiple service ports these names must be unique in this example if a client calls the service at 10.176.1 on tcp port 80 the request is forwarded to a member pod on tcp port 80 on either node 1 or node 2. but if a client calls the service at 10.176.133.7 on tcp port 9752 the request is forwarded to the pod on tcp port 9752 that resides on node 1. each member pod must have a container listening on tcp port 80 and a container listening on tcp port 9752 this could be a single container with two threads or two containers running in the same pod and of course as shown here is a manifest showing the multi-port services the name of the service the label used for the selector as well as the service type the port node exposed internally for each separate workload as well as the port that containers are listening on for each workload as well and as you saw before nginx was using target port 80 where appy was using port 9752 moving on to another service type is external name now a service of type external name provides an internal alias for an external dns name internal clients make requests using the internal dns name and the requests are redirected to the external name when you create a service kubernetes creates a dns name that internal clients can use to call the service in this example the internal dns name is bowtie.sql when an internal client makes a request to the internal dns name of bowtie.sql the request gets redirected to bowtie.sql2 dot bow tie inc dot private the external name service type is a bit different than other service types as it’s not associated with a set of pods or an ip address it is a mapping from an internal dns name to an external dns name this service does a simple cname redirection and is a great use case for any external service that resides outside of your cluster and again here is a view of a manifest for type external name here showing the internal dns name along with the external dns name redirect and moving on to the last service type we have the headless service type now sometimes you don’t need or want low balancing and a single service ip in this case you can create headless services by specifying none as the service type in the manifest file this option also allows you to choose other service discovery mechanisms without being tied to kubernetes implementation applications can still use a self-registration pattern with this service and so a great use case for this is when you don’t need any low balancing or routing you only need the service to patch the request to the back end pod no ips needed headless service is typically used with stateful sets where the name of the pods are fixed this is useful in situations like when you’re 
setting up a mysql cluster where you need to know the name of the master and so here is a manifest for the headless service again the service type is marked as none and so to sum it up kubernetes services provides the interfaces through which pods can communicate with each other they also act as the main gateway for your application services use selectors to identify which pods they should control they expose an ip address and a port that is not necessarily the same port at which the pod is listening and services can expose more than one port and can also route traffic to other services external ip addresses or dns names services make it really easy to create network services in kubernetes each service can be backed with as many pods as needed without having to make your code aware of how each service is backed also please note that there are many other features and use cases within the services that have been mentioned that i’ve not brought up i will also include some links in the lesson text for those who are interested in diving deeper into services this lesson was to merely summarize the different service types and knowing these service types will put you in a great position on the exam for any questions that cover services within gke now i know this has been another lesson that’s been extremely heavy in theory and has been a tremendous amount to take in but not to worry next up is a demo that will put all this theory into practice and we’ll be going ahead and building a cluster along with touching on much of the components discussed within the past few lessons and so that’s pretty much all i wanted to cover when it comes to kubernetes service types so you can now mark this lesson as complete and whenever you’re ready join me in the console [Music] welcome back in this lesson i’ll be going over ingress for gke an object within gke that defines rules for routing traffic to specific services ingress is a well-known topic that comes up in the exam as well as being a common resource that is used in many gke clusters that you will see in most environments something that you will get very familiar with while diving deeper into more complex environments so whenever you’re ready let’s dive in now in gke an ingress object defines rules for routing http and https traffic to applications running in a cluster an ingress object is associated with one or more service objects each of which is associated with a set of pods when you create an ingress object the gke ingress controller creates a google cloud http or https load balancer and configures it according to the information in the ingress and its associated services gke ingress is a built-in and managed ingress controller this controller implements ingress resources as google cloud load balancers for http and https workloads in gke also the load balancer is given a stable ip address that you can associate with a domain name each external http and https load balancer or internal http or https load balancer uses a single url map which references one or more back-end services one back-end service corresponds to each service referenced by the ingress in this example assume that you have associated the load balancers ip address with the domain name bowtieinc.co when a client sends a request to bowtieinc.co the request is routed to a kubernetes service named products on port 80. 
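and so here is a hedged sketch of the kind of ingress being described the host paths service names and ports follow the example used in this lesson including the second path that is covered next and the ingress name itself is just a placeholder

```bash
# ingress sketch: the gke ingress controller turns this into an http(s) load balancer
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bowtie-ingress        # placeholder name
spec:
  rules:
  - host: bowtieinc.co
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: products    # requests to bowtieinc.co go to the products service
            port:
              number: 80
      - path: /discontinued
        pathType: Prefix
        backend:
          service:
            name: discontinued
            port:
              number: 21337
EOF
```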
now i wanted to touch on network endpoint groups or negs for short for just a second now this is a configuration object that specifies a group of back-end endpoints or services negs are useful for container native load balancing where each container can be represented as an endpoint to the load balancer the negs are used to track pod endpoints dynamically so the google load balancer can route traffic to its appropriate back ends so traffic is load balanced from the load balancer directly to the pod ip as opposed to traversing the vm ip and kube-proxy networking in these conditions services will be annotated automatically indicating that a neg should be created to mirror the pod ips within the service the neg is what allows compute engine load balancers to communicate directly with pods the diagram shown here is the ingress to compute engine resource mappings of the manifest that you saw earlier where the gke ingress controller deploys and manages compute engine load balancer resources based on the ingress resources that are deployed in the cluster now touching on health checks for just a minute if there are no specified health check parameters for a corresponding service using a backendconfig custom resource definition a set of default and inferred parameters are used health check parameters for a back-end service should be explicitly defined by creating a backendconfig custom resource definition for the service and this should be done if you’re using anthos a backendconfig custom resource definition should also be used if you have more than one container in the serving pods as well as if you need control over the port that’s used for the load balancer’s health checks now you can specify the backend service’s health check parameters using the healthcheck parameter of a backendconfig custom resource definition referenced by the corresponding service this gives you more flexibility and control over health checks for a google cloud external http or https load balancer or internal http or https load balancer created by an ingress and lastly i wanted to touch on ssl certificates and there are three ways to provide ssl certificates to an http or https load balancer the first way is google managed certificates and these are provisioned deployed renewed and managed for your domains and just as a note managed
certificates do not support wildcard domains the second way to provide ssl certificates is through self-managed certificates that are shared with google cloud you can provision your own ssl certificate and create a certificate resource in your google cloud project you can then list the certificate resource in an annotation on an ingress to create an http or https load balancer that uses the certificate and the last way to provide ssl certificates is through self-managed certificates as secret resources so you can provision your own ssl certificate and create a secret to hold it you can then refer to the secret as an ingress specification to create an http or https load balancer that uses this certificate and just as a note you can specify multiple certificates in an ingress manifest the load balancer chooses a certificate if the common name in the certificate matches the host name used in the request and so that pretty much covers all the main topics in this short lesson on ingress for gke so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this lesson i’ll be going over gke storage options now kubernetes currently offers a slew of different storage options and is only enhanced by the added features available in google cloud for gke we’ll also be getting into the different abstractions that kubernetes offers to manage storage and how they can be used for different types of workloads now there’s quite a bit to go over here so with that being said let’s dive in now as i stated before there are several storage options for applications running on gke the choices vary in terms of flexibility and ease of use google cloud offers several storage options that can be used for your specific workload kubernetes also provides storage abstractions which i will be getting into in just a bit the easiest storage options are google cloud’s managed storage products if you need to connect a database to your cluster you can consider using cloud sql datastore or cloud spanner and when it comes to object storage cloud storage would be an excellent option to fill the gap file store is a great option for when your application requires managed network attached storage and if your application requires block storage the best option is to use persistent disks and can be provisioned manually or provisioned dynamically through kubernetes now i wanted to first start off with kubernetes storage abstractions but in order to understand kubernetes storage abstractions i wanted to take a moment to explain how storage is mounted in the concept of docker now docker has a concept of volumes though it is somewhat looser and less managed than kubernetes a docker volume is a directory on disk or in another container docker provides volume drivers but the functionality is somewhat limited a docker container has a writable layer and this is where the data is stored by default making the data ephemeral and so data is not persisted when the container is removed so storing data inside a container is not always recommended now there are three ways to mount data inside a docker container the first way is a docker volume and sits inside the docker area within the host’s file system and can be shared amongst other containers this volume is a docker object and is decoupled from the container they can be attached and shared across multiple containers as well bind mounting is the second way to mount data and is coming directly from the host’s file system bind mounts are great for local 
application development yet cannot be shared across containers and the last way to mount data is by using tmpfs and is stored in the host’s memory this way is great for ephemeral data and increases performance as it no longer lies in the container’s writable layer now with kubernetes storage abstractions file system and block based storage are provided to your pods but are different than docker in nature volumes are the basic storage unit in kubernetes that decouples the storage from the container and ties it to the pod and not the container like in docker a regular volume simply called volume is basically a directory that the containers in a pod have access to the particular volume type used is what will determine its purpose some volume types are backed by ephemeral storage like empty dir config map and secrets and these volumes do not persist after the pod ceases to exist volumes are useful for caching temporary information sharing files between containers or to load data into a pod other volume types are backed by durable storage and persist beyond the lifetime of a pod like persistent volumes and persistent volume claims a persistent volume is a cluster resource that pods can use for durable storage a persistent volume claim can be used to dynamically provision a persistent volume backed by persistent disks persistent volume claims can also be used to provision other types of backing storage like nfs and i will be getting more into persistent volumes and persistent volume claims in just a bit now as you saw in docker on disk files in a container are the simplest place for an application to write data but files are lost when the container crashes or stops for any other reason as well as being inaccessible to other containers running in the same pod in kubernetes the volume source declared in the pod specification determines how the directory is created the storage medium used and the directory’s initial contents a pod specifies what volumes it contains and the path where containers mount the volume ephemeral volume types live the same amount of time as the pods they are connected to these volumes are created when the pod is created and persist through container restarts only when the pod terminates or is deleted are the volumes terminated as well other volume types are interfaces to durable storage that exist independently of a pod unlike ephemeral volumes data in a volume backed by durable storage is preserved when the pod is removed the volume is merely unmounted and the data can be handed off to another pod now volumes differ in their storage implementation and their initial contents you can choose the volume source that best fits your use case and i will be going over some common volume sources that are used and that you will see in many gke implementations the first volume that i want to bring up is empty dir now an empty dir volume provides an empty directory that containers in the pod can read and write from when the pod is removed from a node for any reason the data in the empty dir is deleted forever an empty dir volume is stored on whatever medium is backing the node which might be a disk ssd or network storage empty dir volumes are useful for scratch space and sharing data between multiple containers in a pod the next type of volume that i wanted to go over is config map and config map is a resource that provides a way to inject configuration data into pods the data stored in a config map object can be referenced in a volume of type config map and then consumed through files running in a pod the next volume type is secret and a secret volume is used to make sensitive data such as passwords oauth tokens and ssh keys available to applications the data stored in a secret object can be referenced in a volume of type secret and then consumed through files running in a pod the next volume type is downward api and this volume makes downward api data available to applications so this data includes information about the pod and container in which an application is running an example of this would be to expose information about the pod’s namespace and ip address to applications and the last volume type that i wanted to touch on is persistent volume claim now a persistent volume claim volume can be used to provision durable storage so that it can be used by applications a pod uses a persistent volume claim to mount a volume that is backed by this durable storage
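to tie those volume sources together, here is a minimal sketch of a persistent volume claim and a pod that mounts it alongside an empty dir and a config map volume; all of the names, sizes and mount paths are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # assumed name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # with the gke default storage class this dynamically provisions a persistent disk
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo           # assumed name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    - name: app-config
      mountPath: /etc/app
    - name: data
      mountPath: /var/data
  volumes:
  - name: scratch
    emptyDir: {}              # ephemeral scratch space deleted with the pod
  - name: app-config
    configMap:
      name: app-settings      # assumes a config map named app-settings exists
  - name: data
    persistentVolumeClaim:
      claimName: data-claim   # durable storage that outlives the pod
```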
and so now that i’ve covered volumes i wanted to go into a bit of detail about persistent volumes persistent volume resources are used to manage durable storage in a cluster in gke a persistent volume is typically backed by a persistent disk or filestore which can be used as an nfs solution unlike volumes the persistent volume life cycle is managed by kubernetes and can be dynamically provisioned without the need to manually create and delete the backing storage persistent volume resources are cluster resources that exist independently of pods and continue to persist as the cluster changes and as pods are deleted and recreated moving on to persistent volume claims this is a request for and claim to a persistent volume resource persistent volume claim objects request a specific size access mode and storage class for the persistent volume if an existing persistent volume can satisfy the request or can be provisioned the persistent volume claim is bound to that persistent volume and just as a note pods use claims as volumes the cluster inspects the claim to find the bound volume and mounts that volume for the pod now i wanted to take a moment to go over storage classes and how they apply to the overall storage in gke now these volume implementations such as gce persistent disk are configured through storage class resources gke creates a default storage class for you which uses the standard persistent disk type with an ext4 file system as shown here the default storage class is used when a persistent volume claim doesn’t specify a storage class name and can also be replaced with one of your choosing you can even create your own storage class resources to describe different classes of storage which is helpful when using windows node pools now as i stated before persistent volume claims can automatically provision persistent disks for you when you create this persistent volume claim object kubernetes dynamically creates a corresponding persistent volume object due to the gke default storage class this persistent volume is backed by a new empty compute engine persistent disk you use this disk in a pod by using the claim as a volume when you delete a claim the corresponding persistent volume object and the provisioned compute engine persistent disk are also deleted now to prevent deletion you can set the reclaim policy of the persistent volume resource or its storage class resource to retain now deployments as shown here in this diagram are designed for stateless applications so all replicas of a deployment share the same persistent volume claim which is why stateful sets are the recommended method of deploying stateful applications that require a
unique volume per replica by using stateful sets with persistent volume claim templates you can have applications that can scale up automatically with unique persistent volume claims associated to each replica pod now lastly i wanted to touch on some topics that will determine the storage access that is available for any gke cluster in your environment now i first wanted to start off with access modes and there are three supported modes for your persistent disks that allow read write access and are listed here read write once is where the volume can be mounted as read write by a single node read only many is where the volume can be mounted as a read only by many nodes and lastly read write many is where the volume can be mounted as read write by many nodes and just as a note read write once is the most common use case for persistent disks and works as the default access mode for most applications next i wanted to touch on the type of persistent disks that are available and the benefits and caveats of access for each now going through the persistent disks lesson of this course you probably know by now about the available persistent disks when it comes to zonal versus regional availability and so this may be a refresher for some now going into regional persistent disks these are multi-zonal resources that replicate data between two zones in the same region and can be used similarly to zonal persistent disks in the event of a zonal outage kubernetes can fail over workloads using the volume to the other zone regional persistent disks are great for highly available solutions for stateful workloads on gke now zonal persistent disks are zonal resources and so unless a zone is specified gke assigns the disk to a single zone and chooses the zone at random once a persistent disk is provisioned any pods referencing the disk are scheduled to the same zone as the disk and just as a note using anti-affinity on zones allows stateful set pods to be spread across zones along with the corresponding disks and the last point that i wanted to cover when it comes to persistent volume access is the speed of access now as stated in an earlier lesson the size of persistent disks determine the iops and throughput of the disk gke typically uses persistent disks as boot disks and to back kubernetes persistent volumes so whenever possible use larger and fewer disks to achieve higher iops and throughput and so that pretty much covers everything that i wanted to go over in this lesson on gke storage options so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in these next few demos i’m going to be doing a complete walkthrough and putting all the theory we learned into practice through building and interacting with gke clusters and you’ll be building and deploying your own containerized application on this cluster called box of bowties so in this demo we’re going to be setting up our own gke cluster in the console along with going through all the options that are available when deploying it we’re also going to use the command line to configure the cubectl command line tool so that we can interact with the cluster so with that being said let’s dive in and so here in the console i am logged in as tonybowties gmail.com under the project of bow tie inc and so before launching the cluster i need to make sure that my default vpc has been created so i’m going to go over to the navigation menu and i’m going to scroll down to vpc network and as expected the default network is here so 
i can go ahead and create my cluster and so in order to get to my kubernetes engine console i’m going to go up to the navigation menu and i’m going to scroll down under compute and you will find here kubernetes engine and you’ll see a few different options to choose from and over here on the left hand menu i will be going through these options in the upcoming demos but for now i want to concentrate on creating our cluster now gk makes things pretty easy as i have the option to create a cluster to deploy a container or even taking the quick start and so we’re going to go ahead and click on create our cluster and so here we are prompted with our cluster basics now if i really wanted to i can simply fill out all the fields that you see here and click on create and it will use all the defaults to build my cluster but we’re going to customize it a little bit so we’re going to go ahead and go through all these options so first under name we’re going to name this cluster bowtie dash cluster and so under location type we want to keep things as zonal and if i check off the specify default node locations i’ll be able to make this a multi-zonal cluster as i have the option of selecting from multiple zones where i can situate my nodes and so i can select off a bunch of different zones if i choose but we want to keep it as a single zonal cluster and so i’m going to check these all off and under zone i’m going to click on the drop down menu and i’m going to select us east 1b and just as a note for each zone that you select this is where the control plane will live so if i was to create a multi-zonal cluster as you can see the master zone is the zone where the control plane will be created and is selected as us east 1b as that is the zone that i had selected and so if i change this to let’s say us east 1d you can see that the control plane will change with it so i’m going to change it back to us east 1b and you also have the option of creating a regional cluster and the location selection will change from zone to region and here you will have to specify at least one zone to select but please also remember that the same number of nodes will be deployed to each selected zone so if i have three nodes in this cluster and i decide to select three zones then i will have nine nodes in this cluster and so doing something like this could get quite pricey when you’re looking to be cost conscious okay so moving on i’m going to uncheck specify default node locations i’m going to change the location type back to zonal and make sure that my zone is at us east 1b moving down to the master version this is where we would select either a static version or opt-in to a release channel for the version of kubernetes that you want for your cluster and so with the static version i can choose from a bunch of different versions here all the way back from 1.14.10 all the way to the latest version and so with the release channel i have the release channel selection here and i can choose from the rapid channel the regular channel or the stable channel and so i’m going to keep things as the default with the regular channel as well i’m going to keep the default version as the version of my choice now i could go ahead and simply click on create here but as this demo is a walkthrough i’m going to go ahead and go through all the available options so i’m going to start by going over to the left hand menu and clicking on default pool under no pools now here i have one node pool already with three nodes and this is the default node pool 
that comes with any cluster but if i was doing something specific i could add another node pool and configure it from here but because i don’t have a need for two node pools i’m gonna go ahead and remove nodepool1 so i’m going
to go up here to remove nodepool and as you can see gke makes it really easy for me to add or remove node pools so i’m going to go back to the default pool and i’m going to keep the name as is i’m gonna keep my number of nodes as three and if i wanted to change the number of nodes i can simply select this i can choose six or however many nodes you need for your workload and so because we’re not deploying a large workload i’m gonna keep this number at 3 and moving right along we do want to check off enable auto scaling and so this way we don’t have to worry about scaling up or scaling down and here i’m going to put the minimum number of nodes as one and i’m going to keep my maximum number of nodes at 3. and so here i’m given the option to select the zone location for my nodes but again for each zone that i select it will run the same amount of nodes so basically i have another option in order to choose from having a zonal or multi-zonal cluster and because we’re creating our cluster in a single zone i’m going to uncheck this and under automation as you can see enable auto upgrade and enable auto repair are both checked off and this is due to the fact that the auto upgrade feature is always enabled for the release channel that i selected but as i pointed out in a previous lesson that this is google’s best practice to have auto upgrade and auto repair enabled and so moving down to the bottom are some fields to change the surge upgrade behavior and so just as a refresher surge upgrades allow you to control the number of nodes gke can upgrade at a time and control how disruptive those upgrades are to your workloads so max surge being the number of additional nodes that can be added to the node pool during an upgrade and max unavailable being the number of nodes that can be simultaneously unavailable during that upgrade and because we’re not worried about disruptions we’ll just leave it set as the default and so moving on we’re going to move back over to the left hand menu and under no pools we’re going to click on nodes and here is where i can choose the type of instance that i want to be using for my nodes and so i’m going to keep the image type as container optimize os and this is the default image type but i also have the option of choosing from others like ubuntu or windows and so i’m going to keep it as the default and under machine configuration i’m going to keep it under general purpose with series e2 but i do want to change the machine type to e2 micro just to be cost conscious and under boot disk size i want to keep it as 10 gigabytes as we don’t really need 100 gigabytes for what we’re doing here and you also have the option of choosing from a different boot disk type you can change it from standard persistent disk to ssd but i’m going to keep things as standard as well i also have the option here to use customer manage keys for encryption on my boot disk as well as selecting from preemptable nodes for some cost savings and so i’m going to now move down to networking and here if i wanted to get really granular i can add a maximum pods per node as well as some network tags but our demo doesn’t require this so i’m going to leave it as is and i’m going to go back over to the left hand menu and click on security and under node security you have the option of changing your service account along with the access scopes and so for this demo we can keep things as the default service account and the access scopes can be left as is i’m going to go back over to the left hand menu and click on 
metadata and here i can add kubernetes labels as well as the instance metadata and so i know i didn’t get into node taints but just to fill you in on node taints when you submit a workload to run in a cluster the scheduler determines where to place the pods associated with the workload and so the scheduler will place a pod on any node that satisfies the resource requirements for that workload so node taints will give you some more control over which workloads can run on a particular pool of nodes and so they let you mark a node so that the scheduler avoids or prevents using it for certain pods so for instance if you had a node pool that is dedicated to gpus you’d want to keep that node pool specifically for the workload that requires it and although it is in beta this is a great feature to have
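as a quick hedged example, a dedicated node pool like that could be tainted at creation time so that only pods carrying a matching toleration get scheduled onto it; the pool, cluster and taint values below are all assumptions:

```sh
# create a node pool whose nodes carry a taint that repels pods without a matching toleration
gcloud container node-pools create gpu-pool \
  --cluster bowtie-cluster \
  --zone us-east1-b \
  --node-taints dedicated=gpu:NoSchedule

# the same kind of taint can also be applied to an existing node with kubectl
kubectl taint nodes NODE_NAME dedicated=gpu:NoSchedule
```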
and so that pretty much covers node pools as we see it here and so this is the end of part one of this demo it was getting a bit long so i decided to break it up this would be a great opportunity for you to get up and have a stretch get yourself a coffee or a tea and whenever you’re ready part two will be starting immediately from the end of part one so you can now mark this as complete and i’ll see you in the next one [Music] this is part two of creating a gke cluster part 2 will be starting immediately from the end of part 1. so with that being said let’s dive in and so i’m going to go back over to the left hand menu and under cluster i’m going to click on automation and here i have the option of enabling a maintenance window for aligning times when auto upgrades are allowed i have the option of adding the window here and i can do it at specified times during the week or i can create a custom maintenance window and so we don’t need a maintenance window right now so i’m going to uncheck this and as well you have the option of doing maintenance exclusions for when you don’t want maintenance to occur and gke gives you the option of doing multiple maintenance exclusions for whenever you need them and because we don’t need any maintenance exclusions i’m going to delete these and here you have the option to enable vertical pod auto scaling and this is where gke will automatically schedule pods onto other nodes that satisfy the resources required for that workload as well here i can enable my node auto provisioning and enabling this option allows gke to automatically manage a set of node pools that can be created and deleted as needed and i have a bunch of fields that i can choose from the resource type the minimum and maximum for cpu and memory the service account as well as adding even more resources like gpus but our workload doesn’t require anything this fancy so i’m going to delete this and i’m going to uncheck enable auto provisioning and lastly we have the auto scaling profile and i have the option of choosing the balanced profile which is the default as well as optimize utilization which is still in beta and so i’m going to keep things as the default and i’m going to move back on over to the left hand menu over to networking and so here i can get really granular with my cluster when it comes to networking i have the option of choosing from a public or a private cluster as well i can choose from a different network and since we only have the default that’s what shows up but if you had different networks here you can choose from them as well as the subnets i can also choose from other networking options like pod address range maximum pods per node and there’s a bunch of other options which i won’t get into any detail with but i encourage you if you’re very curious to go through the docs and to check out these different options now the one thing that i wanted to note here is the enable http load balancing option and this is an add-on that is required in order to use google cloud load balancer and so as we discussed previously in the services lesson when you enable service type load balancer a load balancer will be created for you by the cloud provider and so google requires you to check this off so that a controller can be installed in the cluster upon creation and will allow a load balancer to be created when the service is created and so i’m going to leave this checked as we will be deploying a load balancer a little bit later and so moving back over to the left hand menu i’m going to now click on security and there are many options here to choose from that will allow you to really lock down your cluster and again this would all depend on your specific type of workload now i’m not going to go through all these options here but i did want to highlight it for those who are looking to be more security focused with your cluster and so moving down the list in the menu i’m going to click on metadata and so here i can enter a description for my cluster as well as adding labels and so the last option on the cluster menu is features and here i have the option of running cloud run for anthos which will allow you to deploy serverless workloads to anthos clusters and runs on top of gke and here you can enable monitoring for gke and have it be natively monitored by google cloud monitoring and if i was running a third-party product to monitor my cluster i can simply uncheck this and use my third-party monitoring and there’s a whole bunch of other features that i won’t dive into right now but if you’re curious you can always hover over the question mark and get some more information about what it does and so now i’ve pretty much covered all the configuration that’s needed for this cluster and so now i’m going to finally head down to the bottom and click on create and so it may take a few minutes to create this cluster so i’m going to go ahead and pause this video here and i’ll be back faster than you can say cat in the hat okay and the cluster has been created as you can see it’s in the location of us east 1b with three nodes six vcpus and three gigabytes of memory and i can drill down and see exactly the details of the cluster as well if i wanted to edit any of these options i can simply go up to the top click on edit and make the necessary changes and so now you’re probably wondering what will i need to do in order to create this cluster through the command line well it’s a bit simpler than what you think and i’m going to show you right now i’m going to simply go over to the right hand menu and activate cloud shell and bring this up for better viewing and i’m going to paste in my command gcloud container clusters create bowtie-cluster with the flag --num-nodes and the number of nodes that i choose which is three and so like i said before if i wanted to simply create a simple cluster i can do so like this but if i wanted to create the cluster exactly how i built my last cluster then i can use this command which has all the necessary flags that i need to make it customized to my liking a not so very exciting demonstration but at the same time shows you how easy yet powerful gke really is and so i’m not going to launch this cluster as i already have one
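for reference, the simple form of that command plus a rough approximation of the customized version are shown below; the zone, machine type, disk size and autoscaling flags are assumptions based on the options selected earlier in the walkthrough, not the exact command from the video:

```sh
# the simple form with only the number of nodes specified
gcloud container clusters create bowtie-cluster --num-nodes 3

# a closer match to the console walkthrough (flag values are assumptions)
gcloud container clusters create bowtie-cluster \
  --zone us-east1-b \
  --num-nodes 3 \
  --machine-type e2-micro \
  --disk-size 10 \
  --enable-autoscaling --min-nodes 1 --max-nodes 3
```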
and so now i wanted to show you how to interact with your new gke cluster so i’m going to simply clear my screen and so now in order for me to interact with my cluster i’m going to be using the kubectl command line tool and this is the tool that is used to interact with any kubernetes cluster no matter the platform now i could use the gcloud container commands but they won’t allow me to get as granular as the kubectl tool and so a caveat of creating your cluster through the console is that you need to run a command in order to retrieve the cluster’s credentials and configure the kubectl command line tool and i’m going to go ahead and paste that in now and the command is gcloud container clusters get-credentials and the name of my cluster which is bowtie-cluster along with the zone flag --zone followed by the zone itself which is us east 1b i’m going to go ahead and hit enter and as you can see kubectl has now been configured and so now i’m able to interact with my cluster so just to verify i’m going to run the command kubectl get pods and naturally as no workloads are currently deployed in the cluster there are no pods so i’m going to run the command kubectl get nodes and as you can see the kubectl command line tool is configured correctly and so now this cluster is ready to have workloads deployed to it and is also configured with the kubectl command line tool so that you’re able to manage the cluster and troubleshoot if necessary now i know that there has been a ton of features that i covered but i wanted to give you the full walkthrough so that you are able to tie in some of the theory from the last few lessons and get a feel for the gke cluster as we will be getting more involved with it over the next couple of demos and so that’s pretty much all i wanted to cover when it comes to creating and setting up a gke cluster so you can now mark this as complete and whenever you’re ready join me in the console in the next one where you will be building your box of bowties container to deploy to your new cluster but if you are not planning to go straight into the next demo i do recommend that you delete your cluster to avoid any unnecessary costs and recreate it when you are ready to go into the next demo [Music] welcome back now in the last lesson you built a custom gke cluster and configured the kubectl command line tool to interact with the cluster in this lesson you’re going to be building a docker image for a box of bow ties using cloud build which will then be pushed over to google cloud container registry so that you can deploy it to your current gke cluster and so as you can see there’s a lot to do here so with that being said let’s dive in so now the first thing that you want to do is to clone your repo within cloud shell so you can run the necessary commands to build your image so i’m going to go up here to the top right and i’m going to open up cloud shell i’m going to make sure that i’m in my home directory so i’m going to run the command cd space tilde hit enter and i’m in my home directory if i run the command ls i can see that i only have cloud shell.txt and so now i’m going to clone my github repository and i’ll have a link in the instructions in the github repo as well as having it in the lesson text below and so the command would be git clone along with the https address of the github repo and i’m going to hit enter and it’s finished cloning my repo i’m going to quickly clear my screen and i’m going to run the command ls and i can see my repo here and now i’m going to drill down into the directory by
running cd google cloud associate cloud engineer if i run an ls i can see all my clone files and folders and so now the files that we need are going to be found in the box of bowties folder under kubernetes engine and containers so i’m going to change directories to that location and run ls and under box of bow ties is a folder called container which will have all the necessary files that you need in order to build your image we have the jpeg for box of bow ties we have the docker file and we have our index.html and so these are the three files that we need in order to build the image and so as i said before we are going to be using a tool called cloud build which we have not discussed yet cloudbuild is a serverless ci cd platform that allows me to package source code into containers and you can get really fancy with cloud build but we’re not going to be setting up any ci cd pipelines we’re merely using cloud build to build our image and to push it out to container registry as well container registry is google cloud’s private docker repository where you can manage your docker images and integrates with cloud build gke app engine cloud functions and other repos like github or bitbucket and it allows for an amazing build experience with absolutely no heavy lifting and because you’re able to build images without having to leave google cloud i figured that this would be a great time to highlight these services so getting back to it we’ve cloned the repo and so we have our files here in cloud shell and so what you want to do now is you want to make sure the cloud build api has been enabled as this is a service that we haven’t used before now we can go through the console and enable the api there but i’m going to run it here from cloud shell and i’m going to paste in the command gcloud services enable cloudbuild.googleapis.com i’m going to hit enter and you should get a prompt asking you to authorize the api call you definitely want to authorize should take a few seconds all right and the api has been enabled for cloud build so now i’m going to quickly clear my screen and so because i want to show you exactly what cloud build is doing i want to head on over there through the console and so i’m going to go over to the navigation menu and i’m going to scroll down to tools until you come to cloud build and as expected there is nothing here in the build history as well not a lot here to interact with and so now you’re going to run the command that builds the image and so you’re going to paste that command into the cloud shell which is gcloud builds submit dash dash tag gcr.io which is the google cloud container registry our variable for our google cloud project along with the image name of box bow ties version 1.0.0 and please don’t forget the trailing dot at the end i’m going to go ahead and hit enter cloud build will now compress the files and move them to a cloud storage bucket and then cloud build takes those files from the bucket and uses the docker file to execute the docker build process and so i’m going to pause the video here till the build completes and i’ll be back in a flash okay and the image is complete and is now showing up in the build history in the cloud build dashboard and so if i want to drill down into the actual build right beside the green check mark you will see the hot link so you can just simply click on that and here you will see a build summary with the build log the execution details along with the build artifacts and as well the compressed files are stored in cloud 
storage and it has a hot link right here if i wanted to download the build log i can do so here and i conveniently have a hot link to the image of box of bow ties and this will bring me to my container registry so you can go ahead and click on the link it should open up another tab and bring you right to the page of the image that covers a lot of its details now the great thing i love about container registry is again it’s so tightly coupled with a lot of the other resources within google cloud that i am able to simply deploy right from here and i can deploy to cloud run to gke as well as compute engine now i could simply deploy this image right from here but i wanted to do it from gke so i’m going to go back over to gke in the other tab i’m going to go to the navigation menu go down to kubernetes engine and i’m going to go up to the top menu and click on deploy it’s going to ask for the image you want to deploy and you want to click on select to select a new container image and you should have a menu pop up from the right hand side of your screen and under container registry you should see box of bow ties you can expand the node here and simply click on the image and then hit select and so now the container image has been populated into my image path and you want to scroll down and if i wanted to i could add another container and even add some environment variables and so we’re not looking to do that right now so you can simply click on continue and you’re going to be prompted with some fields to fill out for your configuration on your deployment and so the application name is going to be called box of bow ties i’m going to keep it in the default namespace as well i’m going to keep the key value pair as app box of bow ties for my labels and because this configuration will create a deployment file for me you can always have a look at the manifest by clicking on the view yaml button before it’s deployed and this is always good practice before you deploy any workload so as you can see here at the top i have the kind as deployment the name as well as the namespace my labels replicas of three as well as my selector and my spec down here at the bottom as well this manifest also holds another kind of horizontal pod auto scaler and is coupled with the deployment in this manifest due to the reference of the deployment itself and so it’s always common practice to try and group the manifest together whenever you can and so this is a really cool feature to take advantage of on gke so i’m going to close this now and i’m actually going to close cloud shell as i don’t need it right now as well you can see here that it’s going to deploy to my kubernetes cluster of bow tie cluster in us east 1b and if i wanted to i can deploy it to a new cluster and if i had any other clusters in my environment they would show up here and i’d be able to select from them as well but bow tie cluster is the only one that i have and so now that you’ve completed your configuration for your deployment you can simply click on deploy this is just going to take a couple minutes so i’m just going to pause the video here and i’ll be back as soon as the deployment is done okay the workload has been deployed and i got some default messages that popped up i can set an automated pipeline for this workload but we’re not going to do that for this demo but feel free to try it on your own later if you’d like and we will want to expose our service as we want to see if it’s up and running and we’re going to take care of that in just a bit and 
so if i scroll through some of the details here i can see that i have some metrics here for cpu memory and disk the cluster namespace labels and all the pods that it’s running on basically a live visual representation of my deployment if i scroll back up to the top i can dive into some details events and even my manifest i can also copy my manifest and download it if i’d like so as you can see a lot of different options and so now i want to verify my deployment and so i’m going to use the cube ctl command line tool to run some commands to verify the information so i’m going to open back up my cloud shell and make this a little bit bigger for better viewing and i’m going to run the command cubectl get all and as you can see here i have a list of all the pods that are running the name of the service the deployment the replica set everything about my cluster and my deployment and you should be seeing the same when running this command and so next you want to pull up the details on your deployments in the cluster and so the command for that is cube ctl get deployments and it came out kind of crammed at the bottom so i’m going to simply clear my screen and run that command again and as you can see the box of bowties deployment is displayed how many replicas that are available how many of those replicas achieve their desired state and along with how long the application has been running and so now i want to dive into my pods and in order to do that i’m going to run the command cube ctl get pods and here i can see all my pods now if i wanted to look at a list of events for a specific pod the command for that would be cubectl describe pod and then the name of one of the pods so i’m going to pick this first one copy that i’m going to paste it and i’m going to hit enter and here i can see all the events that have occurred for this pod as well i also have access to some other information with regards to volumes conditions and even the container and image ids and this is a great command to use for when you’re troubleshooting your pods and you’re trying to get to the bottom of a problem and so now the final step that you want to do is you want to be able to expose your application so you can check to see if it’s running properly and so we’re going to go ahead and do that through the console so i’m going to close down cloud shell and i’m going to go to overview and scroll down to the bottom click on the button that says expose and if i wanted to i can do it from up here in the top right hand corner where it says expose deployment so i’m going to click on expose and this probably looks very familiar to you as this is a graphical representation of the services manifest and so the port mapping here will cover the ports configuration of the services manifest starting here with port target port as well as protocol for target port i’m going to open up port 80. 
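and just as a side note, the same expose flow can also be driven from the command line; a rough equivalent, assuming the deployment and service are named box-of-bowties and box-of-bowties-service, would be:

```sh
# expose the deployment through a service of type LoadBalancer on port 80
kubectl expose deployment box-of-bowties \
  --name box-of-bowties-service \
  --type LoadBalancer \
  --port 80 --target-port 80
```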
here under service type you have the option of selecting cluster ip node port or load balancer and the service type you want to use is going to be low balancer and we can keep the service name as box of bowties service and again you can view the manifest file for this service and you can copy or download it if you need to but we don’t need this right now so i’m going to close it in a pretty simple process so all i need to do is click on expose and within a minute or two you should have your service up and running with your shiny new low balancer okay and the service has been created and as you can see we’re under the services and ingress from the left hand menu and if i go back to the main page of services in ingress you can see that box a bow tie service is the only one that’s here i also have the option of creating a service type ingress but we don’t want to do that right now so i’m going to go back to services and here you will see your endpoint and this is the hot link that should bring you to your application so you can click on it now you’ll get a redirect notice as it is only http and not https so it’s safe to click on it so i’m going to click on it now and success and here is your box of bow ties what were you expecting and so i wanted to congratulate you on deploying your first application box of bow ties on your gke cluster and so just as a recap you’ve cloned your repo into your cloud shell environment you then built a container image using cloud build and pushed the image to container registry you then created a deployment using this image and verified the deployment using the cube ctl command line tool you then launched a service of type low balancer to expose your application and verified that your application was working so fantastic job on your part and that’s pretty much all i wanted to cover in this part of the demo so you can now mark this as complete and whenever you’re ready join me in the console for the next part of the demo where you will manage your workload on the gke cluster so please be aware of the charges incurred on your currently deployed cluster if you plan to do the next demo at a later date again you can mark this as complete and i’ll see you in the next welcome back in the last couple of demo lessons you built a custom gke cluster and deployed the box of bowties application in this lesson you will be interacting with this workload on gke by scaling the application editing your application and rebuilding your docker image so you can do a rolling update to the current workload in your cluster now there’s a lot to do here so with that being said let’s dive in so continuing where we left off you currently have your box of bow ties workload deployed on your gke cluster and so the first thing you want to do is scale your deployment and you are looking to scale down your cluster to one pod and then back up again to three and this is just to simulate scaling your workload so whether it be ten pods or one the action is still the same so now we can easily do it through the console by drilling down into the box of bowties workload going up to the top menu and clicking on actions and clicking on scale and here i can indicate how many replicas i’d like and scale it accordingly and so i wanted to do this using the command line so i’m going to cancel out of here and then i’m going to open up cloud shell instead okay and now that you have cloud shell open up you want to run the command cube ctl get pods to show the currently running available pods for the box of bowties 
workload and you may get a pop-up asking you to authorize the api call using your credentials and you definitely want to authorize and here you will get a list of all the pods that are running your box of bow ties workload and so now since you want to scale your replicas down to one you can run this command cube ctl scale deployment and your workload which is box of bowties dash dash replicas is equal to one you can hit enter and it is now scaled and in order to verify that i’m going to run cube ctl get pods and notice that there is only one pod running with my box of bow ties workload and in order for me to scale my deployment back up to three replicas i can simply run the same command but change the replicas from 1 to 3. hit enter it’s been scaled i’m going to run cube ctl get pods and notice that i am now back up to 3 replicas and so as you can see increasing or decreasing the number of replicas in order to scale your application is pretty simple to do okay so now that you’ve learned how to scale your application you’re gonna learn how to perform a rolling update but in order to do that you need to make changes to your application and so what you’re going to do is edit your application then rebuild your docker image and apply a rolling update and in order to do that we can stay here in cloud shell as you’re going to edit the file in cloud shell editor i’m going to first clear my screen i’m going to change directory into my home directory and now you want to change directories to your container folder where the files are that i need to edit i’m going to run ls and here’s the files that i need and so what you’re going to do now is edit the index.html file and the easiest way to do that is to simply type in edit index.html and hit enter and this will open up your editor so you can edit your index.html file and if you remember when we launched our application it looked exactly like this and so instead of what were you expecting we’re going to actually change that text to something a little different and so i’m going to go back to the editor in my other tab and where it says what were you expecting i’m going to actually change this to well i could always use something to eat then i’m going to go back up to the menu click on file and click on save and so now in order for me to deploy this i need to rebuild my container and so i’m going to go back to my terminal i’m going to clear the screen and i’m going to run the same command that i did the last time which is gcloud build submit dash dash tag gcr dot io with the variable for your google cloud project followed by the image box of bowties colon 1.0.1 and so this will be a different version of the image also don’t forget that trailing dot at the end and you can hit enter and again this is the process where cloud build compresses the files moves them to a cloud storage bucket and then takes the files from the bucket and uses the docker file to execute the docker build process and this will take a couple minutes so i’m going to pause the video here and i’ll be back before you can say cat in the hat okay and my new image has been created and so i want to head over to cloud build just to make sure that there are no errors so i’m going to close down cloud shell because i don’t need it right now i’m going to head back up to the navigation menu and scroll down to cloud build and under build history you should see your second build and if you drill down into it you will see that the build was successful and heading over to build artifacts you should 
now see your new image as version 1.0.1 and so now i’m going to head over to the registry and verify the image there and it seems like everything looks okay so now i’m gonna head back on over to my gke cluster i’m gonna go to the navigation menu down to kubernetes engine and here i’m gonna click on workloads i’m gonna select box of bowties and up at the top menu you can click on actions and select a rolling update and here you are prompted with a pop-up where you can enter in your minimum seconds ready your maximum search percentage as well as your maximum unavailable percentage and so here under container images i am prompted to enter in the sha-256 hash of this docker image now a docker image’s id is a digest which contains a sha-256 hash of the image’s configuration and if i go back over to the open tab for container registry you can see here the digest details to give you a little bit more context along with the sha 256 hash for the image that i need to deploy and so you can copy this digest by simply clicking on the copy button and then you can head back on over to the gke console head over to the container images highlight the hash and paste in the new hash and so when you copy it in make sure it’s still in the same format of gcr dot io forward slash your project name forward slash box of bow ties the at symbol followed by the hash and so once you’ve done that you can click on the update button and this will schedule an update for your application and as you can see here at the top it says that pods are pending as well if i go down to active revisions you can see here that there is a summary and the status that pods are pending and so just as a note rolling updates allow the deployments update to take place with zero downtime by incrementally updating pods instances with new ones so the pods will be scheduled on nodes with available resources and if the nodes do not have enough resources the pods will stay in a pending state but i don’t think we’re going to have any problems with these nodes as this application is very light in resources and if i open up cloud shell and run a cube ctl get pods command you will see that new pods have started and you can tell this by the age of the pod as well if you ran the command keep ctl describe pod along with the pod name you could also see the event logs when the pod was created and if i close cloud shell i can see up here at the top of my deployment details it shows that my replicas have one updated four ready three available and one unavailable and if i click on refresh i can see now that my replicas are all updated and available and so now in order to check your new update you can simply go down to exposing services and click on the endpoints link you’ll get that redirect notice you can simply click on the link and because the old site may be cached in your browser you may have to refresh your web page and success and you have now completed a rolling update in gke so i wanted to congratulate you on making it to the end of this multi-part demo and hope that it’s been extremely useful in excelling your knowledge in gke and so just as a recap you scaled your application to accommodate both less and more replicas you edited your application in the cloud shell editor and rebuilt your container image using cloud build you then applied the new digest to your rolling update and applied that rolling update to your deployment while verifying it all in the end fantastic job on your part as this was a pretty complex and long multi-part demo and you can 
expect things like what you’ve experienced in this demo to pop up in your role of being a cloud engineer when dealing with gke and so that’s pretty much all i wanted to cover with this multi-part demo working with gke so before you go i wanted to take a few moments to delete all the resources you’ve created one by one so i’m going to go up to the top i’m going to close all my tabs i’m going to head on over to clusters and so i don’t want to delete my cluster just yet but the first thing that i want to do is delete my container images so i’m going to head up to the top and open up cloud shell and i’m going to use the command gcloud container images delete gcr dot io forward slash your google cloud project variable forward slash along with your first image of box of bow ties colon 1.0.0 hit enter it’s going to prompt you if you want to continue you want to hit y for yes and it has now deleted the image as well you want to delete your latest image which is 1.0.1 so i’m going to change the zero to one hit enter it’s going to ask if you want to continue yes and so the container images have now been deleted and so now along with the images you want to delete the artifacts as well and those are stored in cloud storage so i’m going to close down cloud shell i’m going to head on up to the navigation menu and i’m going to head down to storage and you want to select your bucket that has your project name underscore cloud build select the source folder and click on delete and you’re going to get a prompt asking you to delete the selected folder but in order to do this you need to type in the name of the folder so i’m going to type it in now you can click on confirm and so now the folder has been deleted along with the artifacts and so now that we’ve taken care of the images along with the artifacts we need to clean up our gke cluster so i’m going to head back on up to the navigation menu and i’m going to head on over to kubernetes engine and the first thing that i want to delete is the low balancer so i’m going to head on up to services and ingress and you can select box of bow tie service and go up to the top and click on delete you’re going to get a confirmation and you want to click on delete and it’s going to take a couple minutes you do quick refresh and the service has finally been deleted i now want to delete my workload so i’m going to go over to the left hand menu click on workloads select the workload box of bowties and go up to the top and click on delete and you want to delete all resources including the horizontal pod auto scaler so you can simply click on delete and it may take a few minutes to delete gonna go up to the top and hit refresh and my workload has been deleted and so now all that’s left to delete is the gke cluster itself so i’m going to go back to clusters so you’re going to select the cluster and go up to the top and click on delete and you’re going to get a prompt asking you if you want to delete these storage pods and these are default storage pods that are installed with the cluster as well you can delete the cluster while the workload is still in play but i have this habit of being thorough so i wanted to delete the workload before deleting the cluster and so you want to go ahead and click on delete and so that’s pretty much all i have for this demo and this section on google kubernetes engine and again congrats on the great job you can now mark this as complete and i’ll see you in the next one [Music] welcome back and in this lesson i will be covering the features of 
and so that’s pretty much all i have for this demo and this section on google kubernetes engine and again congrats on the great job you can now mark this as complete and i’ll see you in the next one [Music] welcome back and in this lesson i will be covering the features of cloud vpn an essential service for any engineer to know about when looking to connect another network to google cloud whether it be your on-premises network another cloud provider or even when connecting to vpcs this service is a must know for any engineer and for the exam so with that being said let’s dive in now cloud vpn securely connects your peer network to your vpc network through an ipsec vpn connection when i talk about a peer network this is referring to an on-premises vpn device or vpn service a vpn gateway hosted by another cloud provider such as aws or azure or another google cloud vpn gateway and so this is an ipsec or encrypted tunnel from your peer network to your vpc network that traverses the public internet and so for those who don’t know ipsec is short for internet protocol security and this is a set of protocols using algorithms allowing the transport of secure data over an ip network ipsec operates at the network layer so layer 3 of the osi model which allows it to be independent of any applications although it does come with some additional overhead so please be aware and so when creating your cloud vpn traffic traveling between the two networks is encrypted by one vpn gateway and then decrypted by the other vpn gateway now moving on to some details about cloud vpn this is a regional service and so please take that into consideration when connecting your on-premises location to google cloud for the least amount of latency it also means that if that region were to go down you would lose your connection until the region is back up and running now cloud vpn is also a site-to-site vpn only and therefore it does not support site-to-client so this means that if you have a laptop or a computer at home you cannot use this option with a vpn client to connect to google cloud cloud vpn can also be used in conjunction with private google access for your on-premises hosts so if you’re using private google access within gcp you can simply connect to your data center with vpn and have access as if you were already in gcp so if you’re looking to extend private google access to your on-premises data center cloud vpn would be the perfect choice and so when it comes to speeds each cloud vpn tunnel can support up to three gigabits per second total for ingress and egress as well routing options that are available are both static and dynamic but are only available as dynamic for ha vpn and lastly cloud vpn supports ike version 1 and ike version 2 using a shared secret and for those of you who are unaware ike stands for internet key exchange and this helps establish a secure authenticated communication channel by using a key exchange algorithm to generate a shared secret key to encrypt communications so know that when you choose cloud vpn your connection is both private and secure so now there are two types of vpn options that are available in google cloud one being the classic vpn and the other being ha vpn and i’m going to take a moment to go through the differences now with classic vpn this provides a service level agreement of 99.9 percent also known as an sla of three nines while ha vpn provides a four nines sla when configured with two interfaces and two external ips now when it comes to routing classic vpn supports both static and dynamic routing whereas ha vpn supports dynamic routing only and this must be done through bgp using cloud router classic vpn gateways have a single interface and a single external ip address and support tunnels using static routing as well as dynamic
routing and the static routing can be either route based or policy based whereas with ha vpn it can be configured for two interfaces and two external ips for true ha capabilities and as mentioned earlier when it comes to routing for ha vpn dynamic routing is the only available option now the one thing about classic vpn is that google cloud is deprecating certain functionality on october 31st of 2021 and is recommending all their customers move to ha vpn and so know that this has not been reflected in the exam and not sure if and when it will be but know that when you are creating a cloud vpn connection in your current environment ha vpn is the recommended option and so now i wanted to dive into some architecture of how cloud vpn is set up for these two options starting with classic vpn now as i said before classic vpn is a cloud vpn solution that lets you connect your peer network to your vpc network through an ipsec vpn connection in a single region now unlike ha vpn classic vpn offers no redundancy out of the box you would have to create another vpn connection and if the connection were to go down you would have to manually switch over the connection from one to the other now as you can see here when you create a vpn gateway google cloud automatically chooses only one external ip address for its interface and the diagram shown here shows that of a classic vpn network connected from the bowtie-network vpc in bowtie project to an on-premises network configured using a static route to connect now moving on to ha vpn again this is a highly available cloud vpn solution that lets you connect your peer network to your vpc network using an ipsec vpn connection in a single region exactly like classic vpn where ha vpn differs is that it provides a four nines sla and as you can see here it supports double the connections so when you create an ha vpn gateway google cloud automatically chooses two external ip addresses one for each of its fixed number of two interfaces each ip address is automatically chosen from a unique address pool to support high availability each of these ha vpn gateway interfaces supports multiple tunnels and you can also create multiple ha vpn gateways and you can configure an ha vpn gateway with only one active interface and one public ip address however this configuration does not provide a four nines sla now for an ha vpn gateway you configure an external peer vpn gateway resource that represents your physical peer gateway in google cloud you can also create this resource as a standalone resource and use it later in this diagram the two interfaces of an ha vpn gateway in the bowtie-network vpc living in bowtie project are connected to two peer vpn gateways in an on-premises network and this connection is using dynamic routing with bgp connecting to a cloud router in google cloud
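to make that architecture a bit more concrete, here is a minimal sketch of what standing up an ha vpn gateway looks like with gcloud. the resource names, region and asn below are placeholders rather than values from this course:

# create the ha vpn gateway (google allocates its two external ip addresses automatically)
gcloud compute vpn-gateways create bowtie-ha-gw \
  --network=bowtie-network --region=us-east1

# create a cloud router, since ha vpn only supports dynamic routing over bgp
gcloud compute routers create bowtie-vpn-router \
  --network=bowtie-network --region=us-east1 --asn=65001

# describe the gateway to see the two interfaces and their external ips
gcloud compute vpn-gateways describe bowtie-ha-gw --region=us-east1

# tunnels would then be created against a peer vpn gateway resource, roughly along these lines:
# gcloud compute vpn-tunnels create tunnel-0 --vpn-gateway=bowtie-ha-gw --interface=0 \
#   --peer-external-gateway=on-prem-gw --peer-external-gateway-interface=0 \
#   --ike-version=2 --shared-secret=<SECRET> --router=bowtie-vpn-router --region=us-east1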
now when it comes to the times when using cloud vpn makes sense one of the first things you should think about is whether or not you need public internet access so when you’re sharing files or your company needs a specific sas product that’s only available on the internet vpn would be your only option as well when you’re looking to use interconnect and your peering location is not available so you’re not able to connect your data center to the colocation facility of your choice vpn would be the only other option that you have as well if budget constraints come into play when deciding on connecting to your peer network vpn would always be the way to go as cloud interconnect is going to be the more expensive option and lastly if you don’t need a high speed network and low latency is not really a concern for you and you only have regular outgoing traffic coming from google cloud then vpn would suffice for your everyday needs and so the options shown here are also the deciding factors to look for when it comes to questions in the exam that refer to cloud vpn or connecting networks and so that’s pretty much all i have for this short lesson on cloud vpn so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i’m going to go over another connection type that allows for on-premises connectivity to your google cloud vpcs which is cloud interconnect other than vpn this is the other connection type that allows connectivity from your on-premises environment to your google cloud vpc cloud interconnect is the most common connection for most larger organizations and is for those that demand fast low latency connections this lesson will cover the features of cloud interconnect and the different types that are available so with that being said let’s dive in so getting right into it cloud interconnect is a low latency highly available connection between your on-premises data center and google cloud vpc networks also cloud interconnect connections provide internal ip address connectivity which means internal ip addresses are directly accessible from both networks and so on-premises hosts can use internal ip addresses and take advantage of private google access rather than external ip addresses to reach google apis and services traffic between your on-premises network and your vpc network doesn’t traverse the public internet traffic traverses a dedicated connection or goes through a service provider with a dedicated connection your vpc network’s internal ip addresses are directly accessible from your on-premises network now unlike vpn this connection type is not encrypted if you need to encrypt your traffic at the ip layer you can create one or more self-managed vpn gateways in your vpc network and assign a private ip address to each gateway now although this may be a very fast connection it also comes with a very high price tag and is the highest priced connection type cloud interconnect offers two options for extending your on-premises network dedicated interconnect which provides a direct physical connection between your on-premises network and google’s network as well as partner interconnect which provides connectivity between your on-premises and vpc networks through a supported service provider and so i wanted to take a moment to highlight the different options for cloud interconnect starting with dedicated interconnect now dedicated interconnect provides a direct physical connection between your on-premises network and google’s network dedicated interconnect enables you to transfer large amounts of data between your network and google cloud which can be more cost effective than purchasing additional bandwidth over the public internet for dedicated interconnect you provision a dedicated interconnect connection between the google network and your own router in a common location the following example shown here shows a single
dedicated interconnect connection between a vpc network and an on-premises network for this basic setup a dedicated interconnect connection is provisioned between the google network and the on-premises router in a common co-location facility when you create a vlan attachment you associate it with a cloud router this cloud router creates a bgp session for the vlan attachment and its corresponding on-premises peer router these routes are added as custom dynamic routes in your vpc network
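to make the vlan attachment piece a little more concrete, here is a rough sketch of the gcloud side of it once the physical interconnect has been provisioned. the interconnect, router and attachment names and the asn are placeholders, not values from this course:

# create a cloud router in the region where the vlan attachment will live
gcloud compute routers create bowtie-ic-router \
  --network=bowtie-network --region=us-east1 --asn=65002

# create the vlan attachment on the dedicated interconnect and associate it with that router;
# this is what the bgp session and the resulting custom dynamic routes get built on
gcloud compute interconnects attachments dedicated create bowtie-attachment \
  --interconnect=bowtie-interconnect --router=bowtie-ic-router --region=us-east1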
and so for dedicated interconnect connection capacity is delivered over one or more 10 gigabits per second or 100 gigabits per second ethernet connections with the following maximum capacity supported per interconnect connection so with your 10 gigabit per second connections you can get up to eight connections totaling a speed of 80 gigabits per second with the 100 gigabit per second connection you can connect two of them together to have a total speed of 200 gigabits per second and so for dedicated interconnect your network must physically meet google’s network in a supported co-location facility also known as an interconnect connection location this facility is where a vendor the co-location facility provider provisions a circuit between your network and a google edge point of presence also known as a pop the setup shown here is suitable for non-critical applications that can tolerate some downtime but for sensitive production applications at least two interconnect connections in two different edge availability domains are recommended now partner interconnect provides connectivity between your on-premises network and your vpc network through a supported service provider so this is not a direct connection from your on-premises network to google as the service provider provides a conduit between your on-premises network and google’s pop now a partner interconnect connection is useful if a dedicated interconnect co-location facility is physically out of reach or your workloads don’t warrant an entire 10 gigabit per second connection for partner interconnect 50 megabits per second to 50 gigabits per second vlan attachments are available with the maximum supported attachment size of 50 gigabits per second now service providers have existing physical connections to google’s network that they make available for their customers to use so in this example shown here you would provision a partner interconnect connection with a service provider connecting your on-premises network to that service provider after connectivity is established with the service provider a partner interconnect connection is requested from the service provider and the service provider configures your vlan attachment for use once your connection is provisioned you can start passing traffic between your networks by using the service provider’s network now there are many more detailed steps involved to get a connection established along with traffic flowing but i just wanted to give you a high level summary of how a connection would be established with a service provider now as well to build a highly available topology you can use multiple service providers as well you must build redundant connections for each service provider in each metropolitan area and so now there’s a couple more connection types that run through service providers that are not on the exam but i wanted you to be aware of them if ever the situation arises in your role as a cloud engineer so the first one is direct peering and direct peering enables you to establish a direct peering connection between your business network and google’s edge network and exchange high throughput cloud traffic this capability is available at any of more than 100 locations in 33 countries around the world when established direct peering provides a direct path from your on-premises network to google services including google cloud products that can be exposed through one or more public ip addresses traffic from google’s network to your on-premises network also takes that direct path including traffic from vpc networks in your projects now you can also save money and receive direct egress pricing for your projects after they have established direct peering with google direct peering exists outside of google cloud unless you need to access google workspace applications the recommended methods of access to google cloud are dedicated interconnect or partner interconnect establishing a direct peering connection with google is free and there are no costs per port and no per hour charges you just have to meet google’s technical peering requirements and can then be considered for the direct peering service and moving on to the last connection type which is cdn interconnect now i know we haven’t gotten into cdns in the course as the exam does not require you to know it but a cdn standing for content delivery network is what caches content at the network edge to deliver files faster to those requesting it one of the main ways to improve website performance now moving on to cdn interconnect this connection type enables select third-party cdn providers like akamai and cloudflare along with others to establish direct peering links with google’s edge network and optimize your cdn population costs and enables you to direct your traffic from your vpc networks to the provider’s network and so your egress traffic from google cloud through one of these links benefits from the direct connectivity to the cdn provider and is billed automatically with reduced pricing typical use cases for cdn interconnect are if you’re populating your cdn with large data files from google cloud or you have frequent content updates stored in different cdn locations and so getting into the use cases of when to use cloud interconnect a big purpose for it would be to prevent traffic from traversing the public internet it is a dedicated physical connection right to google’s data centers so when you need an extension of your vpc network to your on-premises network interconnect is definitely the way to go now when speed and low latency are of extreme importance interconnect is always the best option and will support up to 200 gigabits per second as well when you have heavy outgoing traffic or egress traffic leaving google cloud cloud interconnect fits the bill perfectly and lastly when it comes to private google access this travels over the backbone of google’s network and so when you are connected with interconnect this is an extension of that backbone and therefore your on-premises hosts will be able to take advantage of private google access and so i hope this has given you some clarity on the differences between the different connection types and how to extend your google cloud network to a peer or on-premises network so that’s pretty much all i had to cover when it comes to cloud interconnect so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back in this lesson i’m going to be covering an overview of app engine now this is not a deep dive lesson for app engine as there is so much to
cover with this service but i will be listing a lot of the features of app engine to give you a good feel for what it can do and what you will need to know for the exam so with that being said let’s dive in now app engine is a fully managed serverless platform for developing and hosting web applications at scale this is google’s platform as a service offering that was designed for developers so that they can develop their application and let app engine do all the heavy lifting by taking care of provisioning the servers and scaling the instances needed based on demand app engine gives you the flexibility of launching your code as is or you can launch it as a container and uses runtime environments of a variety of different programming languages like python java node.js go ruby php or net applications deployed on app engine that experience regular traffic fluctuations or newly deployed applications where you’re simply unsure about the load are auto scaled accordingly and automatically your apps scale up to the number of instances that are running to provide consistent performance or scale down to minimize idle instances and reduces costs app engine also has the capabilities of being able to deal with rapid scaling for sudden extreme spikes of traffic having multiple versions of your application within each service allows you to quickly switch between different versions of that application for rollbacks testing or other temporary events you can route traffic to one or more specific versions of your application by migrating or splitting traffic and you can use traffic splitting to specify a percentage distribution of traffic across two or more of the versions within a service and allows you to do a b testing or blue green deployment between your versions when rolling out new features app engine supports connecting to back-end storage services such as cloud firestore cloud sql and cloud storage along with connecting to on-premises databases and even external databases that are hosted on other public clouds app engine is available in two separate flavors standard and flexible environments and each environment offers their own set of features that i will get into in just a sec now as i mentioned before app engine is available in standard and flexible environments and depending on your application needs either one will support what you need for your workload or you could even use both simultaneously the features shown here will give you a feel for both types of environments and i’m going to be doing a quick run through summarizing the features of each starting with the standard environment now with the standard environment applications run in a secure sandboxed environment allowing app engine standard to distribute requests across multiple servers and scaling servers to meet traffic demands your application runs with its own secure reliable environment that is independent of the hardware operating system or physical location of the server the source code is written in specific versions of the supported programming languages and with app engine standard it is intended to run for free or at a very low cost where you pay only for what you need and when you need it with app engine standard your application can scale to zero instances when there is no traffic app engine standard is designed for sudden and extreme spikes of traffic which require immediate scaling and pricing for standard app engine is based on instance hours and so when it comes to features for app engine flexible the application 
instances run within docker containers that includes a custom runtime or source code written in other programming languages these docker containers are then run on compute engine vms app engine flexible will run any source code that is written in a version of any of the supported programming languages for app engine flexible and unlike the standard environment unfortunately there is no free quota for app engine flexible as well app engine flexible is designed for consistent traffic or for applications that experience regular traffic fluctuations and pricing is based on the vm resources and not on instance hours like app engine standard and so where app engine flexible really shines over app engine standard are how the vms are managed so instances are health checked healed as necessary and co-located with other services within the project the vm’s operating system is updated and applied automatically as well vms are restarted on a weekly basis to make sure any necessary operating system and security updates are applied ssh along with root access are available to the vm instances running your containers now deploying applications to app engine is as simple as using the gcloud app deploy command this command automatically builds a container image from your configuration file by using the cloud build service and then deploys that image to app engine now an app engine application is made up of a single application resource that consists of one or more services each service can be configured to use different runtimes and to operate with different performance settings services and app engine are used to factor your large applications into logical components that can securely share app engine features and communicate with one another these app engine services become loosely coupled behaving like microservices now within each service you deploy versions of that service and each version then runs within one or more instances depending on how much traffic you configured it to handle having multiple versions of your application within each service allows you to quickly switch between different versions of that application for rollbacks testing or other temporary events you can route traffic to one or more specific versions of your application by migrating traffic to one specific version or splitting your traffic between two separate versions and so the versions within your services run on one or more instances by default app engine scales your application to match the load your applications will scale up the number of instances that are running to provide consistent performance or scale down to minimize idle instances and reduce costs now when it comes to managing instances app engine can automatically create and shut down instances as traffic fluctuates or you can specify a number of instances to run regardless of the amount of traffic you can also configure how and when new instances are created by specifying a
scaling type for your application and how you do this is you specify the scaling type in your application’s app.yaml file now there are three different types of scaling choices to choose from the first one being automatic scaling and this scaling type creates instances based on request rate response latencies and other application metrics you can specify thresholds for each of these metrics as well as a minimum number of instances to keep running at all times if you use automatic scaling each instance in your application has its own queue for incoming requests before the queues become long enough to have a visible effect on your app’s latency app engine automatically creates one or more new instances to handle the load the second type is basic scaling and this creates instances when your application receives requests each instance is shut down when the application becomes idle basic scaling is fantastic for intermittent workloads or if you’re looking to drive your application by user activity app engine will try to keep your costs low even though it might result in higher latency as the volume of incoming requests increases and so the last scaling type is manual scaling and this is where you specify the number of instances that continuously run regardless of the load so these are instances that are constantly running and this allows complex startup tasks on the instances to have already been completed when receiving requests and suits applications that rely on the state of the memory over time so this is ideal for instances whose configuration scripts require some time to fully run their course so now that i’ve gone over managing the instances i wanted to take a few moments to go over how app engine manages traffic starting with traffic migration now traffic migration switches the request routing between the versions within a service of your application moving traffic from one or more versions to a single new version so when deploying a new version with the same name as an existing version it causes an immediate traffic migration and all instances of the old version are immediately shut down in app engine standard you can choose to route requests to the target version either immediately or gradually you can also choose to enable warm-up requests if you want the traffic gradually migrated to a version gradual traffic migration is not supported in app engine flexible and traffic is migrated immediately now one thing to note is that when you immediately migrate traffic to a new version without any running instances then your application will have a spike in latency for loading requests while instances are being created and so another way to manage traffic on app engine is through traffic splitting now you can use traffic splitting to specify a percentage distribution of traffic across two or more of the versions within a service so in this example if i’m deploying a new version of my service i can decide on how i want to distribute traffic to each version of my application and so i decide that i want to keep my current version in play but roll out the new version of my application to 10 percent of my users leaving the old version with the remaining 90 percent of the traffic going to that version and so splitting traffic allows you to conduct a b testing between your versions and provides control over the pace when rolling out features and just as a note when you’ve specified two or more versions for splitting you must choose whether to split traffic by either ip address http cookie or do it randomly
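as a point of reference, both of these traffic controls can also be driven from the command line. this is only a minimal sketch, and the service and version names here (default, v1, v2) are placeholders rather than values from the course:

# route all traffic to a single version (add --migrate for a gradual migration on app engine standard)
gcloud app services set-traffic default --splits=v2=1

# split traffic 90/10 between two versions, choosing randomly per request
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1 --split-by=random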
now again this has not been a deep dive lesson on app engine but i hope this has given you an overview of the features that are available as the exam touches on these features i also wanted to give you some familiarity with the service itself as coming up next i will be going into a demo where we will be launching an application using app engine and trying on some of these features for yourself and so that’s pretty much all i wanted to cover when it comes to app engine so you can now mark this lesson as complete and whenever you’re ready join me in the console where you will deploy an application on app engine and try out some of these features for yourself [Music] welcome back and in this demo you’re going to build another application to deploy on app engine called serverless bowties this demo will run you through the ins and outs of deploying a website application on app engine along with managing it while experiencing no downtime so there’s quite a bit of work to do here so with that being said let’s dive in and so here in my console i am logged in as tonybowtieace gmail.com under project bowtie inc and so the first thing i want to do here is i want to head on over to app engine so in order to do that i’m going to go to the top left-hand navigation menu and i’m going to go down to app engine and because i haven’t created any applications i’m going to be brought to this splash page now in order to deploy this application we’re not going to be doing it through the console but we will be doing it through the command line and so to get started with that i’m going to go up to the top and open up cloud shell i’m going to make this bigger for better viewing and so in order for me to get the code to launch this application i’m going to be cloning my github repository into cloud shell and so for those of you who haven’t deleted your repository from the last demo you can go ahead and skip the cloning step for those of you who need to clone your repository you will find a link to the instructions in the lesson text and there you’ll be able to retrieve the command which will be git clone along with the address of the repo i’m going to hit enter and because i’ve already cloned this repo i’m receiving this error i’m going to do an ls and as you can see here the google cloud associate cloud engineer repo has already been cloned so i’m going to cd into that directory and in order to get the code i’m going to simply run the command git pull to get the latest and i’m going to simply clear my screen and so now that i’ve retrieved all the code that i need in order to deploy it i need to go to that directory and that directory is going to be 11 serverless services forward slash 0 1 serverless bowties and hit enter you’re going to run ls and here you will find two versions of the website application site v1 and site v2 along with the instructions if you want to follow straight from here and so i want to go ahead and deploy my first website application so i’m going to cd into site v1 ls and here you will see the app.yaml which is the configuration file that you will need in order to run the application on app engine and so before i go ahead and deploy this i wanted to take a moment to show you the application configuration so i’m going to go ahead and open it up in cloud shell editor so i’m going to type in edit app.yaml enter and as you can see here my runtime is python 3.7 and as you can see i have a default expiration of two seconds along with an expiration underneath each handler and this is due to the caching issue
that happens with app engine and so in order to simulate traffic splitting between the two website applications in order to make things easy i needed to expire the cache and this is an easy way to do it now there may be applications out there that do need that caching and so the expiration may be a lot higher but for the purposes of this demo two seconds expiration should suffice as well i’ll explain the two handlers here the first one showing the files that will be uploaded to the cloud storage bucket as well as the second stating what static files will be presented and so i’m going to go ahead back over to my terminal and i’m going to go ahead and clear my screen and i’m going to go ahead and run the command gcloud app deploy with the flag dash dash version and this is going to be version one so i’m going to go ahead and hit enter and you may get a pop-up asking you to authorize this api call using your credentials and you want to click on authorize and you’re going to be prompted to enter in a region that you want to deploy your website application to we want to keep this in us east one so i’m going to type in 15 hit enter and you’re going to be prompted to verify your configuration for your application before it’s deployed you’re also going to be prompted if you want to continue definitely yes so i’m going to hit y enter and so now as you’ve seen the files have been uploaded to cloud storage and app engine is going to take a few minutes to create the service along with the version so i’m going to let it do the needful and i’ll be back before you know it okay and my application has been deployed now although you don’t see it here in the console it has been deployed all i need to do is refresh my screen but i wanted to just point out a couple things that are shown here in the terminal the first one being the default service now the first time you deploy a version of your application it will always deploy to the default service initially and only then will you be able to deploy another named service to app engine now here where it says setting traffic split for service this is referring to the configuration for traffic splitting being applied in the background which i will be getting into a little bit later and lastly the url shown for the deployed service will always start with the name of your project followed by .ue.r.appspot.com which is why in production google recommends running app engine in a completely separate project but for this demo running it in the same project that we’ve been using will suffice okay so let’s go ahead and take a look at the application so i’m going to go back up to the top here to the navigation menu and i’m gonna go down to app engine and go over to services and so here you will see the default service with version one and if i go over to versions i will see here my version the status the traffic allocation along with any instances that it needs the run time the specific environment and i’ll have some diagnostic tools here that i could use and so because this is a static website application we won’t be using any instances and so this will always show a zero so now i want to head back on over to services and i’m going to launch my application by simply clicking on this hot link and success serverless bow ties for all and so it looks like my application has been successfully deployed so i’m going to close down this tab now there’s a couple of things that i wanted to run through here on the left hand menu just for your information so here i can click on instances and
if i was running any instances i am able to see a summary of those instances and i can click on the drop down here and choose a different metric and find out any information that i need as well i can click on this drop down and select a version if i had multiple versions which i do not clicking on task queues here is where i can manage my task queues but this is a legacy service that will soon be deprecated clicking on cron jobs here i can schedule any tasks that i need to run at a specific time on a recurring basis i can edit or add any firewall rules if i need to and as you can see the default firewall rule is open to the world now you probably noticed memcache as being one of the options here in the menu but this is a legacy service that will soon be deprecated memcache is a distributed in-memory data store that is bundled into the python to runtime acting as a cache for specific tasks and google recommends moving to memory store for redis if you’re planning on applying caching for your app engine application and so i’m not sure how much longer this will be here and lastly under settings here is where you can change your settings for your application i can add any custom domains any ssl certificates as well as setting up email for any applications that want to send email out to your users okay and now that we’ve done that walkthrough i want to go ahead and deploy my second version of the application and so i’m going to go ahead back down to cloud shell i’m going to quickly clear my screen and i want to move into the site v2 directory so i’m going to hit cd dot dot which will bring you back one directory you do an ls and i’m going to change directories into site v2 and do an ls just to verify and yes you will see serverless bow ties too i’m going to quickly clear my screen and i’m going to run the same command as before which is gcloud app deploy with the version flag dash dash version and instead of one i’m going to launch version 2. 
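as a quick aside, the deploy commands used in this demo boil down to the sketch below, run from the directory containing app.yaml. the --no-promote variant relates to the default traffic behavior described next:

# deploy and immediately route all traffic to the new version (the default behavior)
gcloud app deploy --version=2

# deploy without shifting any traffic to it, so the currently serving version keeps serving
gcloud app deploy --version=2 --no-promote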
so i’m going to hit enter i’m going to be prompted if i want to continue yes i do and as you can see the files have been uploaded to cloud storage for version 2 of the website application and app engine is going to take a few minutes to create the service along with the version so i’m going to let it cook here for a couple minutes and i’ll be back before you can say cat in the hat okay so version 2 has been deployed and so if i go up here to the console and i click on refresh you should see version 2 of your service and as you can see 100 percent of the traffic has been allocated to version 2 automatically and this is the default behavior for whenever you launch a new version of your service the only way to avoid this is to deploy your new version with the no-promote flag and so if i go back to services here on the left and i click on the default service you should see success for version two and so i know that my website application for version 2 has been deployed successfully so i’m going to close down this tab again and i’m going to go back to versions and so what i want to do now is i want to simulate an a b test or blue green deployment by migrating my traffic back to the old version in this case being version one so in production let’s say that you would release a new version and the version doesn’t go according to plan you can always go back to the previous version and app engine allows you to do that very easily and so i’m going to click on version 1 and i’m going to go up to the top menu and click on migrate traffic you’ll be prompted if you want to migrate traffic yes i do so i’m going to click on migrate and it should take a minute here and traffic should migrate over to version one and success traffic has been migrated and so we want to verify that this has happened i’m gonna go back to services i’m gonna click on the default service and yes the traffic has been allocated to version one okay so i’m going to shut down this tab i’m going to go back to versions and so now what i want to do is i want to simulate splitting the traffic between the two versions and so in order for you to do this you can go up to the top menu click on split traffic and you’ll be prompted with a new menu here and here i can choose from different versions and because i only have two versions i’m going to add version 2 and in order to allocate the traffic between the two i can either use this slider and as you can see the allocation percentage will change or i can simply just type it in and so i’m going to leave this at 50 percent so fifty percent of version one fifty percent of version two i’m going to split traffic randomly i’m gonna move this down just a little bit and so that’s exactly how you wanna allocate your traffic and so once you’ve completed that you can simply click on save it’s going to take a moment to update the settings and it’s been successful so if i head back on over to the previous page you can see here that traffic has been allocated to both versions and so now in order to verify this what you’re going to do is go over to services and click on the default hot link and you’ll see version one but if i continuously refresh my screen i can see that here i have version two so because it’s random i have a 50 percent chance of getting version 1 and a 50 percent chance of getting version 2.
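if you’d rather sanity check the split from cloud shell instead of refreshing the browser, something like the sketch below would work. the grep pattern is only an assumption about what text differs between the two pages, so adjust it to your own content:

# show the current traffic split for each version of the default service
gcloud app versions list --service=default

# hit the site a handful of times and see which version answers each request
for i in $(seq 1 10); do
  curl -s "https://${GOOGLE_CLOUD_PROJECT}.ue.r.appspot.com/" | grep -io 'version [12]'
done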
and so this is a simulation of splitting traffic to different versions and usually with a b testing only a small percentage of the traffic is routed to the new version until verification can be made that the new version deployed has indeed been successful and this can be done by receiving feedback from the users and so now i wanted to take a quick moment to congratulate you on making it through this demo and hope that it has been extremely useful in advancing your knowledge in deploying and managing applications on app engine so just as a recap you’ve cloned the repo to cloud shell you then deployed version one of your application into app engine you verified its launch and then you deployed version two of the application and verified its launch as well you then migrated traffic from version two over to version one and then you went ahead and split traffic between both versions and allotted 50 percent of the traffic allocation to each version and so now before you go i want to make sure that we clean up any resources that we’ve deployed so that we don’t incur any unnecessary costs and so the way to do this is very simple so first step you want to go over to the left hand menu and click on settings and simply click on disable application you’re going to be prompted to type in the app’s id for me it’s bowtie inc so i’m going to type that in and i’m going to click on disable now unfortunately with app engine you can’t actually delete the application it can only be disabled and so now here i’m going to hit the hot link to go over to the cloud storage bucket and as you can see here i have no files but i’m going to move back to my buckets and i’m going to move into the staging bucket which is your project id appended with .appspot.com and as you can see here there’s a whole bunch of different files as well if i drill down into the directory marked as ae for app engine i can see here that i have some more directories along with the manifest and so now if you want to keep your application in order to run it later you don’t need to delete this bucket but because i don’t need it i’m going to go ahead and delete the bucket hit delete paste in my bucket name hit delete as well under us.artifacts you will find a directory called containers and as explained in the last lesson cloud build builds a container for your application before deploying it to app engine so i’m going to drill down into images so here’s all the container digests and i don’t need any of these so i’m gonna go ahead and delete this bucket as well and so this is the last step in order to delete all the directories and files that we used to deploy our application in app engine okay and so i’m gonna head back on over to app engine and so now that cleanup has been taken care of that’s pretty much all i wanted to cover in this demo for deploying and managing applications on app engine so you can now mark this as complete and i’ll see you in the next one and again congrats on a job well done [Music] welcome back in this lesson i will be diving into another serverless product from google cloud by the name of cloud functions an extremely useful and advanced service that can be used with almost every service on the platform now there’s quite a bit to cover here so with that being said let’s dive in now cloud functions as i said before are a serverless execution environment and what i mean by this is like app engine there is no need to provision any servers or update vms as the infrastructure is all handled by google but unlike app engine you will
never see the servers so the provisioning of resources happens when the code is executed now cloud functions are a function as a service offering and this is where you upload code that is purposefully written in a supported programming language and when your code is triggered it is executed in a fully managed environment and your billed for when that code is executed cloud functions run in a runtime environment and support many different runtimes like python java node.js go and net core cloud functions are event driven so when something happens in your environment you can choose whether or not you’d like to respond to this event if you do then your code can be executed in response to the event these triggers can be one of a few different types such as http pub sub cloud storage and now firestore and firebase which are in beta and have yet to be seen in the exam cloud functions are priced according to how long your function runs and how many resources you provision for your function if your function makes an outbound network request there are also additional data transfer fees cloud functions also include a perpetual free tier which allows you 2 million invocations or executions of your function now cloud functions themselves are very simple but have a few steps to execute before actually running so i wanted to give you a walkthrough on exactly how cloud functions work now after selecting the name and region you want your function to live in you would then select the trigger you wish to use and you can choose from the many i listed earlier being http cloud storage pub sub cloud firestore and firebase a trigger is a declaration that you are interested in a certain event or set of events binding a function to a trigger allows you to capture and act on these events authentication configuration is the next step and can be selected with public access or configured through iam now there are some optional settings that can be configured where you would provide the amount of memory the function will need to run networking preferences and even selection for a service account now once all the settings have been solidified your written code can then be put into the function now the functions code supports a variety of languages as stated before like python java node.js or go now when writing your code there are two distinct types of cloud functions that you could use http functions and background functions with http functions you invoke them from standard http requests these http requests wait for the response and support handling of common http request methods like get put post delete and options when you use cloud functions a tls certificate is automatically provisioned for you so all http functions can be invoked via a secure connection now when it comes to background functions these are used to handle events from your gcp infrastructure such as messages on a pub sub topic or changes in a cloud storage bucket now once you have put all this together you are ready to deploy your code now there are two things that will happen when deploying your code the first one is the binding of your trigger to your function once you bind a trigger you cannot bind another one to the same function only one trigger can be bound to a function at a time now the second thing that will happen when you deploy your function’s source code to cloud functions is that source code is stored in a cloud storage bucket as a zip file cloud build then automatically builds your code into a container image that pushes that image to 
container registry cloud functions accesses this image when it needs to run the container to execute your function the process of building the image is entirely automatic and requires no manual intervention and so at this point of the process the building of your function is now complete now that the function has been created we now wait for an event to happen and events are things that happen within your cloud environment that you might want to take action on these might be changes to data in cloud sql files added to cloud storage or a new vm being created currently cloud functions supports events from the same services used for triggers that i have just mentioned including other google services like bigquery cloud sql and cloud spanner now when an event triggers the execution of your cloud function data associated with the event is passed via the function’s parameters the type of event determines the parameters that are passed to your function cloud functions handles incoming requests by assigning them to instances of your function now depending on the volume of requests as well as the number of existing function instances cloud functions may assign a request to an existing instance or create a new one so the cloud function will grab the image from container registry and hand off the image along with the event data to the instance for processing now each instance of a function handles only one concurrent request at a time this means that while your code is processing one request there is no possibility of a second request being routed to the same instance thus the original request can use the full amount of resources that you requested and this is the memory that you assign to your cloud function when deploying it now to allow google to automatically manage and scale the functions they must be stateless functions are not meant to be persistent nor is the data that is passed on to the function and so once the function has run and all data has been processed by the server it is then passed on to either a vpc or to the internet now by default functions have public internet access unless configured otherwise functions can also be private and used within your vpc but must be configured before deployment now there are so many use cases for cloud functions and there are many that have already been created by google for you to try out and can be located in the documentation that i’ve supplied in the lesson text below now the exam doesn’t go into too much depth on cloud functions but i did want to give you some exposure to this fantastic serverless product from google as it is so commonly used in many production environments as a simple and easy way to take in data process it and return a result from any event you are given and i have no doubt that once you get the hang of deploying them that you will be a huge fan of them as well and so that’s pretty much all i had to cover when it comes to cloud functions so you can now mark this lesson as complete and whenever you’re ready join me in the next one where we go hands-on in the console creating and deploying your very first function
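before the demo, here is a small sketch of what deploying a background function instead of an http one might look like. the function name, bucket, runtime and region are placeholders, and the source would need a handler that accepts the event and context arguments:

# deploy a function that fires whenever a new object is finalized in a cloud storage bucket
gcloud functions deploy on_new_upload \
  --runtime=python38 \
  --trigger-resource=my-upload-bucket \
  --trigger-event=google.storage.object.finalize \
  --region=us-east1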
welcome back and in this demo we will be diving into creating and deploying our very first cloud function we’re going to take a tour of all the options in the console but we’re going to do most of the work in cloud shell to get a good feel for doing it in the command line so with that being said let’s dive in and so i’m logged in here as tony bowties gmail.com and i’m in the project of bowtie inc and so the first thing i want to do is head on over to cloud functions in the console so i’m going to go up to the top left to the navigation menu and i’m going to scroll down to cloud functions and as you can see here cloud functions is getting ready and this is because we’ve never used it before and the api is being enabled okay and the api has been enabled and we can go ahead and start creating our function so you can go ahead and click create function and you will be prompted with some fields to fill out for the configuration of your cloud function and so under basics for function name i’m going to name this hello underscore world for region i’m going to select us east one and under trigger for trigger type we’re gonna keep this as http although if i click on the drop down menu you can see that i will have options for cloud pub sub cloud storage and the ones that i mentioned before that are in beta so we’re going to keep things as http and here under url is the url for the actual cloud function under authentication i have the option of choosing require authentication or allow unauthenticated invocations and as you can see this is clearly marked saying that check this if you are creating a public api or website which we are and so this is the authentication method that you want to select and so now that we have all the fields filled out for the basic configuration i’m going to go ahead and click on save and just to give you a quick run through of what else is available i’m going to click on the drop down here and this will give me access to variables networking and advanced settings the first field here memory allocated i can actually add more memory depending on what i am doing with my cloud function but i’m going to keep it as the default if you have a cloud function that runs a little bit longer and you need more time to run the cloud function you can add additional time for the timeout and as well i have the option of choosing a different service account for this cloud function and so moving on under environment variables you will see the options to add build environment variables along with runtime environment variables and the last option being connections here you can change the different networking settings for ingress and egress traffic under ingress settings i can allow all traffic which is the default i can allow internal traffic only as well i can allow internal traffic and traffic from cloud load balancing now as well when it comes to the egress settings as i said before by default your cloud function is able to send requests to the internet but not to resources in your vpc network and so this is where you would create a vpc connector to send requests from your cloud function to resources in your vpc so if i click on create a connector it’ll open up a new tab and bring me to vpc network to add serverless vpc access and so i don’t want to do that right now so i’m going to close down this tab and i’m going to go ahead and leave everything else as is and click on next and so now that the configuration is done i can dive right into the code and so google cloud gives you an inline editor right here along with the different runtime environments so if i click on the drop down menu you can see i have the options of .net core go java node.js and python 3.7 and 3.8 and so for this demo i’m going to keep it as node.js 10.
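going back to the connection settings from a moment ago, the same networking options can also be set as flags at deploy time. this is purely illustrative, the connector name is made up, and these flags may require a reasonably recent gcloud version:

# deploy an http function with restricted ingress and a vpc connector for egress
gcloud functions deploy hello_world \
  --runtime=nodejs10 \
  --trigger-http \
  --region=us-east1 \
  --ingress-settings=internal-only \
  --vpc-connector=my-connector \
  --egress-settings=private-ranges-only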
the entry point will be hello world and i’m going to keep the code exactly as is and this is a default cloud function that is packaged with any runtime whenever you create a function from the console and so if i had any different code i can change it here but i’m not going to do that i’m going to leave everything else as is and click on deploy and it’ll take a couple minutes here to create my cloud function and so i’m going to pause the video here for just a quick sec and i’ll be back in a flash okay and my cloud function has been deployed and i got a green check mark which means that i’m all good and so i want to dive right into it for just a second so i can get some more details here i have the metrics for my cloud function the invocations per second execution time memory utilization and active instances i have my versions up here at the top but since i only have one version only one version shows up if i click on details it’ll show me the general information along with the networking settings the source will show me the code for this cloud function as well as the variables the trigger permissions logs and testing and here i can write in some code and test the function and so in order for me to invoke this function i can simply go to trigger and it’ll show me the url but a quick way to do this through the command line is to simply open up cloud shell and make this a little bigger for better viewing and i’m going to paste in the command gcloud functions describe along with the function name which is hello underscore world along with the region flag dash dash region with the region that my cloud function has been deployed in which is us east one and i’m going to hit enter it’s going to ask me to authorize my api call yes i want to authorize it and this command should output some information on your screen and so what we’re looking for here is the http trigger which you will find here under https trigger and it is the same as what you see here in the console and so just know if you want to grab the http url trigger you can also do it from the command line and so i’m going to now trigger it by going to this url and you should see in the top left hand side of your screen hello world not as exciting as spinning bow ties but this example gives you an idea of what an http function can do and so i’m going to close down this tab and so now what i want to do is i want to deploy another function but i want to do it now through the command line and so i’m going to now quickly clear my screen and so since i’ve already uploaded the code to the repo i’m going to simply clone that repo and run it from here so i’m going to simply do a cd tilde to make sure i’m in my home directory for those of you who haven’t deleted the directory you can simply cd into it so i’m going to run cd google cloud associate cloud engineer hit enter and i’m going to run a get pull command and it pull down all the files that i needed i’m going to quickly clear my screen and so i’m going to change directories into the directory that has my code and so you’re going to find it under 11 serverless services under zero to you called hit enter and again i will have a link in the lesson text for the full instructions on this demo and it will list the directory where you can find this code okay so moving forward i’m going to run ls and you should see three files here main.py requirements.txt and the text file with the instructions and so now that i have everything in place in order to deploy my code i’m going to paste in the command to 
and so i’m going to close down this tab and so now what i want to do is i want to deploy another function but i want to do it now through the command line and so i’m going to now quickly clear my screen and so since i’ve already uploaded the code to the repo i’m going to simply clone that repo and run it from here so i’m going to simply do a cd tilde to make sure i’m in my home directory for those of you who haven’t deleted the directory you can simply cd into it so i’m going to run cd google cloud associate cloud engineer hit enter and i’m going to run a git pull command and it pulled down all the files that i needed i’m going to quickly clear my screen and so i’m going to change directories into the directory that has my code and so you’re going to find it under 11 serverless services under zero two you called hit enter and again i will have a link in the lesson text for the full instructions on this demo and it will list the directory where you can find this code okay so moving forward i’m going to run ls and you should see three files here main.py requirements.txt and the text file with the instructions and so now that i have everything in place in order to deploy my code i’m going to paste in the command to actually deploy my function which is gcloud functions deploy the name of the function which is you underscore called the flag for the runtime dash dash runtime and the runtime is going to be python 3.8 the flag for the trigger which is going to be http and because i’m a nice guy and i want everyone to have access to this i’m going to tag it with the flag dash dash allow unauthenticated so i’m going to hit enter okay and this function should take a couple minutes to deploy so i’m going to sit here and let it cook and i’ll be back before you can say cat in the hat okay and our function has been deployed i’m going to do a quick refresh here in the console and it deployed successfully as you can see the green check mark is here okay and so now that it’s been deployed we want to trigger our function and so because i just deployed this function the url trigger is conveniently located here in my screen so you can go ahead and click on it and hello lover of bow ties you called now although this may be similar to the hello world demo i did add a small feature that might spice things up and so if you go up to the url and you type in question mark name equals and your name and since my name is anthony i’m going to type in anthony hit enter and hello anthony you called and so this is a perfect example of the many different ways you can use functions and although i’ve only highlighted some very simple demonstrations there are many different ways that you can use functions such as running pipelines running batch jobs and even event driven security now although the exam doesn’t go into too much depth on cloud functions it’s always good to know its use cases and where its strengths lie for when you do decide to use it in your role as a cloud engineer now before you go be sure to delete all the resources you’ve created by deleting the functions and the storage buckets that house the code for the cloud functions and i will walk you through the steps right now okay so first i’m going to close down this tab and next you’re going to select all the functions and you’re going to simply click on delete you’re going to get a prompt to delete the functions you’re going to click on delete and it’s going to take a minute or two and the functions are deleted i’m going to close down my cloud shell and i’m going to head over to cloud storage and as you can see here both these buckets that start with gcf standing for google cloud functions can be safely deleted as inside them are the files that were used for the cloud function so i’m going to go back out i’m going to select both of these and i’m going to click on delete you get a prompt to delete two buckets you can simply type in delete and click on delete and the buckets have now been deleted and you’ve pretty much finished your cleanup and so just as a recap you created a default cloud function that was available from the console and then verified it by triggering the http url you then deployed another function from the command line by pulling the code from the repo and using it for deployment and then you verified that function by triggering it using the http url as well and then you modified the url for a different output great job on another successful demo so you can now mark this as complete and let’s move on to the next one [Music] welcome back in this lesson we’re going to dive into cloud storage the go to storage service from google cloud if you’re an engineer working in google cloud you’ve probably used this many times as a storage solution and if you
haven’t this is definitely a service that you will need to know for both the exam and your day-to-day role as a cloud engineer now there’s quite a bit to cover here so with that being said let’s dive in now cloud storage is a consistent scalable large capacity highly durable object storage service and this is unlimited storage for objects with no minimum object size but please remember that this is object storage and is not designed to store an operating system on but to store whole objects like pictures or videos cloud storage has worldwide accessibility and worldwide storage locations so anywhere that there is a region or zone cloud storage is available from there and can be accessed at any time through an internet connection cloud storage is great for storing data from data analytics jobs text files with code pictures of the latest fashion from paris and videos of your favorite house dj at the shelter cloud storage excels for content delivery big data sets and backups all of which are stored as objects in buckets and this is the heart of cloud storage that i will be diving into so starting with buckets these are the basic containers or constructs that hold your data everything that you store in cloud storage must be contained in a bucket you can use buckets to organize your data and control access to your data but unlike directories and folders you cannot nest buckets and i’ll get into that in just a minute now when you create a bucket you must specify a globally unique name as every bucket resides in a single cloud storage namespace as well as a name you must specify a geographic location where the bucket and its contents are stored and you have three available geography choices to choose from region dual region and multi-region and so just as a note choosing dual region and multi-region is considered geo-redundant for dual region geo-redundancy is achieved using a specific pair of regions for multi-region geo-redundancy is achieved using a continent that contains two or more geographic places basically the more regions your data is available in the greater your availability for that data after you’ve chosen a geographic location a default storage class must be chosen and this applies to objects added to the bucket that don’t have a storage class explicitly specified and i’ll be diving into storage classes in just a bit and so after you create a bucket you can still change its default storage class to any class supported in the bucket’s location with some stipulations you can only change the bucket name and location by deleting and recreating the bucket as well once dual region is selected it cannot be changed to multi-region and when selecting multi-region you will not be able to change the bucket to be dual region and lastly you will need to choose what level of access you want others to have on your bucket whether you want to apply permissions using uniform or fine grained access uniform bucket level access allows you to use iam alone to manage permissions iam applies permissions to all the objects contained inside the bucket or groups of objects with common name prefixes the fine grained option enables you to use iam and access control lists or acls together to manage permissions acls are a legacy access control system for cloud storage designed for interoperability with amazon s3 for those of you who use aws you can specify access and apply permissions at both the bucket level and per individual object and i will also be diving more into depth with access control in just a bit
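and just to tie those creation options together here's roughly what the same choices look like from the command line when creating a bucket with gsutil the bucket name here is just a made up example and the flags set the location the default storage class and uniform bucket level access

# sketch only - bucket name is an example and must be globally unique
gsutil mb -l us-east1 -c standard -b on gs://example-bowtie-bucket-2021

in the demo coming up we'll create the bucket through the console instead but it's useful to know both paths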
and just as a note labels are an optional item for bucket creation like every other resource creation process in gcp now that we’ve covered buckets i wanted to cover what is stored in those buckets which is objects and objects are the individual pieces of data or data chunks that you store in a cloud storage bucket and there is no limit on the number of objects that you can create in a bucket so you can think of objects kind of like files objects have two components object data and object metadata object data is typically a file that you want to store in cloud storage and in this case it is the picture of the plaid bow tie and object metadata is a collection of name value pairs that describe the various properties of that object an object’s name is treated as a piece of object metadata in cloud storage and must be unique within the bucket cloud storage uses a flat namespace to store objects which means that cloud storage isn’t a file system hierarchy but sees all objects in a given bucket as independent with no relationship towards each other for convenience tools such as the console and gsutil work with objects that use the slash character as if they were stored in a virtual hierarchy for example you can name one object slash bow ties slash spring 2021 slash plaid bowtie.jpg when using the cloud console you can then navigate to these objects as if they were in a hierarchical directory structure under the folders bow ties and spring 2021 now i mentioned before that part of the bucket creation is the selection of a storage class the storage class you set for an object affects the object’s availability and pricing model so when you create a bucket you can specify a default storage class for the bucket when you add objects to the bucket they inherit this storage class unless explicitly set otherwise now i wanted to touch on these four storage classes now to give you a better understanding of the differences between them the first one is standard storage and is considered best for hot data or frequently accessed data and is best for short-term use as it does not have any specified storage duration and this is excellent for use in analytical workloads and transcoding and the price for this storage class comes in at two cents per gigabyte per month next up is nearline storage and this is considered hot data as well and is a low-cost storage class for storing infrequently accessed data nearline storage has a slightly lower availability a 30-day minimum storage duration and comes with a cost for data access nearline storage is ideal if you’re looking to continuously add files but only plan to access them once a month and is perfect for data backup and data archiving the price for this storage class comes in at a penny per gigabyte per month now coldline storage is considered cold data as it enters into more of the longer term storage classes and is a very low cost storage class for storing infrequently accessed data it comes with slightly lower availability than nearline storage a 90-day minimum storage duration and comes with a cost for data access that is higher than the retrieval cost for nearline storage coldline storage is ideal for data you plan to read or modify at most once a quarter and is perfect for data backup and data archiving the price for this storage class comes in at less than half of a penny per gigabyte per month and finally archive storage is the lowest cost highly durable storage service for data archiving online backup and disaster recovery and even coming in at the lowest cost the data access is still available within milliseconds archive storage comes in at a higher cost for data retrieval as well as a 365-day minimum storage duration and is the best choice for data that you plan to access less than once a year archive storage also comes with the highest price for data retrieval and it is ideal for archive data storage that’s used for regulatory purposes or disaster recovery data in the event that there is an oopsies in your environment the price of the storage class comes in at a ridiculously low price per gigabyte per month at a fraction of a penny per gigabyte per month
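and as a quick aside while we're on storage classes an individual object doesn't have to inherit the bucket default you can set a class explicitly when you upload it or rewrite an existing object into a cheaper class later a rough sketch of both using made up bucket and file names would be

# upload an object with an explicit storage class instead of the bucket default
gsutil cp -s nearline backup.tar.gz gs://example-bowtie-bucket-2021
# move an existing object to coldline in place without downloading and re-uploading it
gsutil rewrite -s coldline gs://example-bowtie-bucket-2021/backup.tar.gz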
now when it comes to choosing your geographic location this will determine the availability of your data here as you can see the highest availability is the standard multi-region whereas archive has the lowest availability when stored in a regional setting now when it comes to the durability of your data meaning the measurement of how healthy and resilient your data is from data loss or data corruption google cloud boasts 11 nines of durability annually on all data stored in any storage class on cloud storage so know that your data is stored safely and will be there holding the same integrity from the day you stored it now when it comes to granting permissions to your cloud storage buckets and the objects within them there are four different options to choose from the first is iam permissions and these are the standard permissions that control all your other resources in google cloud and follow the same top-down hierarchy that we discussed earlier the next available option are access control lists or acls and these define who has access to your buckets and objects as well as what type of access they have and these can work in tandem with iam permissions moving on to signed urls these are time limited read or write access urls that can be created by you to give access to the object in question for the duration that you specify and lastly are signed policy documents and these are documents to specify what can be uploaded to a bucket and i will be going into each one of these in a bit of detail now cloud storage offers two systems for granting users permission to access your buckets and objects iam and access control lists these systems act in parallel in order for a user to access a cloud storage resource only one of the systems needs to grant the user permission iam is always the recommended method when it comes to giving access to buckets and the objects within those buckets granting roles at the bucket level does not affect any existing roles that you granted at the project level and vice versa giving you two levels of granularity to customize your permissions so for instance you can give a user permission to read objects in any bucket but permissions to create objects only in one specific bucket the roles that are available through iam are the primitive standard storage roles or the legacy roles which are equivalent to acls now acls are there if you need to customize access and really get granular with individual objects within a bucket and are used to define who has access to your buckets and objects as well as what level of access they have each acl consists of one or more entries and gives a specific user or group the ability to perform specific actions each entry consists of two pieces of information a permission which defines what actions can be performed and a scope which defines who can perform the specified actions now acls should be used with caution as iam roles and acls overlap cloud storage will
grant a broader permission so if you allow specific users access to an object in a bucket and then an acl is applied to that object to make it public then it will be publicly accessible so please be aware now a signed url is a url that provides limited permission and time to make a request sign urls contain authentication information allowing users without credentials to perform specific actions on a resource when you generate a signed url you specify a user or service account which must have sufficient permission to make the request that the sign url will make after you generate a signed url anyone who possesses it can use the sign url to perform specified actions such as reading an object within a specified period of time now if you want to provide public access to a user who doesn’t have an account you can provide a signed url to that user which gives the user read write or delete access to that resource for a limited time you specify an expiration date when you create the sign url so anyone who knows the url can access the resource until the expiration time for the url is reached or the key used to sign the url is rotated and the command to create the sign url is shown here and as you can see has been assigned for a limited time of 10 minutes so as you’ve seen when it comes to cloud storage there are so many configuration options to choose from and lots of different ways to store and give access and this makes this resource from google cloud such a flexible option and full of great potential for many different types of workloads this is also a service that comes up a lot in the exam as one of the many different storage options to choose from and so knowing the features storage classes pricing and access options will definitely give you a leg up when you are presented with questions regarding storage and so that’s pretty much all i wanted to cover when it comes to this overview on cloud storage so you can now mark this lesson as complete and let’s move on to the next one [Music] welcome back and in this lesson i will be covering object versioning and life cycle management a feature within cloud storage that is used to manage and sort through older files that need to be deleted along with files that are not in high need of regular access knowing the capabilities of these two features can really help organize accumulated objects in storage buckets and cut down on costs so without further ado let’s dive in now to understand a bit more about objects i wanted to dive into immutability and versioning now objects are immutable which means that an uploaded object cannot change throughout its storage lifetime an object’s storage lifetime is the time between a successful object creation or upload and successful object deletion this means that you cannot edit objects in place instead objects are always replaced with a new version so after the upload of the new object completes the new version of the object is served to readers this replacement marks the end of one object’s life cycle and the beginning of a new one now to support the retrieval of objects that are deleted or replaced cloud storage offers the object versioning feature object versioning retains a non-current object version when the live object version gets replaced or deleted enabling object versioning increases storage costs which can be partially mitigated by configuring object lifecycle management to delete older object versions but more on that in just a bit cloud storage uses two properties that together identify the version of 
an object the generation which identifies the version of the object’s data and the meta generation which identifies the version of the object’s metadata these properties are always present with every version of the object even if object versioning is not enabled these properties can be used to enforce ordering of updates so in order to enable object versioning you would do that by enabling it on a bucket once enabled older versions remain in your bucket when a replacement or deletion occurs so by default when you replace an object cloud storage deletes the old version and adds a new version these older versions retain the name of the object but are uniquely identified by their generation number when object versioning has created an older version of an object you can use the generation number to refer to the older version this allows you to restore a replaced object in your bucket or permanently delete older object versions that you no longer need and so touching back on cost for just a minute these versions can really add up and start costing you some serious money if you have thousands of files with hundreds of versions and this is where life cycle management comes into play now cloud storage offers the object lifecycle management feature in order to support some common use cases like setting a time to live or ttl for objects retaining non-current versions of objects or downgrading storage classes of objects to help manage costs now in order to apply this feature to your objects you would assign a lifecycle management configuration to a bucket the configuration contains a set of rules which apply to current and future objects in the bucket when an object meets the criteria of one of the rules cloud storage automatically performs the specified action on the object and so some example use cases are shown here so if you’re looking to downgrade the storage class of objects older than 365 days to coldline storage for compliance purposes along with saving money life cycle management is perfect for this another use case is when you want to delete objects created before january 1st of 2020 and this is another great use case to save money as well with keeping only the three most recent versions of each object in a bucket with versioning enabled to keep versioned objects from building up object lifecycle management has so many other use cases across a myriad of industries and when used correctly is a great way to achieve object management along with saving money now i wanted to take a moment to dive into the lifecycle management configuration each lifecycle management configuration contains a set of components these are a set of rules conditions and the action when the conditions are met a rule is any set of conditions for any action a condition is something an object must meet before the action defined in the rule occurs on the object and there are various conditions to choose from that allow you to get pretty granular and finally the action which is where you would have the option to delete or set storage class now when you delete current versions this will move the current version into a non-current state and when you delete a non-current version you will permanently delete the version and cannot get it back and so when you set the storage class it will transition the object to a different storage class so when defining a rule you can specify any set of conditions for any action if you specify multiple conditions in a rule an object has to match all of the conditions for the action to be taken
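and just to picture what one of these configurations looks like under the hood here's a rough sketch of a single rule with two conditions written in the json format that gsutil works with this particular rule would only fire on objects that are both older than 365 days and currently in standard storage since every condition in a rule has to match the numbers and classes here are just examples

{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 365, "matchesStorageClass": ["STANDARD"]}
    }
  ]
}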
so if you have three conditions and one of those conditions have not been met then the action will not take place if you specify multiple rules that contain the same action the action is taken when an object matches the conditions in any of these rules now if multiple rules have their conditions satisfied simultaneously for a single object cloud storage will either perform the delete action as it takes precedence over the set storage class action or the set storage class action that switches the object to the storage class with the lowest at rest storage pricing takes precedence so for example if you have one rule that deletes an object and another rule that changes the object storage class but both rules use the exact same condition the delete action always occurs when the condition is met or if you have one rule that changes the object storage class to near line storage and another rule that changes the object storage class to cold line storage but both rules use the exact same condition the object storage class always changes to cold line storage when the condition is met and so some considerations that i wanted to point out when it comes to cloud storage is that when it comes to object life cycle management changes are in accordance to object creation date as well once an object is deleted it cannot be undeleted so please be careful when permanently deleting a version as well life cycle rules can take up to 24 hours to take effect so be aware when setting them and always be sure to test these life cycle rules in development first before rolling them out into production and so that’s pretty much all i had to cover when it comes to versioning and object life cycle management and so you can now mark this lesson as complete and whenever you’re ready join me in the console where we go hands-on with versioning object life cycle management and cloud storage as a whole [Music] welcome back in this demo we’re going to cement the knowledge that we learned from the past couple lessons on cloud storage and really dive into the nitty gritty when it comes to the features and configuration you’re first going to create a cloud storage bucket and upload some files to it and then interact with the bucket and the files using the console as well you’re going to get your hands dirty using the gsutil command line tool and this is the tool for managing cloud storage from the command line now there’s quite a bit of work to do here so with that being said let’s dive in and so i am logged in here as tony bowties at gmail.com along with being in project bowtie inc and so the first thing i want to do is i want to create a cloud storage bucket so in order for me to do that i’m going to head over to the navigation menu and i’m going to scroll down to storage and here i already have a couple of buckets that i created from earlier lessons and you may have a couple buckets as well but you’re going to go ahead and create a new bucket by going up to the top here and click on create bucket now i know that we’ve gone through this before in previous lessons but this time i wanted to go through all the configuration options that are available and so the first thing that you’re prompted to do here is to name your bucket as explained in an earlier lesson it needs to be a globally unique name and so you can pick any name you choose and so for me i’m going to call this bucket bowtie inc dash 2021 i’m going to hit continue and if it wasn’t a globally unique name it would error out and you would have to enter in a new name but 
since this bucket name is globally unique i’m able to move forward for location type you can select from region dual region and multi region with multi region under location you can select from either the americas europe or asia pacific and under dual region you have the options of again choosing from america’s europe and asia pacific and you will be given the regions for each and so for this demo we’re going to go ahead and choose region and we’re going to keep the location as u.s east one and once you’ve selected that you can go ahead and hit continue and you’re going to be prompted to choose a default storage class and here you have the option of selecting from the four storage classes that we discussed in an earlier lesson and so for this demo you can keep it as standard and simply click on continue and so here you’re prompted to choose access control and because we’re going to be diving into acls you can keep this as the default fine grain access control you can go ahead and click continue and under encryption you can keep it as the default google manage key but know that you always have the option of choosing a customer manage key and once you’ve uploaded your customer manage key you can select it from here and because i have no customer managed keys no other keys show up so i’m going to click on google manage keys and here under retention policy i know i haven’t touched into that but just to give you some context when placing a retention policy on a bucket it ensures that all current and future objects in the bucket can’t be deleted or replaced until they reach the age that you define in the retention policy so if you try to delete or replace objects where the age is less than the retention period it will obviously fail and this is great for compliance purposes in areas where logs need to be audited by regulators every year or where government required retention periods apply as well with the retention policy you have the option of locking that retention policy and when you lock a retention policy on a bucket you prevent the policy from ever being removed or the retention period from ever being reduced and this feature is irreversible so please be aware if you’re ever experimenting with lock retention policies so if i set a retention policy here i can retain objects for a certain amount of seconds days months and years and for this demo we’re not going to set any retention policies so i’m going to check that off and i’m going to go ahead and add a label with the key being environment and the value being test and just as a note before you go ahead and click on create over on the right hand side you will see a monthly cost estimate and you will be given an estimate with storage and retrieval as well as how much it costs for operations your sla and your estimated monthly cost and so before creating any buckets you can always do a price check to see how much it’ll cost for storage size retrieval to get a good idea of how much it’ll cost you monthly okay so once you’re all done here you can simply click on create and it’ll go ahead and create your bucket and so now that your bucket is created we want to add some files and so we first want to go into copying files from an instance to your cloud storage bucket and so in order to do that we need to create an instance and so we’re gonna go back over to the navigation menu we’re gonna scroll down to compute engine and we’re gonna create our instance and for those who do not have your default vpc set up please be sure to create one before 
going ahead and creating your instance i’m going to go ahead and click on create i’m going to name this instance bowtie instance going to give it a label of environment test click on save the region is going to be east one and you can keep the default zone as us east 1b the machine type we’re going to change it to e2micro and you’re going to scroll down to access scopes and here your instance is going to need access to your cloud storage bucket and so it’s going to need cloud storage access so you’re going to click on set access for each api scroll down to storage and for this demo we’ll select full gonna leave everything else as the default and simply click on create and so we’ll give it a couple minutes here for instance to create okay and my instance has been created and so now i want to create some files and copy them over to cloud storage so i’m going to first navigate over to cloud storage and into my bucket and this way you can see the files that you upload and so next you’re going to open up cloud shell and make this a little bigger for better viewing and so now you’re going to ssh into your instance by using the command gcloud compute ssh along with
your instance name the zone flag dash dash zone with the zone of us east 1b i’m going to go ahead and hit enter and you may be prompted with a message asking to authorize this api call and you want to hit authorize and you’re going to be prompted to enter a passphrase for your key pair enter it in again and one more time and success we’re logged into the instance i’m going to quickly clear my screen and so i know i could have sshed into the instance from the compute engine console but i wanted to display both the console and the shell on the same screen to make viewing a bit easier as i add and remove files to and from the bucket okay and so now that you’re logged in you want to create your first file that you can copy over to your bucket so you can enter in the command sudo nano file a bow ties dot text hit enter and this will allow you to open up the nano editor to edit the file of bowties.txt and here you can enter in any message that you’d like for me i’m going to enter in learning to tie a bow tie takes time okay and i’m going to hit ctrl o to save hit enter to verify the file name to right and ctrl x to exit and so now i want to copy this file up to my bucket and so here is where i’m going to use the gsutil command so i’m going to type in gsutil cp for copy the name of the file which is file of bowties text along with gs colon forward slash forward slash and the name of your bucket which in my case is bow tie ink dash 2021 and this should copy my file file a bowties.txt up to my bucket of bow tie inc 2021 i’m gonna hit enter okay and it’s finished copying over and if i go up here to the top right and click on refresh i can see that my file successfully uploaded and this is a great and easy method to upload any files that you may have to cloud storage okay and so now that you’ve copied files from your instance to your bucket you’re going to now copy some files from the repo to be uploaded to cloud storage for our next step so you’re gonna go ahead and exit out of the instance by just simply typing in exit i’m gonna quickly clear the screen and so here i need to clone my repo if you already have clone the repo then you can skip this step i’m going to cd tilde to make sure i’m in my home directory i’m going to do an ls and so i can see here that i’ve already cloned my repo so i’m going to cd into that directory and i’m going to run the command git pull to get the latest files fantastic i’m going to now clear my screen and i’m going to cd back to my home directory and so now i want to copy up the files that i want to work with to my cloud storage bucket and they are two jpegs by the name of pink elephant-bowtie as well as plaid bowtie and these files can be found in the repo marked 12 storage services under zero one cloud storage management and i will be providing this in the lesson text as well as can be found in the instructions and so i’m going to simply cd into that directory by typing in cd google cloud associate cloud engineer 12 storage services and 0 1 cloud storage management i’m going to list all the files in the directory and as you can see here pink elephant dash bow tie and plaid bow tie are both here and so i’m going to quickly clear my screen and so now for me to copy these files i’m going to use the command gsutil cp for copy star.jpg which is all the jpegs that are available along with gs colon forward slash forward slash and the bucket name which is bow tie inc dash 2021 i’m going to hit enter and it says that it’s successfully copied the files i’m going to simply go up 
to the top right hand corner and do another refresh and success the files have been successfully uploaded another perfect example of copying files from another source to your bucket using the gsutil command line tool and so this is the end of part one of this demo it was getting a bit long so i decided to break it up and this would be a great opportunity for you to get up and have a stretch get yourself a coffee or tea and whenever you’re ready part two will be starting immediately from the end of part one so you can complete this video and i will see you in part two [Music] this is part two of the managing cloud storage access demo and we’ll be starting exactly where we left off in part 1. so with that being said let’s dive in and so now that we’ve uploaded all these files we next want to make this bucket publicly available now please know that leaving a bucket public is not common practice and should only be used on the rare occasion that you are hosting a static website from your bucket and should always be kept private whenever possible especially in a production environment so please note that this is only for the purposes of this demo and so i’m going to quickly show this to you in the console so i’m going to shut down the cloud shell for just a minute and i’m going to go to the top menu and click on permissions and under permissions i’m going to click on add here you can add new members and because you want to make it publicly available you want to use the all users member so you type in all and you should get a pop-up bringing up all users and all authenticated users you want to click on all users and the role that you want to select for this demo is going to be storage object viewer so i’m going to type in storage object viewer and here it should pop up and select that and then you can click on save you’re going to be prompted to make sure that this is what you want to do that you want to make this bucket public and so yes we do so you can simply click on allow public access and you will get a banner up here at the top saying that this bucket is public to internet and is a great fail safe to have in case you were to ever mistakenly make your bucket public and if i head back over to objects you can see that public access is available to all the files in the bucket and so just to verify this i’m going to copy the public url for pink elephant dash bowtie i’m going to open up a new tab paste in the url hit enter and as you can see i have public access to this picture and close this tab and so now that we’ve done our demo to make the bucket publicly accessible we should go ahead and remove public access so in order to remove public permissions i can simply go up to permissions and simply click on remove public permissions i’m going to get a prompt to make sure this is exactly what i want to do and yes it is so you can click on remove public permissions a very simple and elegant solution in order to remove public access from your bucket and if you go back to objects you’ll see that all the public access has been removed from all the files and so now that you’ve experienced how to add public access to a bucket i wanted to get a little bit more granular and so we’re going to go ahead and apply acl permissions for one specific object and because i like pink elephants let’s go ahead and select pink elephant dash bow tie and so here i can go up to the top menu and click on edit permissions and i’ll be prompted with a new window for permissions that are currently available for this object you 
can click on add entry click on the drop down and select public from the drop-down and it will automatically populate the name which is all users and the access which will be reader i’m going to go ahead and click on save and a public url will be generated and so just to verify this i’m going to click on the public url and success i now have public access to this picture yet once again i’m going to close down this tab and so now that you’ve configured this object for public access i want to show you how to remove public access using the command line this time so you’re going to go up to the top right hand corner and open up cloud shell i’m going to quickly clear my screen and i’m going to paste in the command here which is gsutil acl ch for change minus d which is delete the name of the user which is all users and if this was a regular user you could enter in their email address along with gs colon forward slash forward slash the bucket name which in my case is bow tie ink dash 2021 and the name of the file which is pink elephant bow tie dot jpeg i’m going to hit enter and it says that it’s been successfully updated and so if i go back up here to the console and i back out and go back into the file i can see here that the public url has been removed okay and now there’s one last step that we need to do before ending this demo and this is to create a signed url for the file so in order to create a signed url we first need to create a private key and so we’re gonna do this using a service account and so i’m gonna head on over to iam so i’m going to go up to the navigation menu i’m going to go to iam & admin and here with the menu on the left i’m going to click on service accounts here up at the top menu you’re going to click on create service account and under service account name you can enter in any name but for me i’m going to enter in signed url i’m going to leave everything else as is i’m going to simply click on create i’m going to close down cloud shell because i don’t really need it right now just select a role and i’m going to give it the role of storage object viewer i’m going to click on continue and i’m going to leave the rest blank and simply click on done and you should see a service account with the name of signed url and so in order to create a key i’m going to simply go over to actions and i’m going to click on the three dots and i’m going to select create key from the drop down menu and here i’m going to be prompted with what type of key that i want to create and you want to make sure that json is selected and simply click on create and this is where your key will be automatically downloaded to your downloads folder i’m going to click on close and so once you have your key downloaded you’re able to start the process of generating a signed url and so i’m going to go ahead and use cloud shell in order to generate this signed url so i’m going to go ahead back up to the top and open up cloud shell again and then you can open up the cloud shell editor going to go up to the top menu in editor and click on file and you’re going to select upload files and here’s where you upload your key from your downloads folder and i can see my key has been uploaded right here and you can rename your key file to something a little bit more human readable so i’m going to right click i’m going to click on rename and you can rename this file as privatekey.json hit ok and so once you have your key uploaded and renamed you can now go back into the terminal to generate a signed url
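and as a quick aside if you'd rather script this service account setup than click through the console the same steps map roughly to the commands below the project id here is a made up example so swap in your own

# create the service account grant it object viewer on the project and download a json key (example project id)
gcloud iam service-accounts create signed-url
gcloud projects add-iam-policy-binding example-project-id \
  --member="serviceAccount:signed-url@example-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
gcloud iam service-accounts keys create privatekey.json \
  --iam-account=signed-url@example-project-id.iam.gserviceaccount.com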
so i’m going to quickly clear the screen i’m going to make sure that the private key is in my path by typing in ls and as you can see here privatekey.json is indeed in my path and so before i generate this key i’m going to head back on over to cloud storage i’m going to drill down into my bucket and as you can see here pink elephant dash bow tie does not have a public url and so when the signed url is generated you will get a public url that will not be shown here in the console and will be private to only the user that generated it and the users that the url has been distributed to okay and once you have everything in place you can then go ahead and paste in the command gsutil signurl minus d the allotted time which is 10 minutes the private key which is privatekey.json along with gs colon forward slash forward slash your bucket name which in my case is bow tie ink dash 2021 along with the file name of pinkelephant-bowtie.jpg i’m going to hit enter and so i purposely left this error here so you can see that when you generate a signed url you need pyopenssl in order to generate it and so the caveat here is that because python 2 is being deprecated the command pip install pyopenssl will not work pyopenssl needs to be installed with python 3 and so to install it you’re going to run the command pip3 install pyopenssl and hit enter and so once it’s finished installing you can now generate your signed url i’m going to quickly clear my screen paste in the command again hit enter and success you’ve now generated a signed url for the object pink elephant bowtie.jpg and because this is a signed url you will see under public url there is no url there available even though it is publicly accessible and so just to verify this i’m going to highlight the link here i’m going to copy it i’m going to open up a new tab i’m going to paste in this url hit enter and success this signed url is working and anyone who has access to it has viewing permissions of the file for 10 minutes and so again this is a great method for giving someone access to an object who doesn’t have an account and will give them a limited time to view or edit this object and so i wanted to congratulate you on making it through this demo and hope that it has been extremely useful in excelling your knowledge on managing buckets files and access to the buckets and files in cloud storage and so just as a recap you created a cloud storage bucket you then created an instance and copied a file from that instance to the bucket you then cloned your repo to cloud shell and copied two jpeg files to your cloud storage bucket you then assigned and then removed public access to your bucket and then applied an acl to a file in the bucket making it public as well as removing public access right after you then created a service account private key and generated a signed url to an object in that bucket congratulations again on a job well done and so that’s pretty much all i wanted to cover in this demo on managing cloud storage access so you can now mark this as complete and let’s move on to the next one [Music] welcome back in this demo we’re going to be getting into the weeds with object versioning and life cycle management using both the console and the command line we’re going to go through how versioning works and what happens when objects get promoted along with creation configuration and editing these life cycle policies and so with that being said let’s dive in so we’re going to be starting off from where we left off in the last demo with all the resources
intact that we created before and we’re going to go ahead and dive right into versioning and so the first thing that you want to do is turn on versioning for your current bucket so in my case for bow tie ink dash 2021 and we’re going to do this through the command line so i’m going to first go up to the top right hand corner and open up cloud shell and so you first want to see if versioning is turned on for your bucket and you can do this by using the command gsutil versioning get along with gs colon forward slash forward slash with your bucket name and hit enter and you may be prompted with a message asking you to authorize this api call you definitely want to authorize and as expected versioning is not turned on on this bucket hence the return of suspended and so in order to turn versioning on we’re going to use a similar command gsutil versioning and instead of get we’re going to use set on gs colon forward slash forward slash and the bucket name and hit enter and versioning has been enabled and so if i run the command gsutil versioning get again i’ll get a response of enabled okay great now that we have versioning enabled we can go ahead with the next step which is to delete one of the files in the bucket and so you can go ahead and select plaid bowtie.jpg and simply click on delete you can confirm the deletion and the file has been deleted now technically the file has not been deleted it has merely been converted to a non-current version and so in order to check the current and non-current versions i’m going to use the command gsutil ls minus a along with the bucket name of g s colon forward slash forward slash bow tie inc dash 2021 i’m gonna hit enter and as you can see here plaid bow tie still shows up the ls minus a flag here lists all versions of the objects including the non-current ones similar to how the linux ls minus a command shows hidden files and so what’s different about these files is right after the dot txt or dot jpg you will see a hash symbol followed by a number and this is the generation number and this determines the version of each object and so what i want to do now is bring back the non-current version and make it current so i’m going to promote the non-current version of plaid bowtie.jpg to the current version and so in order to do this i’m going to run the command gsutil mv for move along with the bucket of gs colon forward slash forward slash bowtie inc hyphen 2021 and the name of the file of plaid bow tie dot jpeg along with the generation number and i’m going to copy it from the currently listed versions i’m going to paste it in and so now we need to put in the target which is going to be the same without the generation number and paste that in then hit enter okay operation completed and so if i go up to the top right hand corner and click on refresh i can see that now there is a current version for plaid bow tie now just know that using the move command actually deletes the non-current version and gives the new current version a new generation number and so in order to verify this i’m going to quickly clear my screen and i’m going to run the command gsutil ls minus a along with the bucket name of bow tie inc dash 2021 and the generation number here is different than that of the last now if i use the cp or copy command it would leave the non-current version and create a new version on top of that leaving two objects with two different generation numbers okay so with that step being done you now want to log into your linux instance and we’re going to be doing some versioning for file of bowties.txt so i’m going to go ahead and clear my screen
again and i’m going to run the command gcloud compute ssh bowtie instance which is the name of my instance along with the zone flag dash dash zone of the zone us east 1b i’m going to hit enter and you should be prompted for the passphrase of your key and i’m in and so here you want to edit file of bowties.txt to a different version so you can go ahead and run the command sudo nano file of bowties dot txt and hit enter and you should have learning to tie a bow tie takes time and what you want to do is append version 2 right at the end ctrl o to save enter to verify the file name to write and control x to exit and so now we want to copy file of bowties dot txt to your current bucket mine being bow tie ink dash 2021 so i’m going to go ahead and run the command gsutil cp the name of the file which is file of bowties dot txt and the target which is going to be bowtie inc 2021 and hit enter and it’s copied the file to the bucket and so if i hit refresh in the console you can see that there is only one version of file of bowties.txt and so to check on all the versions that i have i’m going to go back to my cloud shell i’m going to quickly clear my screen and i’m going to run the command gsutil ls minus a along with the target bucket hit enter and as you can see here there are now two versions of file of bowties.txt and if i quickly open this up i’m gonna click on the url you can see here that this is version two and so this should be the latest generation of file of bowties.txt that you edited over in your instance i’m going to close this tab now and so what i want to do now is i want to promote the non-current version to be the current version in essence making version 2 the non-current version and so i’m going to run the command gsutil cp and i’m going to take the older generation number and i’m going to copy it and paste it here and the target is going to be the same without the generation number and paste it and hit enter okay and the file has been copied over so i’m going to do a quick refresh in the console i’m going to drill down into file of bowties.txt and when i click on the url link it should come up as version 1.
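so just to put the two promotion patterns side by side here's a rough sketch using made up bucket and generation values where mv removes the non-current source version and cp leaves it in place

# list every version of the object including non-current ones
gsutil ls -a gs://example-bowtie-bucket-2021/file-of-bowties.txt
# promote with mv - the old non-current version is removed and becomes the new current version (generation number is a placeholder)
gsutil mv gs://example-bowtie-bucket-2021/file-of-bowties.txt#1614000000000000 gs://example-bowtie-bucket-2021/file-of-bowties.txt
# promote with cp - same result but the source non-current version stays behind as well
gsutil cp gs://example-bowtie-bucket-2021/file-of-bowties.txt#1614000000000000 gs://example-bowtie-bucket-2021/file-of-bowties.txt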
and so this is a way to promote non-current versions to current versions using the gsutil copy command or the gsutil move command i’m going to close down this tab now i’m going to quickly clear my screen and if i run the command gsutil ls minus a again you can see that i have even more files and so these files and versions of files will eventually accumulate and continuously take up space along with costing you money and so in order to mitigate this a good idea would be to put life cycle policies into place and so you’re gonna go ahead now and add a life cycle policy to the bucket and this will help manage the ever-growing accumulation of files as more files are being added to the bucket and more versions are being produced something that is very common that is seen in many different environments and so we’re going to go ahead and get this done in the console so i’m going to close down cloud shell and i’m going to go back to the main page of the bucket and under the menu you can click on lifecycle and here you’ll be able to add the lifecycle rules and so here you’re going to click on add a rule and the first thing that you’re prompted to do is to select an action and so the first rule you’re going to apply is to delete non-current objects after seven days so you’re gonna click on delete object you’re gonna be prompted with a warning gonna hit continue and you’ll be prompted to select object conditions and as discussed in an earlier lesson there are many conditions to choose from and multiple conditions can be selected so here you’re going to select days since becoming non-current and in the empty field you’re going to type in 7. you can click on continue and before you click on create i wanted just to note that any life cycle rule can take up to 24 hours to take effect so i’m going to click on create and here you can see the rule has been applied to delete objects after seven days when object becomes non-current and so now that we added a delete rule we’re going to go ahead and add another rule to move current files that are not being used to a storage class that can save the company money and so let’s go ahead and create another lifecycle rule but this time to use the set storage class action and so for the files that accumulate that have been there for over 90 days you want to set the storage class to coldline so this way it’ll save you some money and so you’re going to click on add a rule you’re going to select set storage class to coldline and as a note here it says archive objects will not be changed to coldline so you can move forward with the storage class but you can’t move backwards in other words i can’t move from coldline to nearline or from archive to coldline i can only move from nearline to coldline or coldline to archive so i’m going to go ahead and click continue for the object conditions you want to select age and in the field you want to enter 90 days and here you want to hit continue and finally click on create and so in order to actually see these rules take effect like i said before it’ll take up to 24 hours and so before we end this demo i wanted to show you another way to edit a life cycle policy by editing the json file itself so you can head on up to the top right and open up cloud shell i’m going to bring this down a little bit and you’re going to run the command gsutil lifecycle get along with the bucket name and output it to a file called lifecycle.json and hit enter and no errors so that’s a good sign
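and just so you know what to expect when you open that file given the two rules we just created in the console the json that comes back should look roughly like this although the exact field order and formatting may differ

{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"daysSinceNoncurrentTime": 7}},
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 90}}
  ]
}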
next i’m going to run the command ls and as you can see here the lifecycle.json file has been written and so i’d like to edit this file where it changes the set to coldline rule from 90 days to 120 days as tony bowtie’s manager thinks that they should keep the files a little bit longer before sending them to coldline and so in order to edit this file you’re going to run the command sudo nano along with the name of the file which is lifecycle.json and hit enter and it’s going to be a long string but if you use your arrow keys and move down and then back you’ll see the set to coldline rule with the age of 90 days so i’m going to move over here and i’m going to edit this to 120 and i’m going to hit ctrl o to save enter to verify the file name to write and ctrl x to exit and just know that you can also edit this file in cloud shell editor and so in order for me to put this lifecycle policy in place i need to set this as the new lifecycle policy and so in order for me to do that i’m going to run the command gsutil lifecycle set along with the name of the json file which is lifecycle.json along with the bucket name and hit enter and it looks like it set it and i’m going to do a quick refresh in the console just to verify and success the rule has been changed from 90 days to 120 days congratulations on completing this demo now a lot of what you’ve experienced here is more of what you will see in the architect exam as the cloud engineer exam focuses on more of the high level theory of these cloud storage features but i wanted to show you some real life scenarios and how to apply the theory that was shown in previous lessons into practice and so just as a recap you set versioning on the current bucket that you are working in and you deleted a file and made it non-current you then brought it back to be current again you then edited a file on your instance and copied it over to replace the current version of that file in your bucket you then promoted the non-current version as the new one and moved into lifecycle rules where you created two separate rules you created a rule to delete files along with a rule to set storage class after a certain age of the file and the last step you took was to copy the lifecycle policy to your cloud shell edit that policy and set the newer edited version on the bucket and so that pretty much covers this demo on object versioning and lifecycle management congratulations again on a job well done and so before you go make sure you delete all the resources you’ve created for the past couple of demos as you want to make sure that you’re not accumulating any unnecessary costs and so i’m going to do a quick run through on deleting these resources and so i’m going to quickly close down cloud shell and i’m going to head on over to the navigation menu go to compute engine i’m going to delete my instance and i’m going to head back on over to cloud storage and delete the bucket there i’m going to confirm the deletion i’m going to click on delete and so that covers the deletion of all the resources so you can now mark this as complete and i’ll see you in the next one welcome back and in this lesson i’m going to be covering cloud sql one of google cloud’s many database offerings that offers reliable secure and scalable sql databases without having to worry about the complexity to set it all up now there’s quite a bit to cover here so with that being said let’s dive in now cloud sql is a fully managed cloud native relational database service that offers mysql postgres and sql server engines with built-in support for replication
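and just to ground that in something concrete spinning up one of these managed instances from the command line is a short one off command along these lines where the instance name engine version tier and region are all example values

# sketch only - creates a small mysql instance (name tier and region are examples)
gcloud sql instances create bowtie-db \
  --database-version=MYSQL_8_0 \
  --tier=db-g1-small \
  --region=us-east1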
cloud sql is a database as a service offering from google where google takes care of all the underlying infrastructure for the database along with the operating system and the database software now because there are a few different types of database offerings from google cloud sql was designed for low latency transactional and relational database workloads it’s also available in three different flavors of databases mysql postgres and the newest addition is sql server and all of them support standard apis for connectivity cloud sql offers replication using different types of read replicas which i will get into a little bit later and offers capabilities for high availability for continuous access to your data cloud sql also offers backups in two different flavors and allows you to restore your database from these backups with the same amount of ease now along with your backups comes point in time recovery for when you want to restore a database from a specific point in time cloud sql storage relies on connected persistent disks in the same zone that are available in regular hard disk drives or ssds that currently give you up to 30 terabytes of storage capacity and because the same technologies lie in the background for persistent disks automatic storage increase is available to resize your disks for more storage cloud sql also offers encryption at rest and in transit for securing data entering and leaving your instance and when it comes to costs you are billed for cpu memory and storage of the instance along with egress traffic as well please be aware that there is a licensing cost when it comes to windows instances now cloud sql instances are not available in the same instance types as compute engine and are only available in the shared core standard and high memory cpu types and when you see them they will be clearly marked with a db on the beginning of the cpu type you cannot customize these instances like you can with compute engine and so memory will be pre-defined when choosing the instance type now storage types for cloud sql are only available in hard disk drives and ssds you are able to size them according to your needs and as stated earlier can be sized up to 30 terabytes in size and when entering the danger zone of having a full disk you do have the option of enabling automatic storage increase so you never have to worry about filling up your disk before that 30 terabyte limit now when it comes to connecting to your cloud sql instance you can configure it with a public or private ip but know that after configuring the instance with a private ip it cannot be changed although connecting with the private ip is preferred when connecting from a client on a resource with access to a vpc as well it is always best practice to use private ip addresses for any database in your environment whenever you can now moving on to authentication options the recommended method for connecting to your cloud sql instance is using cloud sql proxy the cloud sql proxy allows you to authorize and secure your connections using iam permissions unless using the cloud sql proxy connections to an instance’s public ip address are only allowed if the connection comes from an authorized network authorized networks are ip addresses or ranges that the user has specified as having permission to connect once you are authorized you can connect to your instance through external clients or applications and even other google cloud services like compute engine gke app engine cloud functions and cloud run
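and for reference the classic way to run the proxy from a client machine looks something like this where the instance connection name is your project region and instance id joined by colons all of the values here are examples

# run the cloud sql auth proxy locally and forward connections to the instance (connection name is an example)
./cloud_sql_proxy -instances=example-project-id:us-east1:bowtie-db=tcp:3306

your database client then connects to localhost 3306 and the proxy handles the secure tunnel to the instance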
now i wanted to focus a moment here on the recommended method for connecting to your instance which is the cloud sql proxy now as mentioned before the cloud sql proxy allows you to authorize and secure your connections using iam permissions the proxy validates connections using credentials for a user or service account and wraps the connection in an ssl tls layer that is authorized for a cloud sql instance using the cloud sql proxy is the recommended method for authenticating connections to a cloud sql instance as it is the most secure the client proxy is an open source library distributed as an executable binary and is available for linux macos and windows the client proxy acts as an intermediary server that listens for incoming connections wraps them in ssl or tls and then passes them to a cloud sql instance the cloud sql proxy handles authentication with cloud sql providing secure access to cloud sql instances without the need to manage allowed ip addresses or configure ssl connections as well this is also the best solution for applications that hold ephemeral ips and while the proxy can listen on any port it only creates outgoing connections to your cloud sql instance on port 3307 now when it comes to database replication it’s more than just copying your data from one database to another the primary reason for using replication is to scale the use of data in a database without degrading performance other reasons include migrating data between regions and platforms and from an on-premises database to cloud sql you could also promote a replica if the original instance becomes corrupted and i’ll be getting into promoting replicas a little bit later now when it comes to a cloud sql instance the instance that is replicated is called a primary instance and the copies are called read replicas the primary instance and read replicas all reside in cloud sql read replicas are read-only and you cannot write to them the read replica processes queries read requests and analytics traffic thus reducing the load on the primary instance read replicas can have more cpus and memory than the primary instance but they cannot have any less and you can have up to 10 read replicas per primary instance and you can connect to a replica directly using its connection name and ip address cloud sql supports the following types of replicas read replicas cross region read replicas external read replicas and cloud sql replicas when replicating from an external server now when it comes to read replicas you would use them to offload work from a cloud sql instance the read replica is an exact copy of the primary instance and data and other changes on the primary instance are updated in almost real time on the read replica a cross region read replica is created in a different region from the primary instance and you can create a cross region read replica the same way as you would create an in-region replica this improves read performance by making replicas available closer to your application’s region it also provides additional disaster recovery capability to guard you against a regional failure it also lets you migrate data from one region to another with minimum downtime and lastly when it comes to external read replicas these are external mysql instances that replicate from a cloud sql primary instance for example a mysql instance running on compute engine is considered an external instance and so just as a quick note here before you can create a read replica of a primary cloud sql instance the instance must meet the following requirements automated backups must be
enabled binary logging must be enabled which requires point-in-time recovery to be enabled and at least one backup must have been created after binary logging was enabled and so when you have read replicas in your environment it gives you the flexibility of promoting those replicas if needed now promoting replicas is a feature that can be used for when your primary database becomes corrupted or unreachable now you can promote an in-region read replica or cross-region read replica depending on where you have your read replicas hosted so when you promote a read replica the instance stops replication and converts the instance to a standalone cloud sql primary instance with read and write capabilities please note that this cannot be undone and also note that when your new primary instance has started your other read replicas are not transferred over from the old primary instance you will need to reconnect your other read replicas to your new primary instance and as you can see here promoting a replica is done manually and intentionally whereas high availability has a standby instance that automatically becomes the primary in case of a failure or zonal outage now when it comes to promoting cross-region replicas there are two common scenarios for promotion regional migration which performs a planned migration of a database to a different region and disaster recovery and this is where you would fail over a database to another region in the event that the primary instance’s region becomes unavailable both use cases involve setting up cross-region replication and then promoting the replica the main difference between them is whether the promotion of the replica is planned or unplanned now if you’re promoting your replicas for a regional migration you can use a cross region replica to migrate your database to another region with minimal downtime and this is so you can create a replica in another region wait until the replication catches up promote it and then direct your applications to the newly promoted instance the steps involved in promotion are the same as for promoting an in-region replica and so when you’re promoting replicas for disaster recovery cross-region replicas can be used as part of this disaster recovery procedure you can promote a cross-region replica to fail over to another region should the primary instance’s region become unavailable for an extended period of time so in this example the entire us-east1 region has gone down yet the read replica in the europe region is still up and running and although there may be a little bit more latency for your customers in north america i’m able to promote this read replica connect it to the needed resources and get back to business now moving along to high availability cloud sql offers ha capabilities out of the box the ha configuration sometimes called a cluster provides data redundancy so a cloud sql instance configured for ha is also called a regional instance and is located in a primary and secondary zone within the configured region within a regional instance the configuration is made up of a primary instance and a standby instance and through synchronous replication to each zone’s persistent disk all writes made to the primary instance are also made to the standby instance each second the primary instance writes to a system database as a heartbeat signal if multiple heartbeats aren’t detected failover is initiated and so if an ha-configured instance becomes unresponsive cloud sql automatically switches to serving data from the
standby instance and this is called a failover in this example the primary instance or zone fails and failover is initiated so if the primary instance is unresponsive for approximately 60 seconds or the zone containing the primary instance experiences an outage failover will initiate the standby instance immediately starts serving data upon reconnection through a shared static ip address with the primary instance and the standby instance now serves data from the secondary zone and now when the primary instance is available again a fail back will happen and this is when traffic will be redirected back to the primary instance and the standby instance will go back into standby mode as well the regional persistent disk will pick up replication to the persistent disk in that same zone and with regards to billing an ha configured instance is charged at double the price of a standalone instance and this includes cpu ram and storage also note that the standby instance cannot be used for read queries and this is where it differs from read replicas as well a very important note here is that automatic backups and point in time recovery must be enabled for high availability and so the last topic that i wanted to touch on is backups and backups help you restore lost data to your cloud sql instance you can also restore an instance that is having problems from a backup you enable backups for any instance that contains necessary data backups protect your data from loss or damage enabling automated backups along with binary logging is also required for some operations such as clone and replica creation by default cloud sql stores backup data in two regions for redundancy one region can be the same region that the instance is in and the other is a different region if there are two regions in a continent the backup data remains on the same continent cloud sql also lets you select a custom location for your backup data and this is great if you need to comply with data residency regulations for your business now cloud sql performs two types of backups on-demand backups and automated backups now with on-demand backups you can create a backup at any time and this is useful for when you’re making risky changes that may go sideways you can always create on-demand backups for any instance whether the instance has automatic backups enabled or not and these backups persist until you delete them or until their instance is deleted now when it comes to automated backups these use a four hour backup window these backups start during the backup window and just as a note when possible you should schedule your backups when your instance has the least activity automated backups occur every day when your instance is running at any time in the 36 hour window and by default up to seven most recent backups are retained you can also configure how many automated backups to retain from 1 to 365. 
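to tie the replication high availability and backup pieces together, here’s a rough sketch of the related commands — all of the instance and replica names here are placeholders, and the proxy invocation assumes the v1 cloud_sql_proxy binary, so check the current docs for your setup:

```
# create a cross-region read replica of an existing primary instance
gcloud sql instances create my-replica \
  --master-instance-name=my-instance --region=us-east1

# promote the replica to a standalone primary (cannot be undone)
gcloud sql instances promote-replica my-replica

# enable high availability on an existing instance
gcloud sql instances patch my-instance --availability-type=REGIONAL

# take an on-demand backup before a risky change
gcloud sql backups create --instance=my-instance

# run the cloud sql proxy locally, listening on 3306 and
# connecting out to the instance on port 3307
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306
```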
now i’ve touched on this topic many times in this lesson and i wanted to highlight it for just a second and this is point-in-time recovery so point-in-time recovery helps you recover an instance to a specific point in time for example if an error causes a loss of data you can recover a database to its state before the error happened a point in time recovery always creates a new instance and you cannot perform a point in time recovery to an existing instance and point in time recovery is enabled by default when you create a new cloud sql instance and so when it comes to billing by default cloud sql retains seven days of automated backups plus all on-demand backups for an instance and so i know there is a lot to retain in this lesson on cloud sql but be sure that knowing these concepts and the differences between them as well as when to use each feature will be a sure help in the exam along with giving you the knowledge you need to use cloud sql in your role as a cloud engineer and so that’s pretty much all i had to cover when it comes to cloud sql so you can now mark this lesson as complete and let’s move on to the next one welcome back and in this lesson i wanted to touch on google cloud’s global relational database called cloud spanner now cloud spanner is the same in some ways as cloud sql when it comes to acid transactions sql querying and strong consistency but differs in the way that data is handled under the hood and so knowing this database only at a high level is needed for the exam but i’ll be going into a bit more detail just to give you a better understanding of how it works so with that being said let’s dive in now cloud spanner is a fully managed relational database service that is both strongly consistent and horizontally scalable cloud spanner is another database as a service offering from google and so it strips away all the headaches of setting up and maintaining the infrastructure and software needed to run your database in the cloud now being strongly consistent in this context is when data will get passed on to all the replicas as soon as a write request comes to one of the replicas of the database cloud spanner uses truetime a highly available distributed atomic clock system that is provided to applications on all google servers it applies a timestamp to every transaction on commit and so transactions in other regions are always executed sequentially cloud spanner can distribute and manage data at a global scale and support globally consistent reads along with strongly consistent distributed transactions now being fully managed cloud spanner handles any replicas that are needed for availability of your data and optimizes performance by automatically sharding the data based on request load and size of the data part of the reason for cloud spanner’s high availability is its automatic synchronous data replication between all replicas in independent zones cloud spanner scales horizontally automatically within regions but it can also scale across regions for workloads that have higher availability requirements making data available faster to users at a global scale along with node redundancy automatically added for every node deployed in the instance and when you quickly add up all these features of cloud spanner it’s no wonder that it’s able to achieve five nines availability on a multi-regional instance and four nines availability on a regional instance cloud spanner is highly secure and offers data layer encryption audit logging and iam integration
cloud spanner was designed to fit the needs of specific industries such as financial services ad tech retail and global supply chain along with gaming and pricing for cloud spanner comes in at 90 cents per node per hour with the cost of storage coming in at 30 cents per gigabyte per month definitely not cheap but the features are plentiful now this isn’t in the exam but i did want to take a moment to dive into the architecture for a bit more context as to why this database is of a different breed than the typical sql database now to use cloud spanner you must first create a cloud spanner instance this instance is an allocation of resources that is used by cloud spanner databases created in that instance instance creation includes two important choices the instance configuration and the node count and these choices determine the location and the amount of the instance’s cpu memory and storage resources your configuration choice is permanent for an instance and only the node count can be changed later if needed an instance configuration defines the geographic placement and replication of the databases in that instance either regional or multi-region and please note that when you choose a multi-region configuration it allows you to replicate the database’s data not just in multiple zones but in multiple zones across multiple regions and when it comes to the node count this determines the number of nodes to allocate to that instance these nodes allocate the amount of cpu memory and storage needed for your instance to either increase throughput or storage capacity there are no instance types to choose from like there are with cloud sql and so when you need more power you simply add another node now for any regional configuration cloud spanner maintains exactly three read write replicas each within a different zone in that region each read write replica contains a full copy of your operational database that is able to serve read write and read only requests cloud spanner uses replicas in different zones so that if a single zone failure occurs your database remains available in a multi-region instance configuration the instance is allotted a combination of four read write and read only replicas and just as a note a three node configuration minimum is what is recommended for production by google and as cloud spanner gets populated with data sharding happens which is also known as a split and cloud spanner creates replicas of each database split to improve performance and availability all of the data in a split is physically stored together in a replica and cloud spanner serves each replica out of an independent failure zone and within each replica set one replica is elected to act as the leader leader replicas are responsible for handling writes while any read write or read only replica can serve a read request without communicating with the leader and so this is the inner workings of cloud spanner at a high level and not meant to confuse you but to give you a better context of how cloud spanner although it is a relational sql database is so different than its cloud sql cousin
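to make the instance configuration and node count choices a little more concrete, here’s a minimal sketch of spinning up a small spanner instance — the names and the regional config are made up for illustration, and a single node is fine for a test but remember google recommends at least three nodes for production:

```
# create a regional spanner instance with one node
gcloud spanner instances create my-spanner-instance \
  --config=regional-us-central1 \
  --nodes=1 \
  --description="demo instance"

# create a database inside that instance
gcloud spanner databases create my-database --instance=my-spanner-instance

# run a quick query to confirm everything is up
gcloud spanner databases execute-sql my-database \
  --instance=my-spanner-instance --sql="SELECT 1"
```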
now before ending this lesson i wanted to touch on node performance for a quick moment and so each cloud spanner node can provide up to 10 000 queries per second or qps of reads or 2000 qps of writes each node provides up to two terabytes of storage and so if you need to scale up the serving and storage resources in your instance you add more nodes to that instance and remember as noted earlier that adding a node does not increase the number of replicas but rather increases the resources each replica has in the instance adding nodes gives each replica more cpu and ram which increases the replica’s throughput and so if you’re looking to scale up automatically you can scale the number of nodes in your instance based on cloud monitoring metrics for cpu or storage utilization in conjunction with using cloud functions to trigger the scaling and so when you are deciding on a relational database that provides global distribution and horizontal scalability and handles transactional workloads in google cloud cloud spanner will always be the obvious choice over cloud sql and so that’s pretty much all i have to cover when it comes to this overview on cloud spanner so you can now mark this lesson as complete and let’s move on to the next one welcome back and in this lesson we will be going over the nosql databases available in google cloud this lesson is meant to be another overview just to familiarize you with the nosql database options as they show up in the exam this lesson is not meant to go in depth on databases but is an overview that will give you a good understanding of what features are available for each and their use cases so with that being said let’s dive in now there are four managed nosql databases available in google cloud and i will be briefly going over them and i’ll be starting this off by discussing bigtable now cloud bigtable is a fully managed wide column nosql database designed for terabyte and petabyte scale workloads that offers low latency and high throughput bigtable is built for real-time application serving workloads as well as large-scale analytical workloads cloud bigtable is a regional service and if using replication a copy is stored in a different zone or region for durability cloud bigtable is designed for storing very large amounts of single-keyed data while still being able to provide very low latency and because throughput scales linearly you can increase the queries per second by adding more bigtable nodes when you need them bigtable throughput can be dynamically adjusted by adding or removing cluster nodes without restarting meaning you can increase the size of a bigtable cluster for just a few hours to handle a large load and then reduce the cluster size again and do it all without any downtime bigtable is an ideal source for mapreduce operations and integrates easily with all the existing big data tools such as hadoop dataproc and dataflow along with apache hbase and when it comes to price bigtable is definitely no joke pricing for bigtable starts at 65 cents per hour per node or over 450 dollars a month for a one node configuration with no data now you can use bigtable to store and query all of the following types of data such as cpu and memory usage over time for multiple servers marketing data such as purchase histories and customer preferences financial data such as transaction histories stock prices and currency exchange rates iot data or internet of things data such as usage reports from energy meters and home appliances and lastly graph data such as information about how users are connected to one another cloud bigtable excels as a storage engine for batch mapreduce operations stream processing or analytics as well as being used as storage for machine learning applications
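here’s a rough sketch of standing up a small bigtable instance and writing a single-keyed row with the cbt tool — the instance cluster and table names are placeholders, and flag names can shift between gcloud releases so double check gcloud bigtable instances create --help before running it:

```
# create a bigtable instance with a single three-node cluster
gcloud bigtable instances create my-bt-instance \
  --display-name="demo instance" \
  --cluster=my-bt-cluster \
  --cluster-zone=us-central1-b \
  --cluster-num-nodes=3

# install and point the cbt cli at the new instance
gcloud components install cbt
echo -e "project = my-project\ninstance = my-bt-instance" > ~/.cbtrc

# create a table with a column family and write a row keyed by device id
cbt createtable device-metrics
cbt createfamily device-metrics stats
cbt set device-metrics device#001 stats:cpu=0.75
cbt read device-metrics
```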
now moving on to the next nosql database which is cloud datastore and cloud datastore is a highly scalable nosql document database built for automatic scaling high performance and ease of application development datastore is redundant within your location to minimize impact from points of failure and therefore can offer high availability of reads and writes cloud datastore can execute atomic transactions where a set of operations either all succeed or none occur cloud datastore uses a distributed architecture to automatically manage scaling so you never have to worry about scaling manually as well what’s very unique about cloud datastore is that it has a sql-like query language available called gql gql maps roughly to sql however a sql row column lookup is limited to a single value whereas in gql a property can be a multi-value property this consistency model allows an application to handle large amounts of data and users while still being able to deliver a great user experience data is automatically encrypted before it is written to disk and automatically decrypted when read by an authorized user now this does not reflect in the exam as of yet and i will be updating this lesson if and when it happens but firestore is the newest version of datastore and introduces several improvements over datastore existing datastore users can access these improvements by creating a new firestore database instance in datastore mode and in the near future all existing datastore databases will be automatically upgraded to firestore in datastore mode now moving right along cloud datastore holds a really cool feature for developers that’s called the datastore emulator and this provides local emulation of the production datastore environment that you can use to develop and test your application locally this is a component of the google cloud sdk’s gcloud tool and can be installed by using the gcloud components install command that we discussed earlier on in the course and so moving on to use cases for datastore it is ideal for applications that rely on highly available structured data at scale you can use datastore for things like product catalogs that provide real-time inventory and product details for a retailer user profiles that deliver a customized experience based on the user’s past activities and preferences as well as transactions based on acid properties for example transferring funds from one bank account to another next up we have firestore for firebase and so this is a flexible scalable nosql cloud database to store and sync data for client and server side development and is available with native c++ unity node.js java go and python sdks in addition to rest and rpc apis pretty much covering the gamut of most major programming languages now with cloud firestore you store data in documents that contain fields mapping to values these documents are stored in collections which are containers for your documents that you can use to organize your data and build queries documents support many different data types as well you can also create subcollections within documents and build hierarchical data structures cloud firestore is serverless with absolutely no servers to manage update or maintain and with automatic multi-region replication and strong consistency google is able to hold a five nines availability guarantee and so when it comes to querying in cloud firestore it is expressive efficient and flexible you can create shallow queries to retrieve data at the document level without needing to retrieve the entire collection or any nested subcollections cloud firestore uses data synchronization to update data in real time for any connected device as well it also caches data that
your application is actively using so that the application can write read listen to and query data even if the device is offline when the device comes back online cloud firestore synchronizes any local changes back to cloud firestore you can also secure your data in cloud firestore with firebase authentication and cloud firestore security rules for android ios and javascript or you can use iam for server side languages and when it comes to costs firestore falls into the always available free tier where you can use one database holding five gigabytes or if you need more you can move into their paid option now firebase also has another database sharing similar features like having no servers to deploy and maintain real-time updates along with a free tier and this database is called the realtime database and is used for more basic querying simple data structures and keeping things to one database it’s something i like to call firestore lite the realtime database does not show up in the exam but i wanted to bring it to light as it is part of the firebase family just know that you can use both databases within the same firebase application or project as both can store the same types of data client libraries work in a similar manner and both hold real-time updates now although firebase is a development platform and not a database service i wanted to give it a quick mention for those of you who are unfamiliar with the tie-in to firestore with firebase firebase is a mobile application development platform that provides tools and cloud services to help enable developers to develop applications faster and more easily and since it ties in nicely with firestore it becomes the perfect platform for mobile application development okay so moving on to our last nosql database which is memorystore and memorystore is a fully managed service from google cloud for either redis or memcached in-memory datastores to build application caches and this is a common service used in many production environments specifically when the need for caching arises memorystore automates the administration tasks for redis and memcached like enabling high availability failover patching and monitoring so you don’t have to and when it comes to memorystore for redis instances in the standard tier these are replicated across zones monitored for health and have fast automatic failover standard tier instances also provide an sla of three nines availability memorystore for redis also provides the ability to scale instance sizes seamlessly so that you can start small and increase the size of the instance as needed memorystore is protected from the internet using vpc networks and private ip and also comes with iam integration systems are monitored around the clock ensuring that your data is protected at all times and know that the versions are always kept up to date with the latest critical patches ensuring your instances are secure now when it comes to use cases of course the first thing you will see is caching and this is the main reason to use memorystore as it provides low latency access and high throughput for heavily accessed data compared to accessing the data from a disk common examples of caching are session management frequently accessed queries scripts or pages so when it comes to using memorystore for leaderboards and gaming this is a common use case in the gaming industry as well as using it for player profiles memorystore is also a perfect solution for stream processing combined with dataflow memorystore for redis provides a scalable fast in-memory store for storing intermediate data that thousands of clients can access with very low latency
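as a quick illustration, here’s a minimal sketch of creating a standard tier redis instance and finding its private ip — the instance name region and size are placeholders:

```
# create a 1 gb standard tier (highly available) redis instance
gcloud redis instances create my-cache \
  --size=1 \
  --region=us-central1 \
  --tier=standard_ha \
  --redis-version=redis_6_x

# grab the reserved private ip so clients in the same vpc can connect
gcloud redis instances describe my-cache --region=us-central1 \
  --format="value(host)"

# from a vm on that vpc, connect with the standard redis client on port 6379
# redis-cli -h <host-ip> -p 6379
```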
and so when it comes to nosql databases these are all the available options on google cloud and as i said before they will only show up on the exam at a high level and so knowing what each of these databases is used for will be a huge benefit along with being an entry point to diving deeper into possibly using these services within your day-to-day job as a cloud engineer and so that’s pretty much all i wanted to cover when it comes to the nosql databases available in google cloud so you can now mark this lesson as complete and let’s move on to the next one welcome back and in this lesson we’ll be going over the big data ecosystem in an overview just to familiarize you with the services that are available in google cloud and are the services that will show up in the exam this lesson is not meant to go in depth but is an overview that will give you a good understanding of what these services can do and how they all work together to make sense of big data as a whole so getting right into it i wanted to first ask the question what is big data i mean many people talk about it but what is it really well big data refers to massive amounts of data that would typically be too expensive to store manage and analyze using traditional database systems either relational or monolithic as the amount of data that we have been seeing over the past few years has started to increase these systems have become very inefficient because of their lack of flexibility for storing unstructured data such as images text or video as well as accommodating high velocity or real-time data or scaling to support very large petabyte scale data volumes for this reason the past few years have seen the mainstream adoption of new approaches to managing and processing big data including apache hadoop and nosql database systems however those options often prove to be complex to deploy manage and use in an on-premises situation now the ability to consistently get business value from data fast and efficiently is becoming the de facto standard for successful organizations across every industry the more data a company has access to the more business insights and business value they’re able to achieve like gaining useful insights increasing revenue getting or retaining customers and even improving operations and because machine learning models get more efficient as they are trained with more data machine learning and big data are highly complementary all in all big data brings some really great value to the table that is impossible for any organization to turn down and so now that we’ve gone through that overview of what big data is i wanted to dive into some shorter overviews of the services available for the big data ecosystem on google cloud and so the first service that i’d like to start with is bigquery now bigquery is a fully managed serverless data warehouse that enables scalable analysis over petabytes of data this service supports querying using sql and holds built-in machine learning capabilities you start by ingesting data into bigquery and then you are able to take advantage of all the power it provides so bigquery would ingest that data through a batch upload or by streaming it in real time and you can use any of the currently available google cloud services to load data into bigquery you can take a manual batch ingestion approach or stream it in using pub sub and with the bigquery data transfer service you can automatically transfer data
from external google data sources and partner saas applications to bigquery on a scheduled and fully managed basis and the best part is that batch loading and export are free bigquery’s high-speed streaming api provides an incredible foundation for real-time analytics making business data immediately available for analysis and you can also leverage pub sub and dataflow to stream data into bigquery bigquery transparently and automatically provides highly durable replicated storage in multiple locations for high availability as well as being able to achieve easy restores as bigquery keeps a seven day history of changes in case something were to go wrong bigquery supports standard sql querying which reduces the need for code rewrites you can simply use it as you would for querying any other sql compliant database and with dataproc and dataflow bigquery provides integration with the apache big data ecosystem allowing existing hadoop spark and beam workloads to read or write data directly from bigquery using the storage api
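to make the ingestion and querying flow a little more concrete, here’s a minimal sketch using the bq command line tool — the dataset table and bucket names are placeholders:

```
# create a dataset and load a csv file from cloud storage into a new table
bq mk --dataset my_dataset
bq load --source_format=CSV --autodetect \
  my_dataset.page_views gs://my-demo-bucket/page_views.csv

# run a standard sql query against the table
bq query --use_legacy_sql=false \
  'SELECT page, COUNT(*) AS views
   FROM my_dataset.page_views
   GROUP BY page
   ORDER BY views DESC
   LIMIT 10'
```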
bigquery also makes it very easy to access this data by using the cloud console using the bq command line tool or making calls to the bigquery rest api using a variety of client libraries such as java .net or python there are also a variety of third-party tools that you can use to interact with bigquery when visualizing the data or loading the data bigquery provides strong security and governance controls with fine-grained controls through integration with identity and access management bigquery gives you the option of geographic data control without the headaches of setting up and managing clusters and other computing resources in different zones and regions bigquery also provides fine grained identity and access management and rest assured that your data is always encrypted at rest and in transit now the way that bigquery calculates billing charges is by queries and by storage storing data in bigquery is comparable in price with storing data in cloud storage which makes it an easy decision for storing data in bigquery there is no upper limit to the amount of data that can be stored in bigquery and if tables are not edited for 90 days the price of storage for that table drops by 50 percent query costs are also available as on-demand and flat rate pricing and when it comes to on-demand pricing you are only charged for bytes read not bytes returned in the end bigquery scales seamlessly to store and analyze petabytes to exabytes of data with ease now there are so many more features to list but if you are interested feel free to dive into the other features with the supplied link in the lesson text now moving on to the next service which is pub sub and pub sub is a fully managed real-time messaging service that allows you to send and receive messages between independent applications it acts as messaging oriented middleware or event ingestion and delivery for streaming analytics pipelines and so a publisher application creates and sends messages to a topic subscriber applications create a subscription to a topic and receive messages from it and so i wanted to take a moment to show you exactly how it works so first the publisher creates messages and sends them to the messaging service on a specified topic a topic is a named entity that represents a feed of messages a publisher application creates a topic in the pub sub service and sends messages to that topic a message contains a payload and optional attributes that describe the content the service as a whole ensures that published messages are retained on behalf of subscriptions and so a published message is retained for a subscription in a message queue shown here as message storage until it is acknowledged by any subscriber consuming messages from that subscription pub sub then forwards messages from a topic to all of its subscriptions individually a subscriber then receives messages either by pub sub pushing them to the subscriber’s chosen endpoint or by the subscriber pulling them from the service the subscriber then sends an acknowledgement to the pub sub service for each received message the service then removes acknowledged messages from the subscription’s message queue and some of the use cases for pub sub are balancing large task queues distributing event notifications and real-time data streaming from various sources
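here’s a minimal sketch of that publish and subscribe flow using gcloud — the topic and subscription names are placeholders:

```
# create a topic and a pull subscription attached to it
gcloud pubsub topics create my-topic
gcloud pubsub subscriptions create my-sub --topic=my-topic

# publish a message to the topic
gcloud pubsub topics publish my-topic --message="hello from the publisher"

# pull the message from the subscription and acknowledge it
gcloud pubsub subscriptions pull my-sub --auto-ack --limit=5
```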
and so the next service that i wanted to get into is composer now composer is a managed workflow orchestration service that is built on apache airflow this is a workflow automation tool for developers that’s based on the open source apache airflow project similar to an on-premises deployment cloud composer deploys multiple components to run airflow in the cloud airflow is a platform created by the community to programmatically author schedule and monitor workflows the airflow scheduler as you see here executes the tasks on an array of workers while following the specified dependencies storing the data in a database and having a ui component for easy management now breaking down these workflows for just a sec in data analytics a workflow represents a series of tasks for ingesting transforming analyzing or utilizing data in airflow workflows are created using dags or directed acyclic graphs which are a collection of tasks that you want to schedule and run organized to ensure that each task is executed at the right time in the right order and with the right issue handling now in order to run these specialized workflows provisioned environments are needed and so composer deploys these self-contained environments on google kubernetes engine and they work with other google cloud services using connectors built into airflow the beauty of composer is that you can create one or more of these environments in a single google cloud project using any supported region without having to do all the heavy lifting of creating a full-blown apache airflow environment now when it comes to dataflow dataflow is a serverless fully managed processing service for executing apache beam pipelines for batch and real-time data streaming the apache beam sdk is an open source programming model that enables you to develop both batch and streaming pipelines using one of the apache beam sdks you build a program that defines the pipeline then one of apache beam’s supported distributed processing back-ends such as dataflow executes that pipeline the dataflow service then takes care of all the low-level details like coordinating individual workers sharding data sets auto scaling and exactly once processing now in its simplest form google cloud dataflow reads the data from a source transforms it and then writes the data back to a sink now getting a bit more granular with how this pipeline works dataflow reads the data presented from a data source once the data has been read it is put together into a collection of datasets called a pcollection and this allows the data to be read distributed and processed across multiple machines now at each step in which the data is transformed a new pcollection is created and once the final collection has been created it is written to a sink and this is the full pipeline of how data goes from source to sink this pipeline within dataflow is called a job and finally here is a high-level overview of what a dataflow job would look like when you involve other services within google cloud and put together an end-to-end solution from retrieving the data to visualizing it and finally when it comes to pricing dataflow jobs are billed in per second increments so you’re only charged for when you are processing your data now moving on to dataproc this is a fast and easy way to run spark hadoop hive or pig on google cloud in an on-premises environment it takes 5 to 30 minutes to create spark and hadoop clusters but dataproc clusters take 90 seconds or less on average to be built in google cloud dataproc has built-in integration with other google cloud platform services and you can use spark and hadoop clusters without any admin assistance so when you’re done with the cluster you can simply turn it off so you don’t spend money on an idle cluster as
well there’s no need to worry about data loss because dataproc is integrated with cloud storage bigquery and cloud bigtable the great thing about dataproc is you don’t need to learn new tools or apis to use it spark hadoop pig and hive are all supported and frequently updated and when it comes to pricing you are billed at one cent per vcpu in your cluster per hour on top of the other resources you use you also have the flexibility of using preemptible instances for even lower compute cost now although cloud dataproc and cloud dataflow can both be used to implement etl data warehousing solutions they each have their strengths and weaknesses and so i wanted to take a quick moment to point them out now with dataproc you can easily spin up clusters through the console the sdk or the api and turn them off when you don’t need them with dataflow it is serverless and fully managed so there are never any servers to worry about and when it comes to having any dependencies on tools in the hadoop or spark ecosystem dataproc would be the way to go but if you’re looking to make your jobs more portable across different execution engines apache beam allows you to do this and is only available on dataflow moving on to the next service which is cloud datalab now cloud datalab is an interactive developer tool created to explore analyze transform and visualize data and build machine learning models from your data datalab uses open source jupyter notebooks a well-known format used in the world of data science it runs on compute engine and connects to multiple cloud services easily so you can focus on your data science tasks it also integrates with all of the google services that help you simplify data processing like bigquery and cloud storage cloud datalab is packaged as a container and run in a vm instance cloud datalab uses notebooks instead of text files containing code notebooks bring together code documentation written as markdown and the results of code execution whether it’s text image or html or javascript like a code editor or ide notebooks help you write code and they allow you to execute code in an interactive and iterative manner rendering the results alongside the code cloud datalab notebooks can be stored in a google cloud source repository and this git repository is cloned onto the persistent disk attached to the vm now when it comes to prepping your data before consumption whether it be data cleansing prepping or alteration this is where dataprep hits it out of the park dataprep is a serverless intelligent data service for visually exploring cleaning and preparing structured and unstructured data for analysis reporting and machine learning it automatically detects schemas data types possible joins and anomalies such as missing values outliers and duplicates so you don’t have to the architecture that i’m about to show you is how dataprep shines the raw data that’s available from various different sources is ingested into cloud dataprep to clean and prepare the data dataprep then sends the data off to cloud dataflow to refine that data and it is then sent off to cloud storage or bigquery for storage before being analyzed by one of the many available bi tools
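pulling a couple of these services together, here’s a rough sketch of spinning up a dataproc cluster running a pyspark job from cloud storage and tearing the cluster back down — the cluster name region and script path are placeholders:

```
# create a small dataproc cluster
gcloud dataproc clusters create my-cluster \
  --region=us-central1 --num-workers=2

# submit a pyspark job stored in cloud storage
gcloud dataproc jobs submit pyspark gs://my-demo-bucket/wordcount.py \
  --cluster=my-cluster --region=us-central1

# delete the cluster when the job is done to avoid paying for idle nodes
gcloud dataproc clusters delete my-cluster --region=us-central1
```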
now these big data services are used by many data analysts in the field and it’s great to know which services can be used to help process the data needed for their specific jobs as well for the exam you only need to know these services at a high level and not in depth but if you’re interested in diving into any of these services to learn more about them i highly encourage you to dive in after the course and really take a look at them and that’s pretty much all i have to cover in this lesson on the services that are available for the big data ecosystem in google cloud so you can now mark this lesson as complete and let’s move on to the next one welcome back this lesson is going to be based on the foundations of machine learning i’m going to go over what machine learning is what it can do for us and the machine learning ecosystem on google cloud and hopefully answer any questions along the way this lesson will be a high level overview of the services available on google cloud yet these services are a need to know as they come up in the exam and hopefully this will give you some really cool ideas on the possibilities of building something truly fantastic on google cloud so what is machine learning well machine learning is functionality that helps enable software to perform tasks without any explicit programming or rules traditionally considered a subcategory of artificial intelligence machine learning involves statistical techniques such as deep learning also known as neural networks that are inspired by theories about how the human brain processes information it is trained to recognize patterns in collected data using algorithmic models and this collected data includes video images speech or text and because machine learning is very expensive to run on-premises the cloud is an efficient place for machine learning due to the use of massive computation at scale and as explained before machine learning is always better with big data so now i wanted to touch on what machine learning can do for us well it can categorize images such as photos faces or satellite imagery it can look for keywords in text documents or emails it can flag potentially fraudulent transactions when it comes to credit cards or debit cards it can enable software to respond accurately to voice commands it can also translate languages in text or audio and these are just some of the common functions that machine learning can do for us so getting into google’s machine learning platform itself machine learning has been a cornerstone of google’s internal systems for years primarily because of their need to automate data-driven systems on a massive scale and doing this has provided unique insight into the right techniques infrastructure and frameworks that help their customers get optimal value out of machine learning the open source framework originally developed for use inside of google called tensorflow is now the standard in the data science community in addition to heavily contributing to the academic and open source communities google’s machine learning researchers helped bring that functionality into google products such as g suite search and photos in addition to google’s internal operations when it comes to data center automation now here is an overview of all the machine learning services that we will be covering and that you will need to know only at a high level for the exam and we’ll start off with the sight api services starting with the vision api the vision api offers powerful pre-trained machine learning models that allow you to assign labels to images and quickly classify them into millions of pre-defined categories the vision api can read printed and handwritten text it can detect objects and faces and build metadata into an image catalog of your choice
now when it comes to video intelligence it has pre-trained machine learning models that automatically recognize more than 20 000 objects places and actions in stored and streaming video you can gain insights from video in near real time using the video intelligence streaming video apis and trigger events based on objects detected you can easily search a video catalog the same way you search text documents and extract metadata that can be used to index organize and search video content now moving on to the language apis we start off with the natural language api and this uses machine learning to reveal the structure and meaning of text you can extract information about people places and events and better understand social media sentiment and customer conversations natural language enables you to analyze text and also integrate it with your document storage on cloud storage now with the translation api it enables you to dynamically translate between languages using google’s pre-trained or custom machine learning models the translation api instantly translates text into more than 100 languages for your website and apps with optional customization features following another grouping of machine learning is the conversation apis first up we have dialogflow dialogflow is a natural language understanding platform that makes it easy to design and integrate a conversational user interface into your application or device it could be a mobile app a web application a bot or an interactive voice response system using dialogflow you can provide new and engaging ways for users to interact with your product dialogflow can analyze multiple types of input from your customers including text or audio inputs like from a phone or voice recording and it can also respond to your customers in a couple of ways either through text or with synthetic speech now with the speech-to-text api this api accurately converts speech into text it can transcribe content with accurate captions and deliver a better user experience in products through voice commands going the other way from text to speech this api enables developers to synthesize natural sounding speech with over a hundred different voices available in multiple languages and variants text to speech allows you to create lifelike interactions with your users across many applications and devices and to finish off our machine learning segment i wanted to touch on automl automl is a suite of machine learning products that enables developers with very limited machine learning expertise to train high quality models specific to their business needs in other words automl makes deep learning easier to use and relies on google’s state-of-the-art transfer learning and neural architecture search technology so you can generate high quality training data and be able to deploy new models based on your data in minutes automl is available for vision video intelligence translation natural language tables inference and recommendation apis
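as a quick taste of these pre-trained apis, here’s a minimal sketch of calling the vision and natural language apis straight from gcloud — the image path and sample text are placeholders:

```
# label detection on an image stored in cloud storage
gcloud ml vision detect-labels gs://my-demo-bucket/photo.jpg

# sentiment analysis on a short piece of text
gcloud ml language analyze-sentiment \
  --content="the cloud engineer exam was a great experience"
```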
now i know this has been a lot to cover for this machine learning lesson and the ecosystem around it but it is a necessity for the exam and will also help you build really cool products when it comes to your role as an engineer again all the services that i have discussed in this lesson should be known at a high level only although my recommendation would be to dive deeper into these services by checking out the links in the lesson text below and having some fun with these products getting to know these services a little bit more in depth will really help up your game and give you more momentum when it comes to building any applications or applying them to any currently running applications i personally found it extremely valuable and it really cemented my knowledge when it came to machine learning i also had a ton of fun doing it and so that’s all i have for this lesson on machine learning so you can now mark this lesson as complete and let’s move on to the next one welcome back and in this lesson we’ll be diving into a suite of tools used on the google cloud platform that allow you to operate monitor and troubleshoot your environment known as the operations suite and previously known as stackdriver this lesson will be mostly conceptual and geared more towards what the suite of tools does as it plays a big part not only in the exam but for the needs of gaining insight from all the resources that exist in your environment now there are a few tools to cover here so with that being said let’s dive in now the operations suite is a suite of tools for logging monitoring and application diagnostics the operations suite ingests this data and generates insights using dashboards charts and alerts this suite of tools is available for both gcp and aws you can connect to aws using an aws role and a gcp service account you can also monitor vms with specific agents that again both run on gcp for compute engine and aws ec2 the operations suite also allows the added functionality of monitoring any applications that are running on those vms the operations suite is also available for any on-premises infrastructure or hybrid cloud environments the operations suite has native integration within gcp out of the box so there’s no real configuration that you need to do and it integrates with almost all the resources on google cloud such as the previously mentioned compute engine gke app engine and bigquery and you can find and fix issues faster due to the many different tools and the operations suite can reduce downtime with real-time alerting you can also find support from a growing partner ecosystem of technology integration tools to expand your operations security and compliance capabilities now the operations suite comprises six available products that cover the gamut of all the available tools you will need to monitor troubleshoot and improve application performance in your google cloud environment and i will be going over these products in a bit of detail starting with monitoring now cloud monitoring collects measurements or metrics to help you understand how your applications and system services are performing giving you information about the source of the measurements time stamped values and information about those values that can be broken down through time series data cloud monitoring can then take the data provided and use pre-defined dashboards that require no setup or configuration effort cloud monitoring also gives you the flexibility to create custom dashboards that display the content you select you can use the widgets available or you can install a dashboard configuration that is stored in github now in order for you to start using cloud monitoring you need to configure a workspace now workspaces organize monitoring information in cloud monitoring this is a single pane of glass where you can view everything that you’re monitoring in your environment it is also best practice to use a multi-project workspace so you can monitor multiple projects from a single pane of glass now as i mentioned earlier cloud monitoring has an agent and this
gathers system and application metrics from your vms and sends them to cloud monitoring you can monitor your vms without the agent but you will only get specific metrics such as cpu disk traffic network traffic and uptime using the agent is optional but is recommended by google and with the agent it allows you to monitor many third-party applications and just as a note cloud logging has an agent as well and works well together with cloud monitoring to create visualize and alert on metrics based on log data but more on that a little bit later cloud monitoring is also available for gke and this will allow you to monitor your clusters as it manages the monitoring and logging together and this will monitor the cluster’s infrastructure its workloads and services as well as your nodes pods and containers so when it comes to alerting this is defined by policies and conditions so an alerting policy defines the conditions under which a service is considered unhealthy when these conditions are met the policy is triggered and it opens a new incident and sends off a notification a policy belongs to an individual workspace and each workspace can contain up to 500 policies now conditions determine when an alerting policy is triggered so all conditions watch for three separate things the first is a metric the second is a behavior of some kind and the third is a period of time describing a condition includes the metric to be measured and a test for determining when that metric reaches a state that you want to know about so when an alert is triggered you can be notified using notification channels such as email and sms as well as third party tools such as pagerduty and slack now moving on to cloud logging cloud logging is a central repository for log data from multiple sources and as described earlier logging can come not just from google cloud but from aws as well as on-premises environments cloud logging handles real-time log management and analysis and has tight integration with cloud monitoring it collects platform system and application logs and you also have the option of exporting logs to other sources such as long-term storage like cloud storage or for analysis like bigquery you can also export to third-party tools as well now diving into the concepts of cloud logging these are associated primarily with gcp projects so the logs viewer only shows logs from one specific project now when it comes to log entries a log entry records a status or an event a project receives log entries when services being used produce log entries and to get down to the basics logs are a named collection of log entries within a google cloud resource and just as a note each log entry includes the name of its log logs only exist if they have log entries and the retention period is the length of time for which your logs are kept so digging into the types of logs that cloud logging handles there are three different types of logs there are audit logs access transparency logs and agent logs now with audit logs these are logs that define who did what where and when they also show admin activity and data access as well as system events continuing on to access transparency logs these are logs for actions taken by google so when google staff are accessing your data due to a support ticket the actions that are taken by the google staff are logged within cloud logging now when it comes to agent logs these are the logs that come from agents that are installed on vms the logging agent sends system and third-party logs on the vm instance to cloud logging
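just to show what working with these logs looks like from the command line, here’s a minimal sketch using gcloud — the filter here is an illustrative example, not a required format:

```
# list the logs available in the current project
gcloud logging logs list

# read recent error-level entries from compute engine instances
gcloud logging read \
  'resource.type="gce_instance" AND severity>=ERROR' \
  --limit=10 --freshness=1d
```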
Logging. Moving on to Error Reporting: this looks at real-time error monitoring and alerting. It counts, analyzes, and aggregates the errors that happen in your GCP environment and then alerts you when a new application error occurs. Details of the error can be sent through the API, and notifications are still in beta. Error Reporting is integrated into Cloud Functions and Google App Engine standard, where it is enabled automatically; it is in beta for Compute Engine, Kubernetes Engine, and App Engine flexible, as well as AWS EC2. Error Reporting can be set up in a variety of languages, such as Go, Java, .NET, Node.js, Python, PHP, and Ruby.

Moving on to Debugger: this tool debugs a running application without slowing it down. It captures and inspects the call stack and local variables in your application, which is also known as taking a snapshot. Once the snapshot has been taken, a logpoint can be injected to allow you to start debugging. Debugger can be used with or without access to your application source code, and if your repo is not local, it can be hooked into a remote Git repo such as GitHub, GitLab, or Bitbucket. Debugger is integrated with Google App Engine automatically and can be installed on Google Compute Engine, Google Kubernetes Engine, Google App Engine, and Cloud Run; just as a note, installation on these products all depends on the library, and like Trace, Debugger can also be installed in non-GCP environments using a variety of different languages.

Next up is Trace. Trace helps you understand how long it takes your application to handle incoming requests from users and applications. It collects latency data from App Engine, HTTPS load balancers, and applications using the Trace API. It is also integrated with Google App Engine standard and is applied automatically, so you would use Trace for something like a website that is taking forever to load, to troubleshoot that specific issue. Trace can be installed on Google Compute Engine, Google Kubernetes Engine, and Google App Engine as well; it can also be installed in non-GCP environments and in a variety of different languages.

Coming up on the last tool of the bunch is Profiler. Profiler continuously gathers CPU usage and memory allocation information from your applications, and this helps you discover patterns of resource consumption so you can troubleshoot more effectively. Profiler is lightweight and therefore won't take up a lot of memory or CPU on your system. In order to use Profiler, an agent needs to be installed. Profiler can be installed on Compute Engine, Kubernetes Engine, and App Engine, and of course it can be installed in non-GCP environments; it supports the following languages: Go, Java, Node.js, and Python. And just as a note, for the exam only a high-level overview of these tools is needed. So this concludes this lesson on a high-level overview of Operations Suite. You can now mark this lesson as complete, and let's move on to the next one.
The text is a comprehensive guide to database design and management using PostgreSQL. It begins with the fundamentals of table and column creation, emphasizing the importance of data types and primary keys. The author demonstrates how to build relationships between tables using foreign keys, reducing data redundancy. The guide advances to more complex topics like creating custom data types, indexes, and views. Finally, the author explains the creation and use of functions, stored procedures, triggers, and cursors with real-world examples, providing a hands-on approach to database manipulation.
Database Design and Management Study Guide
Quiz
Answer each question in 2-3 sentences.
What does the NOT NULL constraint do when defining a column in a database table?
Why is it important to choose the correct data type for a column when creating a database table? Give an example of what could go wrong if you choose the wrong type.
What is the purpose of a primary key in a database table, and how does the SERIAL data type help in creating one?
Describe the difference between CHARACTER and VARCHAR data types in the context of storing strings, and when might you choose one over the other?
What is a foreign key, and why is it important for relating data between different tables in a relational database?
Explain the difference between an INNER JOIN and a LEFT JOIN.
What does the CREATE INDEX command do, and why is indexing important for database performance?
What is the purpose of the GROUP BY clause in a SELECT statement?
What is a database “view”, and what are some of the benefits of using them?
Explain how a database trigger works.
Quiz Answer Key
The NOT NULL constraint ensures that a column cannot contain a null value, meaning a value must be provided for that column when a new row is inserted or when an existing row is updated. This helps maintain data integrity by preventing incomplete or missing data in critical fields.
Choosing the correct data type is crucial for efficient storage and accurate data representation and retrieval. For example, using a SMALLINT for zip codes in the United States is problematic, because its maximum value is insufficient to store all possible zip codes.
A primary key uniquely identifies each row in a table and ensures data integrity and the SERIAL data type automates the generation of unique integer values, simplifying the creation of primary keys.
CHARACTER(n) stores fixed-length strings, padding shorter strings with spaces, while VARCHAR(n) stores variable-length strings up to a maximum length of ‘n’. VARCHAR is generally preferred as it uses space more efficiently by only storing the characters present in the string.
A foreign key establishes a link between rows in two tables. It ensures referential integrity by enforcing that values in the foreign key column must exist as values in the primary key column of the related table, maintaining consistency across tables.
An INNER JOIN returns only the rows where there is a match in both tables based on the join condition. A LEFT JOIN returns all rows from the left table plus the matching rows from the right table; where there is no match, the columns from the right table are NULL.
The CREATE INDEX command creates an index on one or more columns of a table. Indexing improves the speed of data retrieval operations, especially SELECT statements with WHERE clauses, by allowing the database to quickly locate specific rows without scanning the entire table.
The GROUP BY clause groups rows that have the same values in specified columns into summary rows. It’s often used with aggregate functions (e.g., SUM, AVG, COUNT) to calculate statistics for each group.
A database view is a virtual table based on the result set of a stored query. Views can simplify complex queries, provide a level of data abstraction, and enforce security by restricting access to certain data through the view definition.
A database trigger is a stored procedure that automatically executes in response to certain events on a particular table, such as inserts, updates, or deletes. Triggers are useful for enforcing data integrity, auditing changes, and implementing complex business rules.
Essay Questions
Discuss the importance of normalization in database design. Explain the first three normal forms (1NF, 2NF, 3NF) and provide examples of how to normalize a poorly designed table.
Describe the different types of relationships that can exist between tables in a relational database (one-to-one, one-to-many, many-to-many). Explain how to implement each type of relationship using primary keys and foreign keys.
Explain the concept of database transactions and the ACID properties (Atomicity, Consistency, Isolation, Durability). Provide examples of how transactions are used to ensure data integrity in concurrent environments.
Discuss the various data types available in PostgreSQL and explain when to use each data type.
Discuss the process of creating database functions and triggers. Discuss the differences between SQL functions and PGSQL functions, and why you might choose one over the other.
Glossary of Key Terms
Data Type: A classification that specifies the type of value a column can hold (e.g., integer, string, date).
Constraint: A rule enforced on data columns to maintain data integrity and accuracy (e.g., NOT NULL, UNIQUE, PRIMARY KEY, FOREIGN KEY).
Primary Key: A column (or set of columns) that uniquely identifies each row in a table.
Foreign Key: A column in one table that refers to the primary key of another table, establishing a link between the two tables.
SERIAL: A PostgreSQL data type that automatically generates unique, sequential integer values, often used for primary keys.
VARCHAR: A variable-length character string data type, allowing strings of varying lengths up to a specified maximum.
CHARACTER: A fixed-length character string data type, padding shorter strings with spaces to reach the specified length.
Normalization: The process of organizing data in a database to reduce redundancy and improve data integrity.
Join: An operation that combines rows from two or more tables based on a related column.
Index: A data structure that improves the speed of data retrieval operations on a table.
GROUP BY: A SQL clause that groups rows with the same values in specified columns.
View: A virtual table based on the result set of a stored query.
Transaction: A sequence of database operations treated as a single logical unit of work.
ACID Properties: A set of properties that guarantee reliable processing of database transactions (Atomicity, Consistency, Isolation, Durability).
Trigger: A stored procedure that automatically executes in response to certain events on a particular table.
SQL Function: Function written in standard SQL.
PGSQL Function: A function written in PL/pgSQL, PostgreSQL's procedural language, which is heavily influenced by Oracle's PL/SQL.
Cursor: A database object that allows you to retrieve data from a result set one row at a time.
Loop: A programming construct that repeats a block of code until a certain condition is met.
Array: A data structure that stores a collection of elements of the same type.
Do Block: An anonymous block of code (a DO statement) that can be executed on its own without creating a function.
Trigger Function: The database procedure that is executed in response to a triggering event.
Trigger Event: An event on a table that causes a trigger function to execute (e.g., INSERT, UPDATE, DELETE).
PostgreSQL Database Tutorial: A Practical Guide
Okay, here’s a briefing document summarizing the key themes and ideas from the provided source, which appears to be a transcript or notes from a tutorial about working with PostgreSQL databases.
Briefing Document: PostgreSQL Database Tutorial
Source: Excerpts from a PostgreSQL database tutorial transcript.
Overall Theme: This document provides a hands-on guide to fundamental PostgreSQL database operations, covering table creation, data types, data insertion, querying, functions, triggers, cursors, and stored procedures. It emphasizes practical examples and step-by-step instructions.
Key Concepts and Ideas:
Table Creation and Data Types:
Creating tables with specific columns and data types.
Emphasis on choosing appropriate data types for different kinds of data (e.g., VARCHAR, CHARACTER, DATE, TIMESTAMP, SERIAL, INTEGER, TEXT).
Use of NOT NULL constraints to enforce required fields.
Definition of primary keys (SERIAL for auto-incrementing IDs).
Example: “we are not going to allow them to leave this piece of data empty they have to give us a first name they also have to give us a last name”
Explanation of character data types: fixed-length (CHARACTER) versus variable-length (VARCHAR). VARCHAR is recommended for most string data to prevent wasted space.
Detailed explanation of date and time data types, including different formats and time zone handling.
Data Manipulation (INSERT, SELECT, UPDATE, DELETE):
Inserting data into tables using INSERT INTO statements.
Retrieving data using SELECT statements with WHERE clauses for filtering.
The asterisk * is used as shorthand to select all columns in the SELECT statement.
Example: “anytime you want to go and get information from a table you say select the star represents anything customer represents the customer table”
Updating data using UPDATE statements.
Deleting data using DELETE or TRUNCATE statements.
Using ALTER TABLE to modify table structure (add/drop columns, change data types).
Renaming tables and columns using ALTER TABLE RENAME.
Querying and Filtering Data:
Using WHERE clauses to filter data based on conditions.
Using comparison operators (=, >, <, >=, <=, !=) and logical operators (AND, OR, NOT).
Using ORDER BY to sort results in ascending (ASC) or descending (DESC) order.
Using LIMIT to restrict the number of rows returned.
Using the CONCAT function to combine the first name and last name into one field named “Name” as an alias.
Using the DISTINCT keyword to eliminate duplicate values.
Using the LIKE operator for simple wildcard pattern matching and the SIMILAR TO operator for regular-expression-style patterns.
Example: “we want to return all customers whose first name begins with a D or whose last name begins uh let’s make it even more complicated whose last name ends with an N”
Aggregate Functions and Grouping:
Using aggregate functions (COUNT, SUM, AVG, MIN, MAX) to perform calculations on data.
Using GROUP BY to group rows based on one or more columns, allowing aggregate functions to be applied to each group.
Using HAVING to filter groups based on conditions.
Joining Tables:
Using JOIN clauses to combine data from multiple tables based on related columns.
Different types of joins: INNER JOIN, LEFT JOIN, RIGHT JOIN, FULL OUTER JOIN, CROSS JOIN.
Explanation of foreign keys: “whenever we are using IDs from other tables these are going their primary Keys down here but up here they are actually referred to as a foreign key”
Using UNION to combine the results of multiple SELECT statements.
Views:
Creating views as stored queries that can be used like tables.
Example: “a view is extremely useful they are basically select statements that uh that’s result is stored in your database”
Functions (SQL and PL/pgSQL):
Creating custom functions using SQL and PL/pgSQL.
SQL functions are simpler and use SQL statements.
PL/pgSQL functions offer more flexibility and control flow (loops, conditionals).
Using CREATE OR REPLACE FUNCTION to define functions.
Specifying input parameters and return data types.
Using dollar quoting ($$) to define function bodies.
Using DECLARE to define variables within PL/pgSQL functions.
Assigning values to variables using the := operator.
Using control flow statements (IF, ELSEIF, ELSE, CASE).
Looping constructs (LOOP, FOR, WHILE).
Using the RETURN statement to return values from functions.
Functions for specific database actions, such as checking whether a salesperson has a state assigned and, if not, changing it to Pennsylvania.
Stored Procedures:
Similar to functions but cannot return values (INOUT parameters serve as a workaround); they can run transactions and are executed with the CALL command.
Triggers:
Creating triggers that automatically execute functions in response to database events (e.g., INSERT, UPDATE, DELETE).
Accessing old and new values using OLD and NEW keywords.
Demonstration of a trigger that logs distributor name changes.
Cursors:
Declaring, opening, fetching from, and closing cursors to iterate over result sets.
The tutorial works through examples that print a list of employee names as well as customer names depending on where the person is located.
Quotes:
“the number of characters remember this is just a string like in other languages not null means that if they decide they want to create a new customer we are not going to allow them to leave this piece of data empty they have to give us a first name they also have to give us a last name”
“anytime you want to go and get information from a table you say select the star represents anything customer represents the customer table”
“whenever we are using IDs from other tables these are going their primary Keys down here but up here they are actually referred to as a foreign key”
“a view is extremely useful they are basically select statements that uh that’s result is stored in your database”
“we want to return all customers whose first name begins with a D or whose last name begins uh let’s make it even more complicated whose last name ends with an N”
Target Audience: This tutorial appears to be aimed at beginners or those with some database experience who want to learn the fundamentals of working with PostgreSQL. The step-by-step instructions and practical examples make it suitable for self-paced learning.
Potential Uses: This briefing document can be used to:
Quickly understand the scope and content of the PostgreSQL tutorial.
Identify key concepts and techniques covered in the tutorial.
Serve as a reference guide to PostgreSQL syntax and features.
Help learners prioritize topics and focus their learning efforts.
PostgreSQL Database Management: FAQ
FAQ on PostgreSQL Database Management
Here are some frequently asked questions about creating and managing databases, tables, and data within the PostgreSQL environment, based on the provided source material.
Questions
1. What are the basic data types available in PostgreSQL and how do I define them when creating a table column?
PostgreSQL offers various data types, including character types (like CHARACTER(5) for a fixed-length string and VARCHAR(number) for variable-length strings), numeric types (like INTEGER, SMALLINT), date/time types (DATE, TIME, TIMESTAMP, INTERVAL), boolean (BOOLEAN), currency, binary, JSON, range, geometric, arrays, XML, UUIDs and custom types. When defining a column, you specify the data type and any length constraints. For example: first_name VARCHAR(30) NOT NULL. NOT NULL ensures the field cannot be left empty.
2. What does “NOT NULL” mean when defining a column in a table?
NOT NULL is a constraint that ensures a column in a table cannot contain a null value. This means that a value must be provided for that column when inserting or updating data. This constraint helps maintain data integrity and prevents missing or undefined values in critical fields.
3. How do I automatically generate unique IDs for each record in a table?
Use the SERIAL data type for the ID column. SERIAL creates an auto-incrementing integer. Designate the column as the PRIMARY KEY to ensure uniqueness and create an index on the ID column to boost performance. Example: id SERIAL PRIMARY KEY.
4. How can I insert data into a table and retrieve it?
Use the INSERT INTO statement followed by the table name and column list, then the VALUES keyword and a list of values corresponding to the columns. Example: INSERT INTO customer (first_name, last_name) VALUES ('Christopher', 'Jones');. To retrieve data, use the SELECT statement. For example: SELECT * FROM customer ORDER BY id ASC; fetches all columns from the customer table, ordered by the id column in ascending order.
5. What are foreign keys and how are they used to relate tables to each other?
A foreign key is a column (or set of columns) in one table that refers to the primary key of another table. It establishes a link between the two tables. To define a foreign key, use the REFERENCES keyword followed by the referenced table and column. Example: product_type_id INTEGER REFERENCES product_type(id). It’s vital to use an INTEGER type, not SERIAL, because PostgreSQL assigns serial values automatically which we do not want in this case.
6. How can I modify an existing table structure, such as adding, renaming, or dropping a column?
Use the ALTER TABLE statement. To add a column: ALTER TABLE sales_item ADD COLUMN weekday VARCHAR(30);. To rename a column: ALTER TABLE sales_item RENAME COLUMN day_of_week TO weekday;. To drop a column: ALTER TABLE sales_item DROP COLUMN weekday;. To modify a column’s NOT NULL constraint: ALTER TABLE sales_item ALTER COLUMN day_of_week SET NOT NULL;. To change the column to integer: ALTER TABLE customer ALTER COLUMN zip TYPE integer;.
7. How do I perform conditional queries using logical operators and the WHERE clause?
The WHERE clause filters the results based on specified conditions. You can use logical operators like AND, OR, and NOT to combine multiple conditions, and comparison operators like =, >, <, >=, <=, and != (or <>) to compare values. Example: SELECT * FROM sales_order WHERE time_order_taken > '2018-12-01' AND time_order_taken < '2018-12-31'; (selects orders placed in December 2018). You can use ORDER BY to sort results; DESC puts data in descending order. LIMIT restricts the number of rows returned.
8. How can I combine data from multiple tables using JOINs?
JOINs combine rows from two or more tables based on a related column.
INNER JOIN: Returns rows only when there is a match in both tables based on the join condition. Example: SELECT item.id, item.price FROM item INNER JOIN sales_item ON sales_item.item_id = item.id;
LEFT OUTER JOIN (or LEFT JOIN): Returns all rows from the left table and the matching rows from the right table. If there is no match, the right side will contain NULL values.
CROSS JOIN: Includes data from each row in both tables.
Equi-joins check for equality between common columns. UNION is used to combine the results of two or more SELECT statements; the number of columns and the data type of each column must match for it to work correctly.
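For a concrete picture, here is a rough sketch of an INNER JOIN versus a LEFT JOIN against the customer and sales_order tables used throughout this tutorial; the exact column names (such as customer_id on sales_order) are assumptions based on the schema described above.
-- Customers together with the orders they have placed (customers with no orders are dropped)
SELECT c.first_name, c.last_name, so.id AS order_id
FROM customer c
INNER JOIN sales_order so ON so.customer_id = c.id;
-- Every customer, with NULL order columns for customers who have never ordered
SELECT c.first_name, c.last_name, so.id AS order_id
FROM customer c
LEFT JOIN sales_order so ON so.customer_id = c.id;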
Postgres Data Types Quick Reference
Postgres offers various data types, which can be grouped into categories such as character types, numeric types, boolean types, and date/time types. Additionally, custom data types can be created.
Character types:
Character(length): Stores a fixed-length string of exactly the specified length, padding shorter values with spaces.
Variable Character(length): Stores a string of characters up to the specified length. This is commonly used.
Text: Stores any length of characters.
Numeric types:
Serial: An auto-incrementing integer, commonly used for primary key identification. There are variations such as small serial and big serial.
Integer: Stores signed whole numbers.
Float: Numbers with decimals.
Decimal(precision, scale): Specifies the number of digits (precision) and the number of digits after the decimal point (scale).
Numeric: Can store real values, with options for specifying precision.
Boolean types:
Boolean: Can store true, false, or null values. True can be represented as 1, ‘t’, ‘y’, ‘yes’, or ‘on’, while false can be represented as 0, ‘f’, ‘n’, ‘no’, or ‘off’. It is recommended to use true and false.
Date/time types:
Date: Stores year, month, and day. Dates are stored as year-month-day, regardless of the input format.
Time: Stores time values, with or without a time zone.
Timestamp: Stores date and time information.
Interval: Represents a duration of time, which can be added to or subtracted from date/times.
Other data types:
Currency
Binary
JSON
Range
Geometric
Arrays
XML
UUIDs
It is important to choose the correct data type when creating tables. For example, zip codes should be stored as integers rather than small integers, as small integers have a maximum value that is too small to accommodate all U.S. zip codes.
Custom data types can also be created. For example, an enumerated type called SexType can be created to limit the values in a column to ‘M’ or ‘F’. The syntax is CREATE TYPE type_name AS ENUM (value1, value2, …).
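As a short sketch of both steps, here is the enumerated type described above together with switching an existing column over to it, assuming the customer table defined later in this guide with its single-character sex column:
CREATE TYPE sex_type AS ENUM ('M', 'F');
-- Convert the existing single-character column to the new enumerated type
ALTER TABLE customer
ALTER COLUMN sex TYPE sex_type USING sex::sex_type;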
Postgres Table Creation and Modification Guide
When creating tables in Postgres, it’s important to consider how the tables will represent real-world objects or groups, and how different tables will relate to each other. The goal is to reduce redundant data.
To create a table, you can use a query tool such as PG admin.
General steps for creating tables:
Open the query tool in PG Admin.
Use the CREATE TABLE statement, followed by the table name and column definitions.
Each column definition includes the column name, data type, and any constraints.
Specify a primary key to uniquely identify entities in the table.
Referencing data in other tables requires the use of foreign keys. A foreign key is used to identify a row in another table. When creating a foreign key, it should be assigned an integer type, and it should reference the table and column it refers to using the REFERENCES keyword. For example, to reference the ID column in a table called product_type, you would use type_id INTEGER REFERENCES product_type(ID).
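A minimal sketch of that pattern using the product_type and product tables described later in this guide (the exact column list is an assumption):
CREATE TABLE product_type (
id SERIAL PRIMARY KEY,
name VARCHAR(30) NOT NULL
);
CREATE TABLE product (
id SERIAL PRIMARY KEY,
type_id INTEGER REFERENCES product_type(id),  -- foreign key back to product_type
name VARCHAR(30) NOT NULL,
supplier VARCHAR(30) NOT NULL,
description TEXT
);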
Column definitions and data types:
column_name DATA_TYPE [constraints]
first_name VARCHAR(30) NOT NULL
Constraints:
NOT NULL indicates that a column cannot be left empty when a new row is created.
PRIMARY KEY indicates the column is a unique identifier for the table.
SERIAL is used to auto-increment the primary key.
Example SQL code for creating a table:
CREATE TABLE customer (
ID SERIAL PRIMARY KEY,
first_name VARCHAR(30) NOT NULL,
last_name VARCHAR(30) NOT NULL,
email VARCHAR(60),
company VARCHAR(60),
street VARCHAR(60) NOT NULL,
city VARCHAR(30) NOT NULL,
state CHARACTER(2) NOT NULL,
zip VARCHAR(20) NOT NULL,
phone_number VARCHAR(20) NOT NULL,
birth_date DATE,
sex CHARACTER(1) NOT NULL,
date_entered TIMESTAMP
);
It is possible to modify a table after it has been created. To do so, use the ALTER TABLE command. You can add a column, modify a column, rename a column, or drop a column.
SQL Queries: Concepts and Commands
SQL queries are commands that are sent to a database to retrieve or change data. Databases contain many tables of data organized into rows and columns. To start creating a database that will track orders for a company, you should ensure that one table represents one real-world object or group. For example, customers, orders, sales items, and sales orders should all have their own separate tables. Columns then store one piece of information, such as a name, address, or state.
Here’s an overview of key SQL query concepts:
SELECT: Used to choose which columns to display in the result.
FROM: Specifies the table to retrieve the data from.
WHERE: Filters rows based on specified conditions.
It is possible to stack conditional statements using logical operators such as AND, OR, and NOT.
ORDER BY: Sorts the result set based on a column or columns.
The keyword DESC can be used to order the results from highest to lowest.
LIMIT: Restricts the number of rows returned in the result set.
AS: Assigns an alias to a column or table, which can be used to rename a column.
DISTINCT: Retrieves unique values from a column, eliminating duplicates.
GROUP BY: Groups rows with the same values in a column into a summary row.
HAVING: Filters the results of a GROUP BY query based on a condition.
Joins: Used to combine rows from two or more tables based on a related column.
INNER JOIN: Returns rows when there is a match in both tables.
The ON keyword defines the join condition.
OUTER JOIN: Returns all rows from one table, even if there are no matches in the other table.
LEFT OUTER JOIN: Returns all rows from the left table.
CROSS JOIN: Combines each row of one table with each row of another table.
UNION: Combines the result sets of two or more SELECT statements into a single result set.
Each SELECT statement must return the same number of columns with compatible data types.
Arithmetic Operators: Include addition, subtraction, multiplication, and division.
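To tie several of these clauses together, here is a hedged example against the sales tables sketched in this guide; the column names (item_id, quantity, sales_order_id, time_order_taken) are assumptions based on the schema described earlier:
-- Top five best-selling items in December 2018, highest quantity first
SELECT si.item_id, SUM(si.quantity) AS total_sold
FROM sales_item si
INNER JOIN sales_order so ON so.id = si.sales_order_id
WHERE so.time_order_taken >= '2018-12-01'
AND so.time_order_taken < '2019-01-01'
GROUP BY si.item_id
HAVING SUM(si.quantity) > 1
ORDER BY total_sold DESC
LIMIT 5;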
Here are a few other commands that can be used with queries:
INSERT INTO: Adds new rows to a table.
UPDATE: Modifies existing data in a table.
DELETE: Removes rows from a table.
TRUNCATE: Deletes all data inside of a table.
DROP: Removes a table altogether.
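A quick sketch of these commands against the product_type table used earlier; the values are made up for illustration:
INSERT INTO product_type (name) VALUES ('Athletic');
UPDATE product_type SET name = 'Casual' WHERE name = 'Athletic';
DELETE FROM product_type WHERE name = 'Casual';
-- TRUNCATE product_type;   -- would empty the whole table
-- DROP TABLE product_type; -- would remove the table itself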
PGSQL Functions, Triggers, Cursors, and Stored Procedures
PGSQL (PL/pgSQL) is heavily influenced by Oracle's PL/SQL and allows for looping, conditionals, functions, data types, and more.
Creating PGSQL Functions
The basic layout of a PGSQL function includes CREATE OR REPLACE FUNCTION, the function name, parameters with their types, the return type, AS, body tags, BEGIN with the statements, END, the end of the dollar tags, and definition of the language as PLPGSQL.
The basic syntax is:
CREATE OR REPLACE FUNCTION function_name(parameters)
RETURNS return_type AS
$$
BEGIN
statements;
END;
$$
LANGUAGE plpgsql;
To check if a salesperson has a state assigned and, if not, change it to Pennsylvania, use UPDATE sales_person SET state = 'Pennsylvania' WHERE state IS NULL.
Dollar quotes can be used instead of single quotes within SQL statements.
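Putting those pieces together, here is a sketch of a small function built around that UPDATE, using dollar quotes for the body; the function name is invented, and 'PA' is an assumption since the sales_person table described earlier stores two-character state codes:
CREATE OR REPLACE FUNCTION set_missing_state()
RETURNS void AS
$$
BEGIN
UPDATE sales_person SET state = 'PA' WHERE state IS NULL;
END;
$$
LANGUAGE plpgsql;
-- Run it
SELECT set_missing_state();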
Variables in Functions
Variables can be declared within a function using a DECLARE block before the BEGIN block. For example:
DECLARE
answer INTEGER;
To assign a value to a variable, use the := operator. For example:
answer := value1 + value2;
To return a value from a function, use the RETURN statement.
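For example, a minimal sketch (the function name and parameters are invented for illustration):
CREATE OR REPLACE FUNCTION add_numbers(value1 INTEGER, value2 INTEGER)
RETURNS INTEGER AS
$$
DECLARE
answer INTEGER;
BEGIN
answer := value1 + value2;
RETURN answer;
END;
$$
LANGUAGE plpgsql;
SELECT add_numbers(3, 4);  -- returns 7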
Returning Query Results
To return multiple rows from a function, use RETURNS SETOF followed by the table name.
To return a table, use RETURNS TABLE and define the columns and their data types. For example:
RETURNS TABLE (
name VARCHAR,
supplier VARCHAR,
price NUMERIC
)
To execute a query and return the results, use RETURN QUERY followed by the SELECT statement.
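As a sketch, here is a function that returns a table of products above a given price; it assumes the product and item tables described in this guide, with item.price stored as a numeric value:
CREATE OR REPLACE FUNCTION products_over(min_price NUMERIC)
RETURNS TABLE (
product_name VARCHAR,
product_supplier VARCHAR,
item_price NUMERIC
) AS
$$
BEGIN
RETURN QUERY
SELECT p.name, p.supplier, i.price
FROM product p
INNER JOIN item i ON i.product_id = p.id
WHERE i.price > min_price;
END;
$$
LANGUAGE plpgsql;
SELECT * FROM products_over(100);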
Conditional Statements
IF, ELSEIF, and ELSE statements can be used to execute different code based on conditions. Each IF block must be terminated with END IF.
CASE statements can also be used to execute different code depending on an exact value. Each CASE block must be terminated with END CASE.
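A small sketch of the IF form inside a function; the price thresholds and labels are invented purely for illustration (a CASE ... END CASE block works similarly when matching exact values):
CREATE OR REPLACE FUNCTION price_band(price NUMERIC)
RETURNS TEXT AS
$$
BEGIN
IF price >= 150 THEN
RETURN 'premium';
ELSIF price >= 75 THEN  -- ELSEIF is accepted as well
RETURN 'mid-range';
ELSE
RETURN 'budget';
END IF;
END;
$$
LANGUAGE plpgsql;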
Loops
Loops can be used to iterate over a set of statements. Each loop must be terminated with END LOOP.
FOR loops can be used to iterate over a range of values.
FOREACH loops can be used to iterate over the elements of an array.
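A tiny sketch of a FOR loop over a range inside an anonymous DO block:
DO
$$
BEGIN
FOR counter IN 1..3 LOOP
RAISE NOTICE 'counter is %', counter;  -- the loop variable is declared implicitly
END LOOP;
END;
$$;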
Arrays
Arrays can be created by specifying the data type followed by []. For example:
array_name INTEGER[];
Values can be assigned to an array using the ARRAY keyword. For example:
array_name := ARRAY[1, 2, 3];
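For example, a sketch of declaring an array, assigning values with ARRAY[...], and stepping through it with a FOREACH loop (the values are made up):
DO
$$
DECLARE
ids INTEGER[];
current_id INTEGER;
BEGIN
ids := ARRAY[2, 3, 5];
FOREACH current_id IN ARRAY ids LOOP
RAISE NOTICE 'id is %', current_id;
END LOOP;
END;
$$;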
Aggregate Functions
Aggregate functions such as SUM, COUNT, AVG, MIN, and MAX can be used to perform calculations on a set of values.
Stored Procedures
Stored procedures are similar to functions but cannot return values; however, INOUT parameters can be used as a workaround.
Stored procedures can execute transactions, which functions cannot.
Stored procedures are executed using the CALL command.
If a stored procedure does not have parameters, it is called a static procedure; otherwise, it is dynamic.
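A rough sketch of a procedure that uses an INOUT parameter and a transaction command, then is run with CALL; the procedure name is invented, and it reuses the product_type table from earlier:
CREATE OR REPLACE PROCEDURE add_product_type(type_name VARCHAR, INOUT new_id INTEGER)
AS
$$
BEGIN
INSERT INTO product_type (name) VALUES (type_name) RETURNING id INTO new_id;
COMMIT;  -- procedures may run transaction commands when called at the top level; plain functions may not
END;
$$
LANGUAGE plpgsql;
CALL add_product_type('Sandals', NULL);  -- the INOUT parameter reports the new id back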
Triggers
Triggers are used to automatically execute an action when a specific event occurs, such as INSERT, UPDATE, DELETE, or TRUNCATE.
Triggers can be associated with tables, foreign tables, or views.
Triggers can execute BEFORE, AFTER, or INSTEAD OF an event.
Row-level triggers are called for each row that is modified, while statement-level triggers execute once regardless of the number of rows.
To create a trigger function, use the CREATE OR REPLACE FUNCTION statement and specify RETURNS TRIGGER.
To bind a function to a trigger, use the CREATE TRIGGER statement and specify the event, table, and function to execute.
Conditional triggers can be created using the WHEN clause in the CREATE TRIGGER statement.
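Here is a sketch of that whole flow for the item table: a log table, a trigger function, and a conditional row-level trigger. The item_price_log table and the assumption that price is numeric are both invented for the example:
CREATE TABLE item_price_log (
id SERIAL PRIMARY KEY,
item_id INTEGER,
old_price NUMERIC,
new_price NUMERIC,
changed_on TIMESTAMP
);
CREATE OR REPLACE FUNCTION log_price_change()
RETURNS TRIGGER AS
$$
BEGIN
INSERT INTO item_price_log (item_id, old_price, new_price, changed_on)
VALUES (OLD.id, OLD.price, NEW.price, CURRENT_TIMESTAMP);
RETURN NEW;  -- ignored for AFTER triggers, but returning NEW is conventional
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER item_price_changed
AFTER UPDATE ON item
FOR EACH ROW
WHEN (OLD.price IS DISTINCT FROM NEW.price)  -- conditional trigger via the WHEN clause
EXECUTE FUNCTION log_price_change();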
Cursors
Cursors are used to step through rows of data and can be used to select, update, or delete rows.
To declare a cursor, use the DECLARE statement and specify the cursor name and the SELECT statement to be used.
To open a cursor, use the OPEN statement.
To fetch rows from a cursor, use the FETCH statement.
To close a cursor, use the CLOSE statement.
Cursors can be used with functions to return a list of customers in a provided state.
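As a sketch of that idea, here is a function that uses an explicit cursor to collect customer names for a given state; it assumes the customer table defined earlier in this guide:
CREATE OR REPLACE FUNCTION customers_in_state(target_state CHARACTER(2))
RETURNS TEXT AS
$$
DECLARE
customer_cursor CURSOR (st CHARACTER(2)) FOR
SELECT first_name, last_name FROM customer WHERE state = st;
rec RECORD;
result TEXT := '';
BEGIN
OPEN customer_cursor(target_state);
LOOP
FETCH customer_cursor INTO rec;
EXIT WHEN NOT FOUND;
result := result || rec.first_name || ' ' || rec.last_name || E'\n';
END LOOP;
CLOSE customer_cursor;
RETURN result;
END;
$$
LANGUAGE plpgsql;
SELECT customers_in_state('PA');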
Sales Database Design and Implementation
To create a sales database, you can create tables representing real-world objects and define how these tables relate to each other to reduce data redundancy. You can use a tool such as PG admin to create and manage the database.
Key tables for a sales database may include:
Customer: Information about customers, such as first name, last name, email, company, street address, city, state, zip code, phone number, birth date, sex, and date entered. The table should include a unique identification number for each customer.
Salesperson: Information about sales people, such as first name, last name, email, company, street address, city, state, zip code, phone number, and date hired.
Product Type: Categories of products, such as business, casual, or athletic.
Product: Generic information about a product, such as the supplier, shoe name, and description. This table can connect to the Product Type table.
Item: Specific information about a product, such as size, color, picture, and price. This table can connect to the Product table.
Sales Order: Information about a sale, such as the customer, salesperson, time order taken, purchase order number, credit card information, and name on the card.
Sales Item: Information about the specific items in a sale, such as the item, quantity, discount, and tax rate. This table can connect to both the Item and Sales Order tables.
Transaction Type: Information on whether a sale was cash, credit, debit, etc.
Past Due: Information on customer debt and payment history.
When creating these tables, it is important to select appropriate data types for each column. It is also important to use primary and foreign keys to define relationships between tables. You can use SQL queries to add, modify, and retrieve data from the tables.
Views can be created to simplify complex queries and provide a customized view of the data. Functions and stored procedures can be used to encapsulate reusable logic and automate tasks. Triggers can be used to automatically execute actions in response to certain events, such as data modification. Cursors can be used to iterate through rows of data and perform operations on each row.
PostgreSQL Tutorial Full Course 2022
The Original Text
well hello Internet and welcome to my full course on postgres in this one video You’re basically going to get a 1 000 page book crammed into one video in the description underneath the video there is a table of contents where you can jump around and learn everything you want there’s loads of examples in this video and also there’s a link to a transcript as well as all the code used here and I have a lot to do so let’s get into it all right so the installation is actually covered at the end of the video so you can jump to the table of contents if you want to see that first but the big question is why use postgres well postgres is an object relational database that is just as fast as MySQL however it adheres more closely to SQL standards and it excels at concurrency postgres is also Superior at avoiding data corruption and postgres also on top of that provides more advanced data types and allows for the creation of even custom data types operators index types and so forth and postgres is normally the best option when extensibility scalability and data Integrity are the most important to you now just as a general overview a database is data that is structured into rows and columns kind of like a spreadsheet and to receive or change data in a data base you send commands that are called queries and the database in turn returns a result based on that request databases contain many tables of data organized into rows and columns like I said and each column is going to represent one type of data that the database stores each row is going to then contain multiple pieces of data specific to each entity you are describing so for example we store information here as you can see in this slide in regards to students and each individual value is going to be stored in what is called a cell and then you’re going to have primary keys and they are used to define unique entities in your table as you can see here the one or the column labeled ID provides a unique value associated with each student now we are going to be working with our database using PG admin and everything is exactly the same on Windows Mac and Linux there’s actually only one difference between the windows and Linux versions and the Mac versions and that is that the Linux and Mac versions are going to have an extra database called users that’s it everything else is the same so whenever you first open up PG admin you’re going to type in some type of password and my password is turtledove never use that because I use that in all my tutorials okay and you just log in here then you’re going to go over to servers now the very first thing we’re going to want to do here is we’re going to want to create a new database so just go into databases actually before I do that I’m going to show you how to change the theme so you’re just going to go into actually let’s go into runtime and I’m going to zoom in and increase this the fonts everywhere so we can just go like this and zoom in here or you can just hold down your control and zoom in like that also so it’s a little bit bigger and easier to read and I know that everybody prefer a dark theme so you just click on file preferences go down to themes and you can click on dark and save and refresh and reload and there you go now you have that and then if we decide well they went and changed let’s go and zoom in here so you can see this better okay so we’re going to go over to servers and we’re going to go to databases and we’re going to right click on this we’re going to say create database and we’re going 
to create a new one and let’s call this sales CB2 and owner postgres you don’t need to do anything else here with this and you can just click on save all right and that is going to give you a brand new database that we’re going to be able to work with and down inside of schemas over here you’re going to see all the functions we’re going to be creating we’re going to be creating tons of functions and tables and all that but we haven’t created anything yet now if you want to go and open up the query tool which of course you’re going to want to you’re going to want to go to your database and right click on it and come down here to Quarry tool and click on that and there it opens and this is where you’re going to put your queries in and this is of course where you’re going to get the output from your queries now of course whenever you are going to start creating a database which we’re going to create a really large database that’s going to track orders for a company you want to think about things like and you want to make sure that one table is going to represent one real world object or one real world group so for example customers orders sales items sales orders all those different things are all going to have their own separate tables columns are then going to store one piece of information like a name an address a state then you have to start thinking to yourself how do different tables relate well of course if we have a sales order we’re going to need to relate our customer table over to our sales order table and maybe a sales person table over also to that sales order table and this will make more sense as we create our databases and the real goal here in designing a database is to reduce the use of redundant data now one way to go and create a database is to use a real world example so on the left side of the screen here you can see an invoice which is going to represent our shoes store which is what we’re going to base our entire database on and you can look at it and you can see what things do we have here that we would like to put inside of our table well let’s first off look at customer and that would be under our bill to so we have such things as name so we have first name last name and we have email so we have that inside of there company Street you can see we’re just basically placing in all the individual different things that we would like to track and then once again where you see ID serial primary key this is going to represent the unique identification number for said customer and I’m just going to jump over now and create this and then I’ll explain what all of these different data types mean like variable character this means this is going to be a string of characters that is going to be up to 30 characters in length and so forth and so on but we’ll get into that as we build our database so we’re just going to go in here to our core tool and I’m going to say create and let’s zoom in here even more so make sure you can see this okay so I’m going to say create table and we’ll start off just by creating our customer table and then we have to just go and place in all the things we want so we want to track the first name variable number of characters remember this is just a string like in other languages not null means that if they decide they want to create a new customer we are not going to allow them to leave this piece of data empty they have to give us a first name they also have to give us a last name so we’re going to say variable number of characters and then we’re going 
to say 30 again not in all and I’ll get into the more specifics of the data type after I create this we also said we want to email so and you just want to estimate the maximum length of an email that you may get so I’m going to say it’s 60 just to put something inside of here company and again I’m going to have this also be a string make sure you spell everything right otherwise that’ll cause all kinds of problems so we’ll say variable number of characters and let’s make this 60 as well and then we have our street and again very whole number of characters you’re going to use variable number of characters a lot whenever you’re working with databases and not and all and then we also want our city and we’re basically just copying everything that we had from our little order sheet so it’s very very simple and do not put commas like that yeah get rid of that got ahead of myself a little bit not null then put a comma and get rid of this one also okay and the we’re gonna basically make everything a required field just to keep everything very simple and state I’m going to this is based in the United States so this is what we’ll be working with this time I’m going to use a character because I only am ever going to have two characters for my state and then we can have a zip code and there are very specific data types used by postgres that you will get well acquainted with and I’m going to use basically the maximum size that I need no more than that phone number this is going to be a variable number of characters and 20 and again not null that means required and then I’m going to have maybe you want to offer a special to the customer on a birth date well there is a date data type also and let’s go and just put null inside of there for that value because maybe we don’t you know get that I’m going to have our sex of our character which is going to be one individual character and I’m going to say that that’s going to be not null and then what else do we want well date entered maybe we want to go and get the time stamp for whenever this customer became a customer so we would know how long they’ve been with us and maybe an anniversary date or something along those lines and then you’re going to have cereal and we’re gonna have this marked as a primary key a unique identification number and serial just briefly is going to be serial like that is going to be an auto incrementing number so an integer that’s Auto incrementing incrementing and that just means that every time you add a new customer it’s automatically going to handle that for you and after we go and create our table we just come up here and click on execute and if it says create a table everything looks good here then you can come down inside of here but you’re not gonna well sometimes you see it sometimes you don’t so let’s move this over here it does have our customer table inside of there if it ever doesn’t you’ll ever look for something and it’s not there just right click here and click on refresh and then you will see it but you can see there is our customer table and we can right click on it and click on properties and it’s going to show us some information about it and you can direct directly change the table inside of here so here is the column so if you decide you want to change any of these or get rid of that null ability or make something a primary key or a default value you can do that directly inside of this tool which is very useful and we’ll get more into all of these other specific things here as we continue but for now we’re not 
going to do anything and we’re going to say yes we’re happy with our table we do not want to update anything now what I want to do is go over a lot of these different data types because they can be slightly confusing all right so first off you’re going to have your character types and up here you can see that what I’m basically saying is I want to store a maximum number of five characters you can also just create a variable number of characters this is a data type and this is going to store any length of characters you can also like I did previously to find what I consider to be my maximum number of characters and then you’re going to also have the text data type which is also going to store any length of characters in regards to the numeric types there are many you have serial and these are basically whole numbers that auto increment like I said before every time you add a new customer it automatically if you have one customer you have another one now all of a sudden the ID if it’s marked serial is automatically going to become 2 and 3 and 4 and always you’re always going to use these for your identifications with your primary keys and there are different types of Serial mainly you’re going to use just the regular serial data type but there’s also a small cereal and you can see the ranges of values this is the minimum this is the maximum and these are unsigned integers other data types you also have your integer data types and these are whole numbers only so you’re not going to have any fractions of these but these are signed and you can see the minimums as well as the maximums for all of these different data types as well then you have floats these are numbers with decimals here four data types you’re going to have decimals and you can see here how many digits you are going to be able to hold inside of them and then how many values after the decimal and this is the data type decimal then you’re going to have another one which is numeric and you can get real values you can have double precision and you can see here the number of places of precision after the decimal place for all of those and then you could also just simply use float which is exactly the same as our double up here you’re also going to have Boolean data types they’re either going to have true false or null values sort of because true can also be represented with one a t a y a the word yes or on and false can also be represented with zero f and no and off but I highly recommend that you use true and false because that will save you a lot of headaches in the future and of course Boolean types can also have no value or null you’re going to have date time data types and here we have date and just so you understand and no matter how you enter your date it is automatically going to be translated into the year first then the month and then the day so this is the way it’s going to be stored so it’s going to be stored maybe differently than you previously entered it then you’re going to have times and if you would come in here and Define a time just like this with pm and then you have your two colons here time without time zone this is going to be how it’s stored you’re also going to be able to store in multiple different time zones so this would be UTC format you can see how I am going and changing all of these and how you can put your different time zones inside of here so Eastern Standard Time Pacific Time universal time and so forth and so on all right and that was how we’ll lay those out but it’s better to see these in the 
real world than to worry too much about them we also have time stamps they’re going to have date information as well as time information and you can see exactly how those are going to be stored and also we’re going to have intervals and they are going to represent a duration of time so you can have one day like this and you can also come in here one day one hour one minute and one second and you can see exactly how they would be stored inside of our database and what’s cool about intervals is that you can actually add and subtract intervals from different date times to play around with those Concepts there are other different data types you’re also going to have currency binary Json range geometric arrays XML uuids as well and on top of that you can even make custom data types all right so now let’s come in here and actually insert some information into our customer table how you do that is you go insert into and you list your table that you want to insert data into so I’m going to say customer and then you can come in and say first name and just list out all of your different columns and I went and did that for you to save a little bit of time you can also just go down to the next line that’s perfectly fine but if you look at this you’re going to notice that there is one column that is missing and that is ID why is that well ID is auto incrementing so that means this first customer that we enter is going to automatically be assigned the ID of one and then the next one gets the next one then you’re going to say values and inside of here what we can do is go and list all of the information on our customer so let’s say that his name is Christopher and his last name is Jones and all of the additional information on Mr Christopher Jones let’s go down to the next line well you can just keep going like this that’s perfectly fine okay so we have all of our information you may notice here however I have current timestamp well anytime you want to get the current timestamp like right at the moment that you create Christopher Jones you just type in current time stamp and it goes and gets that for you and then you’re going to end all all of your quarries with a semicolon and then of course you can go and run it and you can see that it was inserted over here now what we can do is we can come over to our customer and right click on this and then we can go View and edit data and we’ll say all rows and it will automatically come in here create the query it actually creates the query for us and it is going to display that query information down here in our table area alright so cool stuff and we’ll get more into select believe me but this is basically anytime you want to go and get information from a table you say select the star represents anything customer represents the customer table and then order by says that we want to order by whatever the ID is so the unique identification and this is going to be in ascending order you could also do descending order but that doesn’t really matter in this circumstance because we have one customer at this moment all right so we’re done with this we can just come over here you can see the little X and close that out and now we’re back inside of here now I said we can create custom data types so let’s go and create some how you do that is you say create and type and I’m going to say that I want sex type and I want it to be set so that it’s either M or F and I’m going to create it as an enumerated type then that just means it has a set number of values and I’m going to 
say that it either needs to be M or F that is going to be tied into our database there it is and we can go and run this and you can see create type is down here successful and if you want to go and find it you just come over here to types and Sex Type shows directly inside of there and we can come over here and go properties and definition and you can see the different types that are allowed you can also go and let’s say you decide to add some more you can do that all that you want all right let’s close that well now that you went and created your new sex type and you want to assign it to our customer table how do you do that so you got our customer over here and we have our columns so let’s open up customer just to look at it just to make sure all right so we have all of our columns but we have our sex type down here and it is listed as a single character we do not want it to be a single character and we can go inside of here and change to whatever different type that you would want but let’s just go and use queries instead because I prefer use inquiries it’s fine get rid of that okay so how we go and alter that table is we say alter table and customer the table that we want to alter and then you’re going to say alter column and we’re going to say sex which is the name of our column down here and then we’re going to say type and let me make this uppercase doesn’t matter if they’re uppercase or not and I’m going to say that I want it to be Sex Type now and then I’m going to say specifically using sex colon colon Sex Type oops and make sure you style everything perfectly all right so got that set and run it Corey return successful okay good I’m then also going to want to track our sales people because it’s going to be important to be able to you know know who sold everything and basically what I’m going to do is I’m going to use almost exactly the same Fields except I’m going to use date hired instead of date entered here and I’m going to use a create table just like I did before so just in the interest of time here is our create table for our sales person you can see again first name variable number of characters not null you can see here is something different default if you ever want a default value placed inside of here if none are provided you just say default and whatever you want placed in there by default you can also see everything else is exactly the same so there’s really nothing to explain here and another thing you can do is sometimes whenever you’re issuing queries you might have multiple queries on your screen at one time time and you only want to execute that one query you can just highlight it and hit this right here and you can see the table was created and let’s come in here and is it showing no it is not so let’s come up here and let’s go to this guy right here and refresh and now you’re going to see here is our customer and then let’s get rid of all that and here is our sales person and all of their information specific to them now thinking about tables once again now what we’re doing is basically looking at a description of a product which will be a shoe in this specific situation and by looking at this information about the individual products we Define is this specific shoe going to be a business shoe or is it going to be a casual zoo or an athletic shoe and whenever we see things like that and that is normally a good idea that we should separate this Lodge the business athletic or casual out into a separate table we can also see here information like our brand and 
the individual shoe name so a brand is Allen Edmonds the individual shoe name is Grand View we see information like we have size we see that we have a specific color a price potentially a discount a potential weird tax rate and then we’re also going to want to chart the quantity of those numbers of shoes now whenever we go in here and we decide that we want to pull this part out we would of course have to create another table and in this situation I’m going to call this product type and it’s just simply going to have a variable number of characters which is going to have business casual or athletic and then it’s going to have an ID assigned to it so let’s go over and create that table let’s just come back inside of here and let’s create a table and let’s leave this here and let’s change this to prod type so product type like this and then what are we going to have inside of it well we said we’re going to have a name and it’s either going to be business athletic or casual and variable number of characters 30 looks like that’ll work not null and then let’s come down here get rid of all that stuff it’s also going to have a primary key and guess what pretty much everything was done for us let’s run that and we have another table that was created and of course we’re going to want to come up here and refresh this just to verify that everything is as we expect so there’s refresh looking over a microphone that’s why I’m moving around a little bit weird and uh this is down in schemas again let’s open up schemas and tables and now you can see here is our product type and if we go into properties you can see columns and there is our information so that’s good stuff we don’t need to do that because we didn’t change anything all right good all right so now we have our product type but we actually want our product as well well how are we going to Define exactly what type of we want we’re going to look at our invoice again so if we look at our information specific to our products we can see here we have Brooks which is going to represent the supplier then we have glycerin which is going to represent the shoe specific name or glycerin 17 in this circumstance we’re going to have quantity so we’re going to want to put a quantity inside of that as well however I am probably also going to separate out the quantity and the the things that make the shoe specific so this is very generic we’re going to have a type ID which is going to be athletic business or casual we’re going to have a name which is going to be glycerin 17. 
All right, so now we have our product type, but we want the product itself as well. How are we going to define exactly what kind of product we want? Look at the invoice again. In the information specific to our products we have Brooks, which represents the supplier, and Glycerin, which represents the specific shoe name (Glycerin 17 in this case), and we have a quantity we will want to record. However, I am going to separate the quantity and the things that make a shoe specific from this table, so the product stays generic: it will have a type ID, which will be athletic, business, or casual, a name, which will be Glycerin 17, a supplier, which will be Brooks, and then a description of the shoe. Those are the kinds of things to keep in mind whenever you create tables: you want your tables to model things that will not change, so be as specific as possible, but not too specific. The product table will hold the description and the name, and then I will create another table with things like color, size, and price, which are more specific to an individual version of that shoe; we will create those as well. Before we move on, I notice there is something here I have not covered: this is a reference to the ID down in the product type table. Whenever we use IDs from other tables, they are primary keys in their own table, but up here they are referred to as foreign keys, and a foreign key is used to identify one of a group of possible rows in another table. So if we create a product table and want to store a reference to a row in the product type table, we reference it using a foreign key. When you create a foreign key it gets the INTEGER type, not SERIAL; we cannot use SERIAL because Postgres automatically assigns values for SERIAL columns, which is not what we want, so make sure it is marked as INTEGER REFERENCES product type (id) if you are referencing the ID in the product type table. Now let's actually create the product table, and here you can see it: product is the name of the table; type ID is the foreign key referencing the product type table's ID; it has a name, a supplier, and a description, and you can see we are using the TEXT data type for the description; and once again the ID column always uses SERIAL. We can run that and create it as well.
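A minimal sketch of the product table as described, with the foreign key written out the way the narration explains it (INTEGER REFERENCES, never SERIAL); the snake_case names are assumptions.

    CREATE TABLE product (
      type_id     INTEGER REFERENCES product_type(id),  -- foreign key to product_type
      name        VARCHAR(30) NOT NULL,
      supplier    VARCHAR(30) NOT NULL,
      description TEXT        NOT NULL,
      id          SERIAL PRIMARY KEY
    );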
What this table describes is what we might call the qualities of an item. If I listed a quantity here, it would make it hard to look at this as a single item; quantity should be kept in a completely different table, which we will of course also create. Anything that gets in the way of modeling an individual object should almost always be put into a separate table. So let's create that next: the item-specific information goes into a table I am going to call item, and it has a product ID column, INTEGER REFERENCES product (id), which is how we tie the product table to the item table. What specific things go inside of it? A primary key, of course, and then, referencing our order sheet, a size, a price, maybe a picture, while I separate the discounts and taxes out of here. Let me just paste this in so you do not have to watch me type it: a foreign key that references the product table, a size that is an integer, NOT NULL on every column here, a color, a picture, a price, and of course an ID. Run this and we have created that table, and we can verify it by right-clicking the schema, refreshing, and looking under Tables, where the item table now appears. That brings us to our next table: we are also going to want a sales order table. Looking at the information we have, there is customer information, but we already have that stored in a separate table, which is good, so we just need a foreign key referencing the customer table. We also want information on the sales person who actually sold it, so a reference to the sales person table we created. We may also want a timestamp for the time the order was taken (time and date), there is a purchase order number, and we will have credit card information: the card number, the expiration month, the expiration day, maybe a security code, and also the name on the card, which is potentially different; and then of course an ID so we can reference each sales order. Once again we create the table: it is called sales order, and it has a customer ID foreign key, a salesperson ID foreign key, the time the order was taken, the purchase order number, the credit card number, the card's expiration month and day, the name on the card, and so forth, and you can see I am using the smallest data type that works for each of these. Run that and you can see that table was created as well. Now there is one thing missing, and that is our potential discounts and tax rates, as well as the quantity and the price for the items sold. So of course we create another table, called sales item: it has a foreign key referencing the item, a foreign key referencing the sales order ID, and then it just lists the other information we need about that specific transaction. Run it and that table is created too. Now it is very important to understand how all of these tables are interrelated, and what you see here is how the different foreign keys will allow us to merge our data. Once we start issuing queries it will become really clear how to use these keys, but basically: the product type is linked to the product (that ID goes right here), the product is linked to the item, which is a more specific version of a product, and both the item and the sales order are linked to the sales item table. There will be many different foreign keys linking tables, but I think in general this is enough to cover at the moment.
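Sketches of the three tables just described. The column names follow the naming used later in the transcript (cust_id, sales_person_id, purchase_order_number), but the specific types, lengths, and defaults are illustrative guesses, not the author's exact definitions; note that each id column is deliberately last, which is what lets the later inserts omit it.

    CREATE TABLE item (
      product_id INTEGER REFERENCES product(id),
      size       INTEGER      NOT NULL,
      color      VARCHAR(20)  NOT NULL,
      picture    VARCHAR(200) NOT NULL,   -- will hold 'Coming Soon' for now
      price      NUMERIC(6,2) NOT NULL,
      id         SERIAL PRIMARY KEY
    );

    CREATE TABLE sales_order (
      cust_id                 INTEGER REFERENCES customer(id),
      sales_person_id         INTEGER REFERENCES sales_person(id),
      time_order_taken        TIMESTAMP   NOT NULL,
      purchase_order_number   INTEGER     NOT NULL,   -- widened to BIGINT later
      credit_card_number      VARCHAR(16) NOT NULL,
      credit_card_exp_month   SMALLINT    NOT NULL,
      credit_card_exp_day     SMALLINT    NOT NULL,
      credit_card_secret_code SMALLINT    NOT NULL,
      name_on_card            VARCHAR(60) NOT NULL,
      id                      SERIAL PRIMARY KEY
    );

    CREATE TABLE sales_item (
      item_id        INTEGER REFERENCES item(id),
      sales_order_id INTEGER REFERENCES sales_order(id),
      quantity       INTEGER      NOT NULL,
      discount       NUMERIC(3,2) NOT NULL DEFAULT 0,
      sales_tax_rate NUMERIC(5,2) NOT NULL DEFAULT 0,
      id             SERIAL PRIMARY KEY
    );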
Now I am going to show you a whole bunch of queries for doing all kinds of things. Let's say you want to add a new column: you say ALTER and the table you want to add the column to, which is the sales item table in this case, then ADD a new column called day of week with a variable number of characters, let's say 8, because that makes sense. That will add a day-of-week column to our sales item table, so run it, and we were able to alter the table. This touched sales item, so refresh the browser, open the sales item table, look at Properties and Columns, and day of week is in there with the information exactly as we entered it. Close that. Now say you would like to modify a column or change it in some way; you use ALTER TABLE again (let's stick with sales item, that makes sense) and clear out the rest. We want to set the column to NOT NULL, for example, so reference the table, then say ALTER COLUMN day of week again and SET it to NOT NULL (all of this is on GitHub); run it and you can see it is successful. Then say we decide we want to change the name of a column. How do we do that? ALTER TABLE sales item again, except this time we say RENAME COLUMN day of week: maybe somebody does not like that name and wants something shorter that makes more sense, so we rename it to weekday. Run it and you can see we did that as well. You may also want to drop a column altogether, wondering why you even added it: ALTER TABLE sales item again, changed to DROP COLUMN, and the column is now called weekday, remember, so drop that; run it and it is gone. If you go into the sales item table, Properties, Columns, weekday does not even exist anymore. That is how we do all of that. Okay, now say we want to add another table, which is going to represent our transaction type: CREATE TABLE transaction type, and then inside it a name, a variable number of characters, 30, NOT NULL, and a payment column, also 30 characters and NOT NULL, and then the one thing I have not typed yet, a SERIAL column that is our primary key. There we are; we just created a brand new table (I am in the habit of highlighting statements before running them, but with only one query on screen I do not need to). Now let's say we would like to rename that table: guess what, it is ALTER TABLE again, the table is called transaction type, and we want to RENAME it TO simply transaction. Run that and it is successful.
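The column and table changes narrated above, as a sketch (names assumed to be day_of_week, weekday, and transaction_type; the exact files are on GitHub):

    ALTER TABLE sales_item ADD COLUMN day_of_week VARCHAR(8);
    ALTER TABLE sales_item ALTER COLUMN day_of_week SET NOT NULL;
    ALTER TABLE sales_item RENAME COLUMN day_of_week TO weekday;
    ALTER TABLE sales_item DROP COLUMN weekday;

    CREATE TABLE transaction_type (
      name    VARCHAR(30) NOT NULL,
      payment VARCHAR(30) NOT NULL,
      id      SERIAL PRIMARY KEY
    );

    ALTER TABLE transaction_type RENAME TO transaction;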
Next, say we would like to create an index based on a single column: we say CREATE INDEX transaction id ON transaction (the name of our table), and we want a unique index on our name column. Run that; it was also successful. We can also create an index based on multiple columns: CREATE INDEX again, call it transaction id 2 just to put something in there, ON transaction, but this time based not only on the name but also on the payment type, both of which are columns in this table. Run that and we have created another one. Another thing we might want to do is delete the data in a table; to do that we say TRUNCATE TABLE transaction, run it, and the table has been truncated. That just deletes all of the data inside the table; if you want to get rid of the table altogether, you come in and say DROP TABLE transaction, run that, and now it does not exist, and if you refresh the browser you can see that the table named transaction is gone. Now let's add in some more data, because then we can really start doing some cool query work. For our product type table, remember, I wanted the product types to be business, casual, or athletic. The way I can enter those is to say INSERT INTO, then whatever the table is called, product type, then the columns I am inserting into, which is just name and nothing else, then VALUES, and the value for business. Then I do the same for the other types (remember, the ID is put in automatically): one row for casual, keeping the capitalization consistent, and one for athletic. Run those, and then we can check that information by saying SELECT * (the star represents every column) FROM the table called product type; highlight just that and run it, and it gives us the rows we just entered: business was entered first, so it gets the ID 1, and then we have casual and athletic on top of that.
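Sketches of the index, truncate, drop, and insert statements from this stretch. The narration mentions a unique index but does not show the keyword, so add UNIQUE after CREATE if that is what you want; the capitalization of the three type names is also my guess.

    CREATE INDEX transaction_id   ON transaction (name);
    CREATE INDEX transaction_id_2 ON transaction (name, payment);

    TRUNCATE TABLE transaction;   -- removes all rows, keeps the table
    DROP TABLE transaction;       -- removes the table itself

    INSERT INTO product_type (name) VALUES ('Business');
    INSERT INTO product_type (name) VALUES ('Casual');
    INSERT INTO product_type (name) VALUES ('Athletic');

    SELECT * FROM product_type;   -- business should come back with id 1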
Another interesting thing to know is that you can insert multiple rows without listing the column names, as long as you put the values in the same order as the table's columns. Let me demonstrate. Now I want to enter some of our product information, which, if you do not remember, is the supplier, the individual shoe name, and a description for the shoe. We can say INSERT INTO product (the table name) followed directly by VALUES. Remember, you have to put the values in exactly the same order as the columns, so look at the product table's columns: the product type comes first, and for this shoe that is business, so the first value is 1, the ID for business over in product type; then the specifics: Grand View is the specific shoe name, so that is the name; then the supplier, which is Allen Edmonds; and then a really long description for said shoe. As long as you have the type ID (which we do), the name, the supplier, and the description, and remember we never put in the ID ourselves, we do not need anything else. I will just paste in a whole bunch of these, and there you can see all of the different shoes we are going to be selling; remember there is always a semicolon at the end. As long as everything is in the right order, running it successfully enters all of that information, and we can verify it by saying SELECT everything FROM product: run that and you see a listing of all of our shoe names as well as the suppliers and descriptions, and it auto-created the IDs for all of those rows. Now I would like to enter some customer information, but maybe somebody was looking at your database table and said they were not sure you had the right data type for the customer zip code. Let's look: Properties, Columns, and there it is, a SMALLINT. We think that person may be right, we go back and check our notes, and we find out that yes indeed, SMALLINT only has a maximum value of 32,767. If you know anything about U.S. zip codes, that is too small, so we are going to have to change it from SMALLINT to INTEGER (yes, INTEGER is overkill, but it works). We already know how to alter tables, so for practice, see if you can alter that column based on what I taught you previously; if not, I will show you, and there is a lot to learn here, so do not worry if you did not get it. We just say ALTER TABLE, and what table are we altering? The customer table. Then what do we want to alter? A column. Which column? The zip column. And what do we want to do? Change its TYPE to an INTEGER. Run it, it is successful, and if we come into customer, Properties, Columns, zip, you can see it is now an integer, not a smallint. Now we need to enter all of our customer information, and I am not going to waste your time by having you watch me type it all in, I am just going to paste it. So what do we have? All of our customer data: first names, last names, emails, the company they work for, the address, the city, state, zip, phone number, birth date, sex, and date entered, plus the additional values, and instead of CURRENT_TIMESTAMP I put a real date in there; also remember the semicolon at the end. Enter all of that, run it, and it is successful, and we can verify by saying SELECT everything FROM customer: run it and there is all of our customer data, looking really good.
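The two patterns described here, sketched out. Only the Grand View row is spelled out in the narration; the second row and both descriptions are placeholders, not real catalog data.

    -- Values given in column order (type_id, name, supplier, description); the trailing
    -- id column is omitted, so it falls back to its SERIAL default.
    INSERT INTO product VALUES
      (1, 'Grand View', 'Allen Edmonds', 'Placeholder description of the shoe'),
      (1, 'Placeholder Shoe', 'Placeholder Supplier', 'Placeholder description');

    -- SMALLINT tops out at 32,767, which cannot hold five-digit U.S. zip codes.
    ALTER TABLE customer ALTER COLUMN zip TYPE INTEGER;

    SELECT * FROM product;
    SELECT * FROM customer;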
Now we need to insert our sales people, but before we do, what if we made the same zip code mistake with them? Find salesperson in the browser, go to Properties, Columns, zip, and yes, we used a SMALLINT again; we were being careless and did not realize what we were doing, so let's fix it. Now that I have shown it to you multiple times, see if you can change the zip from a SMALLINT to an INTEGER yourself; otherwise, here it is. ALTER what? We are altering a table. What is the table's name? Sales person. What are we doing to it? Altering a column. And the column's name? Zip. What specifically are we altering? The type. And what do we want it to be? An INTEGER. Run that and it also worked. Now that all of that is set, we can enter our sales people data, so let's grab it; we do not have that many sales people, so paste them in, and again it has all of the relevant information. Remember, you do not need the column names if you put the values in order, but I included them here just to be descriptive. We insert all of it and can verify by saying SELECT everything FROM sales person; highlight just that statement so we execute only that part, run it, and there is all of our sales people information. We are getting a lot of data in here that we will then run a bazillion queries on, but next I would like to insert some information into our item table. Where is the item table? Here it is; look at its columns: we have a product ID, a size, a color, and a picture, which is a character column; I am just going to put the text 'Coming Soon' in it, though normally you would put a URL pointing to the location of the image; then a price and an ID. Let's paste in that whole bunch of rows: it is INSERT INTO with VALUES, and this time I am not putting in the column names, and if you look at item's columns you can see the product ID, the size, the color, 'Coming Soon' where we will eventually have our pictures, and the individual price. Make sure you put a semicolon at the end and run that as well; there it is, and to verify it is all in there we just say SELECT everything FROM item, run it, and now we can see all of our item information. So what else do we need to do? We need to put some information into our sales order table, but another thing that may have been brought to our attention is the purchase order number in sales order: somebody says they are not sure it is big enough. Well, let's look: go into sales order, Properties, Columns, and the purchase order number is set as an INTEGER. Okay, how big is an integer? Close that and look at the integer's range: it tops out around 2.1 billion. That allows for a lot of different orders, but we are planning on selling more shoes than that, so yes, before we enter any data it is probably better to change it to a BIGINT. And why not do it a different way this time: go to sales order, Properties, Columns, it was purchase order number, change it to BIGINT right in the dialog, and click Save, and pgAdmin will automatically convert that column to big integers for us instead.
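The same zip fix for the sales person table, plus the query equivalent of the purchase order change that is done through the Properties dialog here; both are sketches with assumed snake_case names.

    ALTER TABLE sales_person ALTER COLUMN zip TYPE INTEGER;

    -- Same effect as editing the column in pgAdmin: INTEGER maxes out at
    -- 2,147,483,647, so switch to BIGINT for bigger purchase order numbers.
    ALTER TABLE sales_order ALTER COLUMN purchase_order_number TYPE BIGINT;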
So now we want to insert all of our sales order data. Here is the sales order table and its columns, and here is the data we are going to put inside of it. Again we are not listing the column names because everything is in order: our customer ID, the individual sales person, the time the order was taken, the purchase order number, the credit card number (these are not real credit cards), the name on the card, and all of the other columns; I randomly generated all of this information and plugged it in. Run that and you can see we placed it all into sales order; then clear the editor, say SELECT everything FROM sales order, run it, and here is all of our sales order information. Now, we have a lot of our tables populated, but we do not have anything in sales item, do we? SELECT everything FROM sales item runs successfully, but zero rows come back; there is nothing. So we have to put in some information for our sales items. What type of information do they hold? Clear this out, go to sales item in the browser, and here are the columns: an item ID, a sales order ID, and then specifics like the quantity, the discount, the sales tax rate, and of course an ID as well. Let's plug all of that information in here too, paste it in, and run it; again that was successful. Clear all of this, then SELECT everything FROM sales item, run it, and you can see all of that additional data. Good stuff: we have populated all of our tables with data, and now it is time to start doing some really interesting things. What I would like to do now is start pulling that data out in useful and interesting ways using a never-ending plethora of different query commands. We will start off by just covering SELECT (yes, I know you saw it before), FROM, WHERE, ORDER BY, and LIMIT, and work up from there. Of course, let's do another SELECT first: this is how we select everything in our sales item table, and there you can see all of the different pieces of data inside of it. Now, WHERE is used to define which rows are included in a result based on different conditions; for example, say you wanted to show all sales with a discount greater than 15 percent. But first, did I tell you how to use comments in pgAdmin? I do not think I did: this is how you do a comment, just two dashes, and you can also write multi-line comments by wrapping them between /* and */. With that covered, you are going to need some different conditional operators, so here they are: the equal sign, less than, greater than, less than or equal to, greater than or equal to, not equal to (!=), and you can also use the more traditional not-equal operator, <>, as well.
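The two comment styles are standard SQL, so they can be taken at face value:

    -- A single-line comment starts with two dashes.
    /* A multi-line comment
       is wrapped in slash-star and star-slash. */
    SELECT * FROM sales_item;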
So let's do exactly what I said: we want to find the sales items where the discount is greater than 0.15, or 15 percent. You could write SELECT everything FROM (and, as you can tell, it does not matter how you lay your code out, it all works) our sales item table WHERE discount is greater than 15 percent, and run it (I just hit F5 instead of clicking the run button; I mentioned that at the beginning of the video and will not keep pointing it out). You can see the different results on the screen, and the discount column, if you can read it, shows 0.16, 0.19, 0.18, all fitting the condition. A very important first step. Then you also have what are called logical operators, which allow you to stack your different conditional statements, and they are very easy to remember: AND, OR, and NOT. In this situation, say we would like to find the order dates for all orders placed in December of 2018. We can do that: SELECT, and rather than everything, let's just get the time the sales order was taken, so time order taken, FROM sales order (keeping the clauses on separate lines, since this is a little more complicated), and then let's stack some conditions: WHERE time order taken is greater than '2018-12-01' AND time order taken is less than '2018-12-31'. That gives us everything in December 2018; run it and you can see that two of those orders took place. Of course we can pull additional information too: look at the sales order columns, add the customer ID to the select list, and now you get the customer ID as well, so you can pull multiple pieces of data from the sales order table, and soon I will show you exactly how to get data from multiple tables at once; guess what, you will be using foreign keys to do that. Let's talk about ORDER BY now. Clear that out, and say we want everything: SELECT * FROM sales item this time, WHERE the discount is greater than 0.15, and then I want to change the order in which the rows appear in the output, so I say ORDER BY discount. Run it and you can see we pulled that too, and now you can see who is getting the biggest discount: it starts at 0.16 and climbs, and what is the biggest here, 0.19? Oh, somebody got a 20 percent discount. So that is one way of using ORDER BY.
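The three queries narrated above, sketched with assumed snake_case column names (the customer column on sales_order is assumed to be cust_id):

    SELECT * FROM sales_item WHERE discount > .15;

    SELECT time_order_taken, cust_id
    FROM sales_order
    WHERE time_order_taken > '2018-12-01'
      AND time_order_taken < '2018-12-31';

    SELECT * FROM sales_item
    WHERE discount > .15
    ORDER BY discount;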
Another thing: say we wanted the greatest discount up at the top instead of going from lowest to highest. We just come in after discount and say DESC for descending instead, and now you get exactly what you want: the 20 percent discount is the greatest amount. Sometimes we also want to limit the number of rows in the results, say just the top five or the top ten or something along those lines. After ORDER BY discount DESC you can follow up with LIMIT and whatever number you want; I just want five of them, so it gives me only the top five, and I could do the top ten or anything else. Let's go into another query now: say we want the first and the last name combined into one field, along with the phone number and the state. We can combine them with CONCAT: SELECT CONCAT(first name, a space, last name), and you can use an alias for this, so say AS and refer to it simply as name; then also get the phone number and the state, FROM customer. And say we only want this customer information for people who live in Texas or somewhere like that: WHERE state equals Texas (after correcting a typo in the table name), run it again, and now you get just the customers from the state of Texas, the column has been renamed name, and each person's first and last name appear together in one field, which is very useful. Another thing you can do is perform calculations, and there are tons of functions we will touch on over the course of this video. Say we want the total value of all the business shoes in our inventory. SELECT: what do we need for business shoes? We need the product ID, and we want to sum the price, so SUM(price), and I will call it AS total, renaming that column. Where do we get this from? FROM item, then put the WHERE on another line: WHERE product ID equals 1, which is our business shoe, and let's also GROUP BY the product ID, which then gives us the sum for just the business shoes. Run it and that is the total value of our inventory of business shoes; clearly a real store would have way more products than this, but I am just trying to keep it easy to understand. Another very useful keyword is DISTINCT, which you can use to do things like eliminate duplicates. Say we wanted a list of the states we have customers in, for example: we could say SELECT DISTINCT, and what we want to be distinct is the state, FROM customer, and then ORDER BY state, and we get customers in California, Georgia, Illinois, New Jersey, and Texas.
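Sketches of the queries in this stretch; I have written the state as the two-letter abbreviation 'TX' and the phone column as phone, both of which are assumptions about the actual data:

    SELECT * FROM sales_item
    WHERE discount > .15
    ORDER BY discount DESC
    LIMIT 5;

    SELECT CONCAT(first_name, ' ', last_name) AS name, phone, state
    FROM customer
    WHERE state = 'TX';

    -- Total inventory value of product 1, the business shoe.
    SELECT product_id, SUM(price) AS total
    FROM item
    WHERE product_id = 1
    GROUP BY product_id;

    SELECT DISTINCT state FROM customer ORDER BY state;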
Now say we wanted to find all the states where we have customers but we do not want to include California. We still say SELECT DISTINCT state FROM customer, all of that is still good, but we put in a WHERE clause: WHERE state is not equal to California (we have decided we do not like California). There it is, and now California is gone; of course we could also use the other version of not-equal, <>, instead if we prefer. Another little tool here is IN. We could again do SELECT DISTINCT state FROM customer and say we just want certain states listed, so WHERE state IN and then the list we want: say just California (we like California again) and New Jersey. Run it and there are California and New Jersey. Now let's cut to the chase: it is going to be extremely important for us to get data from multiple tables, and we can get results from multiple tables with inner joins, outer joins, or unions. The most common join is the inner join: with it you join data from two tables in the FROM clause with the JOIN keyword, and the ON keyword is used to define the join condition. Say we wanted to get every item ever ordered, sort them by ID, and list their price: SELECT item ID and price FROM item INNER JOIN sales item (so we are joining the item and sales item tables here), and the way they join is that they share a key, so ON the item's ID being equal to the sales item's item ID, and then ORDER BY item ID. There it is; run it and it shows all of that information: here are the IDs, and these are the items that were actually ordered, which is why some items in our inventory do not appear, and this is the listing price for those orders. Just to reiterate, we use the join condition to find IDs that are equal across the item and sales item tables. Look at them: sales item has item ID and item has its own ID, so here it is as a foreign key and here it is as a primary key, and joins like this are normally done using the primary and foreign keys of the tables. Whenever we join tables while checking for equality on a common column, it is called an equijoin. Let's do another example: item ID and price once again, FROM item INNER JOIN sales item ON the same key match, and let's also stack a condition on this with AND (maybe it makes more sense to put it on its own line; I do not like these lines getting too long): AND price is greater than 120, or something along those lines. Run it and there you can see the filtered results.
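The DISTINCT variants and the two inner joins, sketched out (same caveat about the two-letter state values):

    SELECT DISTINCT state FROM customer WHERE state != 'CA';        -- or <> 'CA'
    SELECT DISTINCT state FROM customer WHERE state IN ('CA', 'NJ');

    SELECT item_id, price
    FROM item
    INNER JOIN sales_item ON item.id = sales_item.item_id
    ORDER BY item_id;

    SELECT item_id, price
    FROM item
    INNER JOIN sales_item ON item.id = sales_item.item_id
      AND price > 120;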
Now I would like to take it up another notch and join three tables, so I can get the orders, the quantity, and the total sales. To show it in picture form, we are going to use the sales order table, we have our sales item, and then we are also going to join to our item. Specifically, in sales item we have a sales order ID, and we want to make sure it matches up with the sales order table's ID down below (that is what this first arrow is displaying), and then over in item we want its ID to match up with the sales item's item ID; those two are connected through this ID, and the other pair through that one, and once we join them all together we can pull out all kinds of interesting information. So what are we going to do? Let's start from scratch: SELECT, and I want my sales order table's ID (sales order dot ID), I also want the quantity from my sales item table, and I am also going to get item dot price, and then let's take the quantity and multiply it by the price, so on the next line sales item dot quantity (referring to the sales item table and the quantity stored there) times item dot price, and I will give that the name total. Now we need to define how we are going to match up all of these tables: I want the sales order and I want to JOIN it to the sales item table, and the way I want them joined is ON sales item dot sales order ID equal to sales order dot ID (I have all of the table information in a chart that I am looking at as I do this). Just to reiterate and make sure this is understandable: I want to use my sales order table and merge it with my sales item table, and what I am saying is that I want these matched, the sales item's sales order ID and the ID of the sales order table; that is all I did there. But I said I wanted to merge three tables, so what do I do? I just put in another JOIN with the table I want to bring in, and they simply need to match up, a primary key fitting perfectly with a foreign key: JOIN item ON item dot ID equal to sales item dot item ID, and that is why it is good to keep your names logical. Then I say ORDER BY sales order dot ID, and when we run it everything comes up; bring the results up so you can see them a little better: we have our IDs, we have our quantities, and we have all of our prices, and you can see in a couple of rows the quantity is two, so the total is double the amount of money on those.
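The three-table join as narrated, written out as a sketch:

    SELECT sales_order.id,
           sales_item.quantity,
           item.price,
           sales_item.quantity * item.price AS total
    FROM sales_order
    JOIN sales_item ON sales_item.sales_order_id = sales_order.id
    JOIN item       ON item.id = sales_item.item_id
    ORDER BY sales_order.id;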
All right, so what do I want to do now? Let's talk about some arithmetic operators, because I think they are kind of important, even though they are very easy to understand: you can add, subtract, and divide, you can do integer division with div, and you can get the modulus, which is the remainder of a division and becomes very important later on. Those are pretty straightforward. You can also define the join conditions using WHERE, but this is not necessarily considered best practice; I will cover it anyway so you recognize it if you ever see it. Something like: SELECT item ID and price, pulling them FROM item and sales item, and then WHERE item dot ID is equal to sales item dot item ID (after a couple of typos on my part), AND the price is greater than one hundred and twenty dollars, and I also want to ORDER BY the item ID. Run it and you can see, here they are: another way of using WHERE, but like I said, not necessarily the greatest idea, though it can be done. Now, another type of join is what is called an outer join. Outer joins return all of the rows from one of the tables being joined even if no matches are found. Basically, a left outer join returns all rows from the table being joined on the left, and a right outer join returns all rows from the table on the right, and it is common practice to avoid right joins, so you are more than likely going to use a left join if you use these at all. Let's go into an example so you can see what it looks like: SELECT name, supplier, and also price, FROM product, and I will do a LEFT JOIN with item, and then I have my condition, ON, which is always the same idea: it connects with foreign keys, more than likely, unless you use WHERE, which I told you you do not really want to do. So it is item's product ID equal to the product table's ID. I always name my primary keys the same thing, id, because it makes everything very easy: I see id and I know it is the primary key, I see product id and I know it is my foreign key, the two look almost exactly alike, and I can see exactly what is going on. Let's also ORDER BY name, run it, and pull the results up: you get the name, the supplier, and the price, and you can cycle through the limited number of shoes we have available (a very sad shoe store, but like I said, I am keeping it simple). So that is how we can do that type of join, a left outer join. Now let's talk about cross joins; as you will see, I do not use the other join types that much, but I want to cover them. Basically, a cross join includes data from each row of both tables paired together. Say I want to grab information from the item and sales item tables: SELECT sales order ID, the quantity, and the product ID, FROM the item table, CROSS JOIN with the sales item table, and I will ORDER BY sales order ID. Run it and you can see we get our sales order IDs, our quantities, and our product IDs as well.
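Sketches of the WHERE-style join, the left outer join, and the cross join described here:

    -- Old-style join written in the WHERE clause (works, but JOIN ... ON is preferred).
    SELECT item_id, price
    FROM item, sales_item
    WHERE item.id = sales_item.item_id
      AND price > 120
    ORDER BY item_id;

    -- Left outer join: every product appears, with NULL price where no item matches.
    SELECT name, supplier, price
    FROM product
    LEFT JOIN item ON item.product_id = product.id
    ORDER BY name;

    -- Cross join: every item row paired with every sales_item row.
    SELECT sales_order_id, quantity, product_id
    FROM item
    CROSS JOIN sales_item
    ORDER BY sales_order_id;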
Another way to combine tables is with what are called unions. Unions combine the results of two or more SELECT statements into one result, and the important things to remember are that each result must return the same number of columns and that the data in each column must have the same data type. Let's do something a little more interesting: say we want to send birthday cards or something to all of our customers and sales people born in the month of December. Could we do this? Yes, we could. SELECT first name and last name, and we need the address as well (how else would we send anything?), and let's get the birth date too, FROM our customer table. Now, if we want to pull just the month out of a date value, we say WHERE EXTRACT(MONTH FROM birth date) equals 12 (you can get the day and the year in very similar ways). Then I do a UNION between this and another SELECT: first name, last name, and the rest of the columns copied over, FROM sales person, with the same WHERE clause copied underneath, and then let's add an ORDER BY birth date at the end. (Again, you are not going to use unions very much; when we get to the part of the tutorial where I am doing real-world work, you will see that I do not use unions and outer joins all that much.) Run it and you can see we got the pertinent information for everybody born in December, employees as well as customers, and I guess this is the kind of real-world thing you might actually want to do. Let's clear that out. We also talked previously about how NULL is used when a value is not known; we can use IS NULL to search for potential problems, for example null results where they should not exist. Say we want to search for items with null prices: we do not have any (at least I do not believe we do), but I will do it anyway so you can see it, and I guess we will find out: SELECT product ID and price FROM our item table WHERE the price IS NULL, run it, and you can see we do not have anything. We could also come in and say the price IS NOT NULL instead, run that, and everything comes back, with a total of 50 rows.
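The union and the null checks, sketched out; the address column is assumed to be street, since the narration only says we need somewhere to mail the cards.

    SELECT first_name, last_name, street, birth_date
    FROM customer
    WHERE EXTRACT(MONTH FROM birth_date) = 12
    UNION
    SELECT first_name, last_name, street, birth_date
    FROM sales_person
    WHERE EXTRACT(MONTH FROM birth_date) = 12
    ORDER BY birth_date;

    SELECT product_id, price FROM item WHERE price IS NULL;      -- nothing comes back
    SELECT product_id, price FROM item WHERE price IS NOT NULL;  -- all 50 rows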
Another thing we have is regular expressions, which we can use to search for simple string matches or really complicated ones. Say we wanted to match any customer whose first name begins with an M. How could we do that? We could write SELECT first name FROM our customer table WHERE the first name is, and there are a couple of different operators here and I am going to show you all of them, starting with SIMILAR TO. If we want to match names that begin with M, just put an M in the pattern and then a percent sign after it; the percent sign matches zero or more characters, so basically anything afterwards. Run it and there is our information: all of these first names begin with the letter M. We have a lot of different pattern characters, so next let's do first name and last name FROM customer, and say we want to match somebody whose first name begins with an A and has five characters after it. There is another pattern character, the underscore, and it matches any single character, exactly one. I think I have an Ashley in here, so WHERE first name, and instead of using SIMILAR TO I will use LIKE; everything else stays the same: change the letter to A and then put five underscores after it, one two three four five. Run it and yes, indeed, I do have an Ashley in there, so that matches exactly that shape. But if I put another underscore in, it is now looking for a name that begins with A and has six characters after it, and obviously, when we run it, we get no results; adjusting the number of underscores again does not turn up anybody else either. So that is how we can use LIKE. One more: say we want to return all customers whose first name begins with a D or (let's make it even more complicated) whose last name ends with an N. SELECT first name and last name FROM customer, and for the first name let's use SIMILAR TO again: the first name begins with a D followed by any number of characters, OR the last name (SIMILAR TO again; the letter can be lowercase, that is fine) ends the way we want, so any number of characters in front and then an n at the end. Run it: we said the first name should begin with a D, and there is a Daniel, or the last name should end with an n, and Brown ends with an n even though there is no D, and that is the reason all of those rows matched.
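The three pattern-matching queries, as sketches; note that SIMILAR TO and LIKE are case sensitive, so the trailing letter in the last pattern is a lowercase n to match names like Brown.

    SELECT first_name FROM customer
    WHERE first_name SIMILAR TO 'M%';    -- % matches zero or more characters

    SELECT first_name, last_name FROM customer
    WHERE first_name LIKE 'A_____';      -- A plus exactly five characters, e.g. Ashley

    SELECT first_name, last_name FROM customer
    WHERE first_name SIMILAR TO 'D%'
       OR last_name  SIMILAR TO '%n';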
Now let's take a look at regular expressions properly and give you a good overview of them. Here are some regular expression patterns. I showed you the underscore; in regular expressions you can also use a dot for any single character, a star for zero or more, and a plus for one or more; the caret symbol represents the beginning of a string, and the dollar sign represents the end of a string (I will demonstrate these in real-world examples, but it is good to know them because they are very, very important). Say you do not want to match the letters a or b: you put them in brackets with a caret in front, [^ab], and it will not match a or b; if you do want to match a or b, use the brackets without the caret. You can match all the uppercase letters by putting a dash between A and Z inside brackets, and the same works for lowercase letters as well as numbers. Then, whenever you have curly brackets with an n inside, that means n instances of whatever comes before it, a certain number of letters, numbers, or whatever the preceding characters represent, and you can also ask for between m and n instances, a minimum and a maximum. And here is our or symbol, the pipe, which matches either the pattern on its left or the one on its right. Let me show you some real-world examples so you can better understand the concept. Say we want first names that start with Ma: SELECT first name and last name, working with the customer table again, WHERE the first name, and here I use another operator, the tilde (do not leave it out), which is what allows us to search with regular expressions. I said I wanted to match first names that start with Ma, so I put the caret symbol in the pattern to represent the beginning of the string, followed by M and a, and I do not need anything else; it gives you Matthew and Matthew. Now say we wanted to match last names that end with ez. We still have our first name and last name from customer, but let's change the condition to the last name just to be different. Before, we used the caret to reference the beginning of a string; now we want to represent the end of a string, so the pattern is ez followed by the dollar sign. Run that and all the Martinezes show up. Say we wanted to match last names that end with ez or son: again FROM customer, WHERE last name with the tilde, and we can easily replace the plain dollar-sign pattern with an or group, (ez|son), followed by the dollar sign. Run it and now you get Martinez, and Anderson, Johnson, Wilson, Jackson, and all of those names. The cool thing about regular expressions is that once you learn them, they are pretty much exactly the same in every other programming language, and I have tutorials on regular expressions; just search for my name plus "regular expression" and they will show up. Let's do one more, though: say we want last names that contain w, x, y, or z. All of this stays exactly the same, except the pattern becomes brackets containing the letters w through z. Run it and you can see there is our list. I think that is enough for now in regards to regular expressions.
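The four regex queries using the tilde operator, sketched out:

    SELECT first_name, last_name FROM customer WHERE first_name ~ '^Ma';       -- starts with Ma
    SELECT first_name, last_name FROM customer WHERE last_name  ~ 'ez$';       -- ends with ez
    SELECT first_name, last_name FROM customer WHERE last_name  ~ '(ez|son)$'; -- ends with ez or son
    SELECT first_name, last_name FROM customer WHERE last_name  ~ '[w-z]';     -- contains w, x, y, or z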
Now I would like to talk about GROUP BY. We have already touched on it briefly, but let's get into more detail: GROUP BY defines how results are grouped, which is pretty simple, and we will use another function, COUNT, which returns the total number of records that match, and then GROUP BY returns a single row for each unique value. Let's do an example: say we want to find out how many customers have birthdays in each month. We say SELECT, then use EXTRACT to get the month from the birth date and give it the alias month, and I also want to count how many customers have a birthday in that month, so COUNT of all of them with the alias amount. Where am I pulling this from? The customer table. And how do I want to group them? By the month (the alias we just created), and then I want to order them by the month as well. Run it and we can see, for our customers, what the big month is; there are a couple of them, including March with three, and so forth and so on. That is how we can use GROUP BY, and we will do more with it, of course. Another useful tool is HAVING, and what HAVING does is narrow the results based on a condition. In this situation, say we want to get the months in which more than one person has a birthday: SELECT EXTRACT(MONTH FROM birth date) AS month and COUNT of all of them, FROM customer, GROUP BY month, and then we also come in and say HAVING a count that is greater than one, so we only get months with more than one birthday, and there you can see that is exactly what we got. Good stuff. Another thing is that we have a lot of aggregate functions, some of which we have covered already; basically, an aggregate function returns a single value computed from multiple inputs. Let's do another one; I have done COUNT, like I said, so say we want the summed price of our items: SELECT SUM(price) FROM item, and you get the total of the prices for every single item. There are a whole bunch of these, so let me get several at once: a COUNT of all the rows with the alias items (maybe we want to put the count first; I think that makes more sense), then SUM(price) AS value, then let's do an average, so ROUND around AVG(price), rounded to two decimal places (AVG, like SUM and COUNT, is an aggregate function; ROUND just formats the result) and, keeping things consistent, aliased AS average; then say we want the minimum price, so MIN(price) AS minimum, and you can also get the maximum price, so MAX(price), labeled AS max, and we get all of those FROM the item table. Run it and there you can see we have 50 total items, the sum of all of them is 7,231, the average price is 144.63, the minimum value of an item is 86 dollars, and the maximum is 199.
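The grouping and aggregate queries from this passage, sketched out; I have read the rounding step as ROUND(AVG(price), 2), which matches the two-decimal 144.63 result, and this form assumes price is a NUMERIC column (cast with ::numeric otherwise).

    SELECT EXTRACT(MONTH FROM birth_date) AS month, COUNT(*) AS amount
    FROM customer
    GROUP BY month
    ORDER BY month;

    SELECT EXTRACT(MONTH FROM birth_date) AS month, COUNT(*) AS amount
    FROM customer
    GROUP BY month
    HAVING COUNT(*) > 1;

    SELECT COUNT(*)             AS items,
           SUM(price)           AS value,
           ROUND(AVG(price), 2) AS average,
           MIN(price)           AS minimum,
           MAX(price)           AS max
    FROM item;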
And that brings us to another concept which is very useful: views. A view is extremely useful; it is basically a SELECT statement whose result is stored in your database. Let's make a really complicated one, because we have not made anything that complicated yet: I want to create a view that contains our main purchase order information. This is how you create a view: you say CREATE VIEW, then the name, purchase order overview in our case, then AS, and then the SELECT. We will select the sales order's purchase order number and the customer's company, then on the next line the sales item quantity (making sure I spell quantity right), then our product's supplier. It might be useful, as I mentioned before, to pull up the GitHub page that has all of my table definitions on it, because that is what I am referencing, but I do not really have the screen real estate to show it at the same time as everything else. I also want the product name, especially for this one, and I want the item's price; remember, I keep the price information separate from the product. Anything else? Yes, let's add a total, though one thing to know is that you cannot keep this total if you want the view to stay updatable as the data changes; I will show you a fix for that below, but for now let's just do it this way: sales item quantity multiplied by the item price, with the alias total, and do not forget the commas between your select expressions (did I forget one? Yes, after company, so put another comma in there). Let's also do a CONCAT, and you should likewise remove the CONCAT if you want this to be updatable, but for now I will leave it in: sales person dot first name, a space, and sales person dot last name. As a heads up, what you would basically do is remove the total and the CONCAT from the view, keep all of the rest, and then ask for those whenever you query the view; either way, do not worry if that does not make sense yet. Let's give this an alias too, so AS sales person. What else? Come down here; we are going to say FROM sales order, and then we just join some tables like we did before: JOIN, and sales item is the other table we want to join, then we need the condition for how they join, which is sales item dot sales order ID equal to the sales order's ID; of course, this is just the two tables joining together on the sales order ID and the ID column of the sales order table. Let's add another JOIN and bring item into this: the way we do that is item dot ID equal to sales item dot item ID. I like to use the same format over and over again: if I am joining on item's ID, I know the other side is just dot item ID on the referencing table, which I think is so much easier, and if I want to join sales order to sales item, I just use the ID of the sales order table and it automatically matches; everything goes together really well.
Let's do some more joins here, just to make this even more complicated. JOIN customer — I want to match the sales order's customer ID with the ID inside the customer table, so the customer ID equal to the customer's ID (I really should have named that column customer_id instead of cust_id, but either way it's fine). Let's also JOIN product, ON product.id equal to the item's product ID, and one more: JOIN sales_person, ON the salesperson ID equal to sales_order.sales_person_id. I think that's it, so let's finish with an ORDER BY on the purchase order number. We have all of that set up, so let's run it — is there an error anywhere? It doesn't look like it; it returns successfully. The view is now stored in the database: under Views you can see purchase_order_overview along with all of its column information. What's really cool is that whenever the data in the database is updated, so is what the view shows — except for this CONCAT and this total; if you want the view itself to stay updatable, you have to pull those out and put them in your queries instead. Another interesting thing is that you can use a view in all the same ways you can use a regular table. If you want a view to be updatable, you can never use DISTINCT, UNION, aggregate functions, or expressions like CONCAT, and you also can't use GROUP BY or HAVING — I think that's every single thing you can't use. To see the results of your view, you just write SELECT * FROM purchase_order_overview. Copy that, paste it down below, select it, hit F5, and here you can see all of your data organized in a really nice, easy way: the purchase order, the company, the supplier, the quantity, the name — all of that. Now, if we remove the total so the view can be updated, we can simply calculate it when we query, and you can do the same with the CONCAT. So let's copy the definition, call it purchase_order_overview_2, and take the total part out — just yank it out of there. Highlight it, hit F5, and that was created. To recalculate the total, you select everything and add the extra column on top: quantity times the price, AS total, FROM purchase_order_overview_2. Select it and run it, and we still get all of our same information, but this version is now updatable — well, almost: we would also have to get rid of the salesperson first and last name CONCAT and list those as separate columns, since we can't use expressions like that in an updatable view, but other than that everything works the same, and that would be really useful.
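To make the view concrete, here is a minimal sketch of the definition described above. The table and column names (sales_order.id, cust_id, item.product_id, and so on) are assumptions based on the schema discussed in this tutorial — check the GitHub reference the instructor mentions for the exact names.

    CREATE VIEW purchase_order_overview AS
    SELECT sales_order.purchase_order_number,
           customer.company,
           sales_item.quantity,
           product.supplier,
           product.name,
           item.price,
           sales_item.quantity * item.price AS total,   -- breaks updatability
           CONCAT(sales_person.first_name, ' ', sales_person.last_name) AS salesperson
    FROM sales_order
    JOIN sales_item   ON sales_item.sales_order_id = sales_order.id
    JOIN item         ON item.id = sales_item.item_id
    JOIN customer     ON customer.id = sales_order.cust_id
    JOIN product      ON product.id = item.product_id
    JOIN sales_person ON sales_person.id = sales_order.sales_person_id
    ORDER BY purchase_order_number;

    SELECT * FROM purchase_order_overview;

The _2 variant described next simply leaves the total and salesperson columns out so the view stays updatable, and computes quantity * price AS total in the SELECT against the view instead.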
And if you decide you don't like these views, you can get rid of them too: you just say DROP VIEW and the view name, run it, and the view has been dropped. All right, now we're going to get into the real meat of this tutorial, which is built around actually creating functions, and there are different ways of creating them. First I'm going to talk about creating functions using plain SQL statements, and then I'll cover PL/pgSQL, which is very heavily influenced by Oracle's PL/SQL, so you can basically write programs just like you would in any other traditional programming language. Here's the basic overview of an SQL function. You write CREATE OR REPLACE FUNCTION, then whatever your function name is, then RETURNS — if it's void, that means it returns nothing; otherwise you put the data type you're returning — then AS, and then inside quotes (we'll actually be using things called dollar quotes very soon) you put all your SQL commands, and finally you define that you're using the SQL language. I think this makes more sense if we just go and create something. I want to create a function that receives two values, adds them together, and returns an integer. So: CREATE OR REPLACE FUNCTION — I like to start my function names with fn_ — fn_add, and inside the parameter area you show what values it receives; you can just put int, int, which means it gets two integers (later on we'll put named variables in there, but I want to demonstrate everything step by step). Then RETURNS integer — I like to put the return statement down below — then AS, then your quotes, and inside them a SELECT. If you want the first value that was passed in, you use a dollar sign and a 1, and for the second value a dollar sign and a 2, so the function returns the result of that addition, $1 + $2. To actually execute this, first we have to run the function definition — oh, and a comment needs two dashes in front of it, but I'm just going to remove that comment since we don't need it. Select it all and run it: the function is created. You'll also see a Triggers section in here — we'll cover triggers later on; that's something completely different.
Let's close all these columns and tables to tidy up. If we go up here into Functions, you can see it created my function: it's called fn_add and it takes two integers. To actually use it, we can just write SELECT and then call the function — fn_add(4, 5) — select that, run it, and our result down here is 9. So those are the basics, and everything we create is going to show up inside the Functions folder. Now I'm going to get rid of those single quotes, because from here on you're never going to see them; I'm going to use something called dollar quotes instead. The reason is that very often we'll want to use single quotes inside of our SQL statements, and we can't do that if the whole body is wrapped in single quotes. So we replace them with dollar-quote tags, and I always like to put the word body inside the tag, like $body$ ... $body$. A bare $$ ... $$ is the basic dollar quote and you can use it, but there are circumstances in which that causes problems, so I prefer a named tag. Let's leave the body the way it is; because we have CREATE OR REPLACE FUNCTION, running it again replaces what we had before. Run it, select the call again, and we get the same result once again. Next, let's create a function that actually returns void. I want to check whether a salesperson has a state assigned, and if not, change it to Pennsylvania. I'll call it fn_update_employee_state, and it's not going to return anything because it doesn't need to — it just performs an action. Inside the body I put my SQL statement: UPDATE sales_person and SET the state to Pennsylvania WHERE the state IS NULL — in other words, if the state is null, I set it. (I'm also using these examples to show a few SQL statements I didn't cover elsewhere, so I don't repeat myself too much.) Select all of that and run it, and we've created another function. It isn't showing yet because the tree needs to be refreshed, so refresh, open Functions, and there it is. Now copy the function name and call it down in the SELECT area — it doesn't take any parameters; oops, I just realized I left the parameters in from before, so let's get rid of them, since there's no reason for them to be there. Because I changed the parameters, another version of the function will show up: if you refresh again you'll see fn_update_employee_state both with and without parameters, but that's perfectly fine.
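Put together, the two functions from this part look roughly like the sketch below. Identifiers such as fn_add, fn_update_employee_state, sales_person, and state follow the names used in the walkthrough; treat them, and the state value, as assumptions about the sample data.

    -- Simple SQL function with positional parameters and a dollar-quoted body
    CREATE OR REPLACE FUNCTION fn_add(int, int)
    RETURNS integer
    AS $body$
        SELECT $1 + $2;
    $body$ LANGUAGE SQL;

    SELECT fn_add(4, 5);   -- returns 9

    -- A function returning void: it only performs an action
    CREATE OR REPLACE FUNCTION fn_update_employee_state()
    RETURNS void
    AS $body$
        UPDATE sales_person
        SET state = 'Pennsylvania'   -- or the two-letter code, depending on how states are stored
        WHERE state IS NULL;
    $body$ LANGUAGE SQL;

    SELECT fn_update_employee_state();  -- returns NULL because there is no return value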
I'll show you how to drop all of these functions a little later. For now, paste the call in — this version doesn't receive any parameters — select it, run it, and you'll see that NULL comes back, because there is no return value. Let's do some more. Say we want the maximum product price. We'll change the definition up here and call it fn_max_product_price, because that seems like a good name. In this situation we're returning a number; I normally just use numeric, because that works well for me and it can represent a decimal value. Get rid of the old body, and instead SELECT the aggregate function MAX over the price, FROM item; everything else stays the same. Run it, then call the function — it doesn't receive any parameters either — select it, hit F5, and the maximum price shows up right there. Let's continue making useful functions. Say we want the value of our inventory: fn_get_inventory_value, also returning numeric, and the body is a SELECT with SUM over the item price, to get the total of all of them, FROM item (you can throw a semicolon on the end if you like). Select it all, F5, it's created, then copy the call, paste it in, and there is our total inventory value. Now say we want the total number of customers: call it fn_number_customers. It's really an integer, but we can leave the return type as numeric — why not. The body is SELECT COUNT(*) FROM customer. Run that, then run the call — we can use these functions in queries and anywhere else we'd like — and you see we have a total of 20 customers; our business is not doing that great. Let's do something a little more complicated: the number of customers who do not have a phone number. I'll call it fn_get_number_customers_no_phone — don't worry, we'll get into more complicated things — again returning numeric, and the body is a SELECT COUNT(*) FROM customer with a WHERE clause: WHERE phone IS NULL. Run it, copy the call first, paste it in, select it, run it, and we see that we have telephone numbers for every single customer, so that's pretty good. Let's take it up a notch and use a parameter this time. Say I want the number of customers from Texas, passing the state in as a named parameter — so really this will get the number of customers from any state; I'm just interested in Texas — and I'll call it fn_get_number_customers_from_state.
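The no-parameter functions from this stretch all have the same shape — an aggregate wrapped in an SQL function. A condensed sketch, with table and column names assumed from the tutorial's schema:

    CREATE OR REPLACE FUNCTION fn_max_product_price()
    RETURNS numeric AS $body$
        SELECT MAX(price) FROM item;
    $body$ LANGUAGE SQL;

    CREATE OR REPLACE FUNCTION fn_get_inventory_value()
    RETURNS numeric AS $body$
        SELECT SUM(price) FROM item;
    $body$ LANGUAGE SQL;

    CREATE OR REPLACE FUNCTION fn_get_number_customers_no_phone()
    RETURNS numeric AS $body$
        SELECT COUNT(*) FROM customer WHERE phone IS NULL;
    $body$ LANGUAGE SQL;

    SELECT fn_max_product_price(), fn_get_inventory_value(), fn_get_number_customers_no_phone();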
Inside the parameter section I'll put the state — actually, let's call it state_name, because I'm obviously going to be pulling the state column from the database and that would cause confusion — and I know it's two characters, so I'll declare it as CHAR(2). It returns a numeric, and in the body I count from customer WHERE the state in the customer table is equal to whatever state name is passed in. Select that, paste the call down below, and pass in the code for Texas as the parameter. Did I run the function definition to create it? I did not, so highlight it, F5, and now run the call — and we see that we have 11 customers from the state of Texas. Next, let's get the total number of orders using a customer name. Very often when you're doing something like this, it makes a lot of sense, before writing the CREATE, to use regular old SQL to verify the query is going to work. So: SELECT COUNT(*) FROM sales_order with a NATURAL JOIN to customer, WHERE customer.first_name = 'Christopher' — I'm using a specific name here, and in the function I'll pass the name in instead — AND customer.last_name = 'Jones'. That's all I need to do, and if this works, then I know it's going to be easy to make my function work. Select it, run it, and I get 1 — I only have one Christopher Jones — so it looks like it worked really well. Now I want to take that query and put it up inside a function. I want the number of orders for a specific customer name, so I'll call it fn_get_number_orders_from_customer. The first name is just going to be a variable number of characters, so the parameter is customer_f_name, declared as VARCHAR, and then customer_l_name, also VARCHAR — I'm spelling that out because it's about to run off the screen. What's being returned? The number of orders, so the numeric return type doesn't need to change. For the body, I already wrote the query down below and it works, so I'll just cut it out of there and paste it in, and then — I like to keep everything nice and organized — the first name condition becomes equal to the customer first name that was passed in, and the last name condition becomes the customer last name. Now we have a function that we can use to search for any customer.
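Here is roughly what those two parameterized SQL functions look like once assembled (a sketch against the assumed sample schema; the two-letter state code format is an assumption):

    CREATE OR REPLACE FUNCTION fn_get_number_customers_from_state(state_name CHAR(2))
    RETURNS numeric AS $body$
        SELECT COUNT(*) FROM customer WHERE state = state_name;
    $body$ LANGUAGE SQL;

    CREATE OR REPLACE FUNCTION fn_get_number_orders_from_customer(customer_f_name VARCHAR,
                                                                  customer_l_name VARCHAR)
    RETURNS numeric AS $body$
        SELECT COUNT(*)
        FROM sales_order NATURAL JOIN customer
        WHERE customer.first_name = customer_f_name
          AND customer.last_name  = customer_l_name;
    $body$ LANGUAGE SQL;

    SELECT fn_get_number_customers_from_state('TX');                    -- e.g. 11
    SELECT fn_get_number_orders_from_customer('Christopher', 'Jones');  -- e.g. 1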
That's all I need to do there. To check that it works, I'll copy the function call, come down, paste it into the SELECT area, and pass in 'Christopher' and 'Jones' as the parameters. Did I run the definition? I don't know, so let's run it again just to be safe — it returns successfully — then run the call, and we see that we have one order from said customer, Christopher Jones. Next, let's return a whole row, which is actually called a composite. Say we want to get the latest order: I'll call it fn_get_last_order, and it doesn't need any parameters because I'm just getting the last order, nothing else. For the return type I'll say RETURNS sales_order, then AS — since this is returning a sales order row — and I'll get rid of the old body because I don't need it. The body is: SELECT everything FROM — well, this returns a sales order, so guess what I'm selecting from — sales_order, ORDER BY time_order_taken with DESC thrown in, and LIMIT 1; that gives me the last order. Grab the call, copy it, paste it in just that easily, run it, and it gives me the last order — all the information I want, but it's not in a table format, so maybe I don't like it. If you want it in a table format it's easy: just put parentheses around the call and then a dot and a star, (fn_get_last_order()).* — select that, F5, and now everything is in a nice table format. You could also ask for any individual column name, like time_order_taken, to get just that one piece of information. Very useful stuff. Now let's make it possible to get multiple rows — say I want all the rows for every employee in California. First, let's verify that I can do this with plain SQL: SELECT everything FROM sales_person WHERE the state is equal to California. I've got this query, and if it works I can use it in my function — and it looks like it did; everybody I expected is in the state of California, wonderful. So, CREATE OR REPLACE FUNCTION again: I'm getting employees, and I'll make it work for any state, so fn_get_employees_by_location, with a location parameter that is a variable number of characters. What does it return? Multiple rows — and in a situation where you want multiple rows you say RETURNS SETOF, and since I'm using the sales_person table, it's SETOF sales_person, a set of rows from that table. I already know the query works because I wrote it down below, so I just cut it out and paste it into the body, and the only difference is that the WHERE condition now compares the state to my location parameter.
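A sketch of the two row-returning functions just described — one returning a single composite row, one returning a set of rows. Names, and whether the state column holds full names like 'California', are assumptions from the tutorial's sample data.

    -- Return one whole row of the sales_order table (a composite value)
    CREATE OR REPLACE FUNCTION fn_get_last_order()
    RETURNS sales_order AS $body$
        SELECT * FROM sales_order
        ORDER BY time_order_taken DESC
        LIMIT 1;
    $body$ LANGUAGE SQL;

    SELECT (fn_get_last_order()).*;   -- expand the composite into table format

    -- Return many rows: SETOF the table's row type
    CREATE OR REPLACE FUNCTION fn_get_employees_by_location(location VARCHAR)
    RETURNS SETOF sales_person AS $body$
        SELECT * FROM sales_person WHERE state = location;
    $body$ LANGUAGE SQL;

    SELECT (fn_get_employees_by_location('California')).*;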
Then I can copy the function call, keep the same format I like — parentheses and a dot-star so everything comes back in table format — and in this circumstance I'll just pass 'California'; of course I could do any state. I don't know if I ran the definition or not, but it doesn't matter, I can just run it again — the query is good — then run the call, and you can see all of our employee information: four employees live in the state of California, and of course we could change that to wherever else we have people. Oh, and I said earlier that you can use these functions in queries, so maybe I should demonstrate that. Say I only want the names and the phone number out of this function — nothing else. I can write SELECT first_name, last_name, phone, then FROM, and I'm getting it from this function right here, so let's grab the function name (and copy the 'California' part too), paste it in after FROM, and run it — boom, now we get just the names and the telephone numbers for all of those employees. So that's a rundown of how to create numerous different functions using SQL functions; up next I'll show you how to make PL/pgSQL functions. All right, now we're getting into PL/pgSQL, which is very heavily influenced by Oracle's PL/SQL. This is the biggest part of the whole tutorial: it's going to allow us to loop, to use conditionals, functions, data types, and much more. The basic layout of a PL/pgSQL function is: CREATE OR REPLACE FUNCTION just like we did before, then the function name, all the parameters with their types, RETURNS and the return type, AS, then our body tags again, then BEGIN with all of our statements inside, then the END before the closing dollar tag, and finally we define the language as plpgsql. Let's start with a real-world type of example: I want to get my product price by name. First I'll just write a query to see if that works: SELECT the item price FROM item, then a NATURAL JOIN with my product table, then WHERE product.name is equal to 'Grand View' — I know that's one of them. If this works, then I know I can create my function, so select it, hit F5, and you can see that yes indeed I got a result: 199 dollars and 60 cents. Great stuff. Now all I need to do is transpose this into the function. I'll keep the fn_ abbreviation and call it fn_get_price_product_name, and for the parameter — let's call it product_name, to make it a little easier — it's a variable number of characters, and the function RETURNS a numeric, because it's a price.
Now we get to the BEGIN part, so let's get rid of the placeholder statements and put real statements inside. What am I going to put here? I'll grab the query from below — the whole entire thing — copy it, and paste it in. Inside the function you can no longer use a bare SELECT; what we use instead is RETURN: return the item price FROM item, which stays the same, the NATURAL JOIN with product again, and WHERE product.name — instead of 'Grand View' — is now compared to the product_name parameter, so it's interactive. Then you have your END, and everything else is perfectly fine. To actually use this function — and we'll be able to use it in all our other SQL queries as well — we write SELECT, paste in the function name, and pass 'Grand View'. Select it, run it, and — whoops, something is wrong: no function matches. Ah, I forgot to run the function definition — don't forget to run the function like I did. Select the definition first, run it — and another error: I see what I did, I put a q instead of a g in plpgsql. Fix that (it's a g, not a q), select the definition again and run it, then select the call and run that, and now we see we get the same result, exactly as we expected. Good stuff. Now I want to talk about using variables in functions. Let's do another example and keep it simple: just get a sum. We'll change this function into fn_get_sum, with some different variables in the parameters: value_1, which is an integer, and value_2, which is also an integer (you don't need the space in the list), and it returns an integer, so let's put that in there. Then AS, the body tag, and BEGIN — but first I want a DECLARE block, which is where you create your variables, and it goes before the BEGIN block, so copy it up there and keep it at the same level of indentation; that's perfectly fine. I'll declare a single variable, answer, which is an integer. Then inside the body we can get rid of everything else and just write answer, then the colon-equals assignment operator, and assign it the value of value_1 plus value_2.
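Here is the price-lookup function as a sketch. I've wrapped the query in parentheses as a subquery inside RETURN, which is the more defensive form; the video returns the query expression directly. Table and column names, and the NATURAL JOIN, follow the walkthrough's assumed schema.

    CREATE OR REPLACE FUNCTION fn_get_price_product_name(product_name VARCHAR)
    RETURNS numeric AS $body$
    BEGIN
        RETURN (SELECT item.price
                FROM item
                NATURAL JOIN product
                WHERE product.name = product_name);
    END;
    $body$ LANGUAGE plpgsql;

    SELECT fn_get_price_product_name('Grand View');   -- e.g. 199.60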
Then quite simply you say RETURN, and our answer, and there it is. We've got all of that created, we run it, everything looks good, and then we can come in here, grab the call, copy it, paste it in, and throw some values inside to see if it worked or not — 4 and 5 seems good. Run it, and boom, you see we get a value of 9, so that works great. That is how we assign variables in here using our DECLARE block. Now what I'd like to do is assign a variable's value with a query. Let's say I want to get a random number and assign it to a variable. I'll call the function fn_get_random_number — that's a useful one — and it's going to have a minimum value (I can't name the parameter min, since that's the name of an aggregate function) and a maximum value, both of them int, and it returns an integer. Again, inside the DECLARE area we'll have rand, which is also an integer, and then down in the statements I'll say SELECT, call the RANDOM function, multiply it by the max value minus the min value, then add the min value, and put that INTO our variable called rand. Then we can just come in and say RETURN rand, exactly like that; everything else stays the same. It's also common to put a semicolon after the END, but as you saw, you don't strictly need it. So let's generate a random number: copy the call, paste it in, and we need to define our minimum and maximum — let's say something like 1 and 50.
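A sketch of that function. Note that random() returns a double between 0 and 1, and assigning the scaled result to the integer variable rounds it. The parameter names min_val and max_val are my own.

    CREATE OR REPLACE FUNCTION fn_get_random_number(min_val INT, max_val INT)
    RETURNS integer AS $body$
    DECLARE
        rand INTEGER;
    BEGIN
        -- random() is 0..1; scale it into the requested range and store it in rand
        SELECT random() * (max_val - min_val) + min_val INTO rand;
        RETURN rand;
    END;
    $body$ LANGUAGE plpgsql;

    SELECT fn_get_random_number(1, 50);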
That looks good — and I don't believe I ran the definition yet, so run it, everything compiles perfectly, then run the call: I get a random value of 52, and running it again gives 25, 45, 29, 19, and so forth, so that is looking very good. For the next function I'd like to get a random salesperson's name — basically an example of storing rows in variables, and I'll use the CONCAT function as well. Say somebody calls and they don't currently have a salesperson, and we want to generate a random one for them, to give them a potential sale. So: fn_get_random_sales_person, which returns a variable number of characters, then AS, and inside DECLARE — do I want to put anything in there? Yes: a random integer again, and I'm also going to use a RECORD type to store row data, so I'll declare emp as RECORD. Down in the body I generate a random value: let's say I have five employees (I could have passed the number of employees in, but let's keep it very simple), so it's random() times 5 minus 1, plus 1, stored INTO the variable called rand. Then I want the row data for a random salesperson: SELECT everything FROM sales_person INTO the emp record WHERE their ID is equal to my random value. After that I want to concatenate the first and last name and return it: RETURN CONCAT of emp.first_name, a space between them, and emp.last_name, and there we go. We've got all that set up, so let's see if we did a good job or not: F5, look at that, it created the function; looks good. Then come up here and call it — it doesn't have any parameters (whoops, let me paste that in correctly) — get rid of the old call, run it, and see if it works. We got Jennifer Smith, and Jennifer Smith again — why do we keep getting Jennifer Smith? — and then Brittany Jackson, then Michelle; yes, it's totally random and looking good. So that is how we can get a row of data and store it, like I said, inside a RECORD type. What would I like to do now? I'd like to demonstrate IN, INOUT, and OUT parameters, which are used instead of the return types we've declared so far. I'll do another one that's just a simple summation: fn_get_sum_2, and we label the parameters, so an IN variable v1, which is an integer, another IN variable v2, also an integer, and then an OUT variable, answer, which is also an integer. In this circumstance we don't need the return type, so we get rid of it — it's just AS and the body section — and we're not going to need to declare anything either, so the DECLARE block goes away as well.
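The record-based function, sketched out. The hard-coded 5 matches the video's five salespeople; sales_person and its id, first_name, and last_name columns are assumptions from the sample schema.

    CREATE OR REPLACE FUNCTION fn_get_random_sales_person()
    RETURNS varchar AS $body$
    DECLARE
        rand INTEGER;
        emp  RECORD;                                  -- holds one whole row
    BEGIN
        SELECT random() * (5 - 1) + 1 INTO rand;      -- pick an id in the 1..5 range
        SELECT * INTO emp FROM sales_person WHERE id = rand;
        RETURN CONCAT(emp.first_name, ' ', emp.last_name);
    END;
    $body$ LANGUAGE plpgsql;

    SELECT fn_get_random_sales_person();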
The body quite simply assigns the addition of the two values to answer, and because answer is an OUT parameter it's automatically returned for us — that's all we need to do. Then we come down, write the call to fn_get_sum_2, pass in 4 and 5 again just like before, run the definition — everything looks like it compiled — run the call, and we see we get our answer. So that's one example of using IN and OUT. Let's do a couple more — say, for example, I want multiple OUT values. For this one I want to get my customer's birthday, so I'll call it fn_get_customer_birthday, with an IN variable for the month — the_month, an integer, that looks good — and then, getting rid of the rest, an OUT parameter birthday_month, which is an integer, and another OUT, birthday_day, also an integer, so there are multiple OUT values this time. I'd probably also like to get a name, so why don't we put these on separate lines: add a comma, bounce down, and also declare OUT f_name as a variable number of characters and OUT l_name the same. Then AS, the body, and BEGIN stay the same, but we'll have slightly more complicated statements inside. First I want to select the month, the day, the first name, and the last name: SELECT EXTRACT of the month FROM the birth_date on file, then EXTRACT of the day FROM birth_date, and on the next line the first name and the last name. All of these go INTO the variables where they'll be stored — you have to list them, and the order matters — so INTO birthday_month, birthday_day, f_name, and l_name, which are the OUT values we defined up in the parameters. Then FROM the customer table, and the conditional: WHERE EXTRACT of the month FROM birth_date is equal to the month that was provided in our IN parameter, and let's LIMIT this to one; the statement needs to end with a semicolon, of course. Did I do all that right? It looks like it — let's run it. Uh oh, there's an error: the EXTRACT(MONTH FROM birth_date) isn't closed off, which is why I got that. Run it again — did I make that error twice? No, it doesn't look like it; I just forgot that little parenthesis. Run it again — another error at last_name on line seven. What did I do wrong? Oh, I have an extra parenthesis there; that's the one that was missing from before. Fix it, run it again, and that time it worked.
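Both IN/OUT examples, assembled as a sketch (the customer columns birth_date, first_name, and last_name are assumptions from the sample schema):

    CREATE OR REPLACE FUNCTION fn_get_sum_2(IN v1 INT, IN v2 INT, OUT answer INT)
    AS $body$
    BEGIN
        answer := v1 + v2;        -- OUT parameters are returned automatically
    END;
    $body$ LANGUAGE plpgsql;

    CREATE OR REPLACE FUNCTION fn_get_customer_birthday(
        IN  the_month      INT,
        OUT birthday_month INT,
        OUT birthday_day   INT,
        OUT f_name         VARCHAR,
        OUT l_name         VARCHAR)
    AS $body$
    BEGIN
        SELECT EXTRACT(MONTH FROM birth_date), EXTRACT(DAY FROM birth_date),
               first_name, last_name
        INTO   birthday_month, birthday_day, f_name, l_name
        FROM   customer
        WHERE  EXTRACT(MONTH FROM birth_date) = the_month
        LIMIT  1;
    END;
    $body$ LANGUAGE plpgsql;

    SELECT fn_get_sum_2(4, 5);
    SELECT fn_get_customer_birthday(12);   -- first customer with a December birthday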
Now, say I want the very first customer who has a birthday in the month of December. Get rid of the old call, paste this one in — whoops, I forgot the fn_ part — pass in 12 for December, select it, run it, and we see that yes, Lauren Wilson has a birth date on the 26th of December; that's great. Another thing we'd like to do is return query results — how can we do that? Let's say I want to return salesperson data using a query. I'll call it fn_get_sales_people, and I'll get rid of the INs and OUTs and all that — but let's not get rid of the parentheses themselves, because that can cause some problems. Do I need to pass anything in? No, I don't. For the return type I'll say RETURNS SETOF, like we did previously with the SQL functions, and the set of things I want is sales_person, since we're using the sales_person table and its information to get these query results. Then AS, the body, BEGIN, and here I'll get rid of all the old code and simply say RETURN QUERY — very simple — and then the query itself: SELECT everything, and on the next line, FROM the table name, which is sales_person. That's all I need to do, with the END there, and it all looks good. Copy the call, come down, paste it in — remember, there are no parameters, so get rid of those — and run the definition just to make sure it compiles; everything looks good. Now I want to get my salespeople — can you remember what the results looked like before? Not exactly what we wanted: here is all of our employee information, but it's all crammed together. How do we get it in table format? Put parentheses around the call, just like we did before, a dot, and a star, and that gives us all of our information in table format, just like it did. And if I specifically want the street or something like that, I can ask for just that column and get an individual cell — there we go, we got that too. Okay, let's keep making this more complicated: say we want to return specific data from a query using multiple different tables. What do I want? Let's get the top 10 most expensive products. This is a little bit more complicated, so I'm going to issue a query first just to make sure I can do it. I want my product name, my product supplier, and my item price — once again I'm referring to the code I have on GitHub to read the column names off the tables, so it might help, if you're watching this, to go get that; it's free, of course, I'm not selling anything. So: FROM item, and I'll need a NATURAL JOIN with product, since I'm joining the item and the product tables — whoops, that's fine — then an ORDER BY, which I want to be the price, in descending order, and since I said I want the top 10 most expensive products, LIMIT 10. Does this work? Run it, and it looks like it did: there are 10 products, 1 through 10.
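The RETURN QUERY version, as a sketch:

    CREATE OR REPLACE FUNCTION fn_get_sales_people()
    RETURNS SETOF sales_person AS $body$
    BEGIN
        RETURN QUERY
        SELECT * FROM sales_person;
    END;
    $body$ LANGUAGE plpgsql;

    SELECT (fn_get_sales_people()).*;        -- expand into table format
    SELECT (fn_get_sales_people()).street;   -- or pull out a single column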
And I also got the price — exactly what I wanted. Now that I know the query works, I can create my function. What am I doing here? I'm returning a query that uses multiple different tables. I'll name it fn_get_10_expensive_products — of course you could have a parameter that says the top 20 or whatever, but I'm just going to use the top 10, because that seems good — and let's make the name a little more descriptive. This time, instead of SETOF, we say RETURNS TABLE (we still need our AS part, so bring that up), and then we define what goes into the table: we're going to return a name, which is a variable number of characters, a supplier, also a variable number of characters, and a price, which is a numeric — no comma after the last one. This is our own custom table that we're returning, with just the information we want. Then AS, the body, BEGIN, and we keep the RETURN QUERY part and a lot of the rest: SELECT, and what are we getting? We want our product name and our product supplier — there those are — and our item price, so we have all the different parts we want. We're using our item table, and we'll be using a NATURAL JOIN here as well (drop that stray semicolon), joining with the product table — obviously we need both product and item, and there they are. Then, to get the top 10, that means ORDER BY, and this will be item.price in descending order, and then finally LIMIT 10.
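As a sketch, the RETURNS TABLE version looks like this (schema names assumed as before):

    CREATE OR REPLACE FUNCTION fn_get_10_expensive_products()
    RETURNS TABLE (name VARCHAR, supplier VARCHAR, price NUMERIC) AS $body$
    BEGIN
        RETURN QUERY
        SELECT product.name, product.supplier, item.price
        FROM item
        NATURAL JOIN product
        ORDER BY item.price DESC
        LIMIT 10;
    END;
    $body$ LANGUAGE plpgsql;

    SELECT (fn_get_10_expensive_products()).*;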
All right, let's see if we did all of that correctly: select just the function definition, run it, and it says it returns successfully, so that looks good. Then we can call our function: grab the name, copy it, come down here, and paste it in — just like this; I don't believe we need the extra parentheses — and… could not identify fn_get_10_expensive_products. Didn't I run this function? I thought I did; let's run it again, F5 — yes, CREATE FUNCTION, looks good. Oh, and I still have street in the call from before — I want to replace street with just a star — so fix that, select it, run it, and did we get it? Yes: our name, supplier, and price for all of the products, and they're in descending order, so it looks like those are indeed the top 10 most expensive products. Up next, what I'd like to do is explore IF, ELSIF, and ELSE. We don't need the last query on screen right now, so let's get rid of it. What I'd like to do is look at the current orders that we have for a month and then print out a little piece of information that says whether we're doing well or badly, order-wise. So what am I going to call this? Let's call it fn_check_month_orders, and of course we need the month we're searching for, so the parameter is the_month. What does it return? This one is much simpler — all we're going to do is generate a message that prints out on the screen, so it returns a variable number of characters — and we can get rid of all the old parameter stuff. Then AS, and in this circumstance, before BEGIN, I'm going to DECLARE a variable: total_orders, and that of course is an integer, because we don't have fractions of an order. How am I going to check my orders? SELECT with the COUNT function over purchase_order_number, storing that value INTO the variable called total_orders, then of course FROM sales_order, and WHERE the EXTRACT of the month FROM time_order_taken is equal to the_month, just like that. Now I can use conditionals to provide different output. I'll say IF total_orders is greater than 5 THEN — obviously selling five pairs of shoes in a month is not actually good, but we have a limited amount of data, so this isn't 100% reality; I've tried to keep it all very simple — and what I'll do is RETURN a CONCAT of total_orders (that's the variable I'm working with here), a comma, and then in quotes the text 'orders' and 'doing good', even though it isn't really. After that we can do ELSIF, with total_orders less than 5, and then we do basically exactly the same thing, so copy the CONCAT, paste it, and change the message so it says the orders are 'doing bad'.
And finally ELSE, where we return one more statement — something like 'on target'. I think I did everything right there — oh, one thing you have to do that is very, very important: any time you put an IF in here, you have to end that conditional block with END IF, exactly like that; everything else is exactly the same. Highlight it, jump up, run it, and the function was created successfully — good. Now we can check our orders: select the call, copy it, come down, and just like before we say SELECT, pass in the function, and say that we're interested in the month of December. Test it, and the message that comes back is 'doing bad' — so, not good — and it says four orders. I should also have put a space in these messages, but I think you'll forgive me for that. So there we have our IF, ELSIF, and ELSE statements. What I want to do now is basically the same thing, but using a CASE statement. We can leave the name the same, the parameter the same, the return type the same, total_orders the same, and the SELECT COUNT(purchase_order_number) INTO total_orders FROM sales_order WHERE EXTRACT(MONTH FROM time_order_taken) part is also fine; the part that needs to change is the conditional block, because we're using CASE instead. Basically, CASE executes different code depending on a value. So we write CASE, and — I'll do it up front — END CASE, because you have to end a CASE statement block like that. Then WHEN total_orders is less than 1 THEN — I should have kept the RETURN statement, but that's okay, I'll add it back — RETURN a CONCAT of total_orders (this is the variable) and, with a space this time so we don't make the same mistake, 'orders = terrible'. That's what we have for our first output. Then come down and say WHEN total_orders is greater than 1 — you can stack these conditions with a logical operator — AND total_orders is less than 5, THEN we use the CONCAT again, so copy it, paste it, and in this circumstance say the orders are 'on target'. Then, if you want a default when it doesn't meet either of those conditions — some languages actually use the word default, but here we just say ELSE — we say the orders are 'doing good'. So there's all of that; select it — and of course always have the END for your CASE — run it, and it says the function was created; everything looks good. Whoops, I don't need this little stray bit, so get rid of it, then run the call to fn_check_month_orders, and it says 'on target': we have four orders and we are on target. Does that make sense with what we have? Yes, on target. Good stuff.
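Both conditional versions, assembled as a sketch. The thresholds and messages follow the video; the video overwrites the same function for the CASE version, so the separate _case name below is only there to keep both variants runnable side by side.

    CREATE OR REPLACE FUNCTION fn_check_month_orders(the_month INT)
    RETURNS varchar AS $body$
    DECLARE
        total_orders INTEGER;
    BEGIN
        SELECT COUNT(purchase_order_number) INTO total_orders
        FROM sales_order
        WHERE EXTRACT(MONTH FROM time_order_taken) = the_month;

        IF total_orders > 5 THEN
            RETURN CONCAT(total_orders, ' orders: doing good');
        ELSIF total_orders < 5 THEN
            RETURN CONCAT(total_orders, ' orders: doing bad');
        ELSE
            RETURN CONCAT(total_orders, ' orders: on target');
        END IF;
    END;
    $body$ LANGUAGE plpgsql;

    CREATE OR REPLACE FUNCTION fn_check_month_orders_case(the_month INT)
    RETURNS varchar AS $body$
    DECLARE
        total_orders INTEGER;
    BEGIN
        SELECT COUNT(purchase_order_number) INTO total_orders
        FROM sales_order
        WHERE EXTRACT(MONTH FROM time_order_taken) = the_month;

        CASE
            WHEN total_orders < 1 THEN
                RETURN CONCAT(total_orders, ' orders = terrible');
            WHEN total_orders > 1 AND total_orders < 5 THEN
                RETURN CONCAT(total_orders, ' orders: on target');
            ELSE
                RETURN CONCAT(total_orders, ' orders: doing good');
        END CASE;
    END;
    $body$ LANGUAGE plpgsql;

    SELECT fn_check_month_orders(12);        -- e.g. '4 orders: doing bad'
    SELECT fn_check_month_orders_case(12);   -- e.g. '4 orders: on target'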
Now we're going to get into looping. There are multiple different ways to loop, but the basic concept, if I throw in some multi-line comments, is this: you have LOOP, then your statements, then you EXIT WHEN a condition is true, and then you END your loop, exactly like that. So let's think of something to do — a simple loop test. I want to sum the values up to a maximum number that is passed into my function. So this is fn_loop_test, it's passed a maximum number, which is an integer, and it RETURNS an integer in this circumstance. Do we need to declare anything? Yes, we have to declare some things: j, which is an integer with a DEFAULT value of 1, and total_sum, also an integer, starting with a DEFAULT value of 0. Then we loop some stuff: get rid of the old body and instead say LOOP — and any time you define a loop you also have to END LOOP, so it's very good practice to write that up front. What am I going to do inside? I'm just going to keep adding values to the sum: total_sum, the assignment operator, total_sum plus whatever j is (j is 1 in this circumstance, because that's what I said it was), and then j := j + 1, so we continue adding 1 to it. Then we have our condition: EXIT WHEN j is greater than the maximum number, and there it is, END LOOP; all of that's good. Grab the definition — this is fn_loop_test — run it first, then copy the call, paste it down below, and let's start with a value of 5, for example. Run it and — uh oh, 'control reached end of function without return'. What did I do wrong? Oh, I forgot to RETURN total_sum, so after the END LOOP I say RETURN, and what we're returning is the total sum, exactly like that. Run the function again, F5, there it is, then run the call down here with 5, and we see we get a value of 15, which is exactly what we expect. Now I'd like to talk about the FOR loop. Basically, with a FOR loop you have a counter, IN is the keyword, then you have your starting value, two dots, and your ending value, and with BY you can define stepping — say you wanted every other value: stepping is how much gets added to the counter as you cycle through the loop. Then you have your statements, of course, and of course also your END LOOP. What do we want to do here? Let's sum only the odd values up to a maximum number, just to do something slightly different. I'll call it fn_for_test; it has a maximum number and returns an integer — all of that is good. Are we going to declare anything? Well, we don't need j now, because the value we're using as a counter is built into our FOR loop, so it's not needed; total_sum stays. We're also not going to use the EXIT statement — that's only for the plain loop. Instead, right after BEGIN, I jump in and say FOR i IN, then the minimum value, dot dot, the maximum value, and then BY 2, so the counter increments by two every time through.
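Both loop styles as sketches: fn_loop_test sums 1 through n with a plain LOOP and EXIT WHEN, while fn_for_test sums only the odd numbers by stepping the counter by 2.

    CREATE OR REPLACE FUNCTION fn_loop_test(max_num INT)
    RETURNS integer AS $body$
    DECLARE
        j         INT DEFAULT 1;
        total_sum INT DEFAULT 0;
    BEGIN
        LOOP
            total_sum := total_sum + j;
            j := j + 1;
            EXIT WHEN j > max_num;
        END LOOP;
        RETURN total_sum;
    END;
    $body$ LANGUAGE plpgsql;

    CREATE OR REPLACE FUNCTION fn_for_test(max_num INT)
    RETURNS integer AS $body$
    DECLARE
        total_sum INT DEFAULT 0;
    BEGIN
        FOR i IN 1 .. max_num BY 2 LOOP    -- use REVERSE max_num .. 1 BY 2 to count down
            total_sum := total_sum + i;
        END LOOP;
        RETURN total_sum;
    END;
    $body$ LANGUAGE plpgsql;

    SELECT fn_loop_test(5);   -- 15
    SELECT fn_for_test(5);    -- 1 + 3 + 5 = 9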
Inside, the LOOP keyword stays the same, the total_sum line stays the same except that j needs to change to i, and the END LOOP is also the same; we still RETURN total_sum, and everything else is exactly the same. Run it, then change the test call to fn_for_test — let's just leave the argument the same — run it, and we get 9 as the answer, exactly like we would think: remember, all we're summing now is the odd values, which is why it's less. One thing to remember is that you can also count in reverse. To do that, you say FOR i IN REVERSE, and the only thing is that your maximum number has to come first, then the two dots, then 1. This gives you exactly the same result, as you can see, because we're doing the same operation, just backwards — run the definition again, run the call again, and you get 9 again. Good stuff. Now I want to show you a DO block, just to show you a different way of doing things. You write DO, then your dollar tags, and inside them you can DECLARE values and have a BEGIN section, but nothing gets stored as a function. What I want to do is output all of my salespeople's names using a FOR loop, and I'm going to print them out as messages to my console, just to show you something else different. In DECLARE I'll create rec, which is a RECORD. (And if you've made it this far into the tutorial, please take a second to tell me in a comment — just say "hey, I made it" — it helps me a lot to know that somebody is actually watching these videos; they take a little bit of time to make.) Then: FOR rec IN SELECT the first name and last name FROM sales_person — I think I only have five salespeople, so I'll add LIMIT 5 — and then LOOP, and inside it I output a message with RAISE NOTICE. Inside the quotes you can put these little percent signs, and the variable values you list afterwards actually get substituted in, so I use my record: rec.first_name, then rec.last_name, and that outputs them to the screen. Any time you have a loop, remember you must END LOOP, exactly like that, and we also need the final END for the BEGIN block; everything else is looking good. Oh, and one other thing: you need to define your language, so LANGUAGE plpgsql. Select it, run it, and you can see, right there in the Messages section, it's printing out all of our employees' names — interesting; not essential, just another tool we can use.
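The DO block just described, as a sketch — it's anonymous, so nothing gets stored in the database:

    DO $body$
    DECLARE
        rec RECORD;
    BEGIN
        FOR rec IN SELECT first_name, last_name FROM sales_person LIMIT 5 LOOP
            -- each % is replaced by the matching value listed after the format string
            RAISE NOTICE '% %', rec.first_name, rec.last_name;
        END LOOP;
    END;
    $body$ LANGUAGE plpgsql;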
Now I'd like to cover FOREACH, and arrays as well, and I'll do this inside a DO block too. The basic layout for FOREACH over an array is: FOREACH, a variable, IN ARRAY, the array name, then LOOP, your statements, and END LOOP. What I want to do is print all the values inside an array. So we have the body tags, then DECLARE, and we want to create an array in there. Here's how you create one: I'll call it arr1, you define the data type you're using, INT, followed by square brackets to make it an array, then the assignment operator and ARRAY with whatever you want inside — let's say 1, 2, and 3, and there that is. I don't need the record this time, so get rid of that and instead declare an integer named i. Then BEGIN, and I still want the loop and a RAISE NOTICE, but in this circumstance I'm going to use FOREACH, so get rid of the old code and type FOREACH — i is a temporary holding cell for each value we pull from the array — then i IN ARRAY and whatever the array name is. After that we have our LOOP, then RAISE NOTICE with the percent placeholder and just i, which outputs all of those values, then END LOOP, END for the body, and all the other stuff looks good. Run it, and you can see it outputs everything exactly as we would expect. Now let's talk about WHILE loops. What I want to do with this one is sum values as long as a condition is true. So: DO, body, DECLARE — we're not using arrays this time, this will just be a counter — j, an integer with a DEFAULT value of 1, and total_sum, an integer with a DEFAULT of 0 from the beginning. Then for our while loop, get rid of the old part: WHILE j is less than or equal to 10, we LOOP, and while we're looping, total_sum gets the assignment operator with total_sum plus j, and then each time through we use the assignment operator to increment the value of j. The loop then ends, and we do a RAISE NOTICE to output total_sum, and there it is — a very easy, quick pattern; one of the reasons you use a DO block is just to jump in and test that code works. Another thing I haven't covered is CONTINUE. Let's say we want to print the odd numbers from 1 to 10, for example. What do we need here? Let's change j to i, just for consistency, and have it start at a value of 1, get rid of the WHILE condition, and just use a simple LOOP to show you how CONTINUE works: basically, CONTINUE jumps back to the beginning of the loop. So, we only want to print odd numbers from 1 to 10: change the references to i, and then define when we're going to exit, so EXIT WHEN i is greater than 10.
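The FOREACH and WHILE examples, condensed into two small DO blocks as a sketch:

    DO $body$
    DECLARE
        arr1 INT[] := ARRAY[1, 2, 3];
        i    INT;
    BEGIN
        FOREACH i IN ARRAY arr1 LOOP
            RAISE NOTICE '%', i;          -- prints 1, 2, 3
        END LOOP;
    END;
    $body$;

    DO $body$
    DECLARE
        j         INT DEFAULT 1;
        total_sum INT DEFAULT 0;
    BEGIN
        WHILE j <= 10 LOOP
            total_sum := total_sum + j;
            j := j + 1;
        END LOOP;
        RAISE NOTICE '%', total_sum;      -- 55
    END;
    $body$;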
Another thing I haven't covered is CONTINUE. Say we want to print only the odd numbers from 1 to 10. Change the counter to i for consistency, give it a starting value of 1, and use a simple LOOP to show how CONTINUE works; basically, CONTINUE just jumps back to the beginning of the loop. We define when to exit, EXIT WHEN i is greater than 10, and then we can say CONTINUE WHEN and use the mod function: WHEN mod of the current value of i divided by 2 equals zero, meaning the number is even. If it's an even number, the loop skips back to the top (we've already incremented i) and never reaches anything else inside the looping block, which means only the odd numbers get printed. For those we RAISE NOTICE something like 'number is %' with the value of i, and we don't need the RAISE NOTICE after the loop this time. Then we END LOOP and END the whole BEGIN block. It probably also makes sense to indent the body; it reads more easily, but you get the point. Select it all, run it, and you can see it prints only odd values. Now let's go back to some real-world examples; I just wanted to demonstrate how the DO block works. This time we're going to return inventory value for a supplier whose name we provide. I'll rename the function get_supplier_value, which makes sense, and change the parameter to a varchar for the supplier. What will it return? Also a varchar, because we're going to concatenate the supplier's name with the actual value of that supplier's inventory. In DECLARE we'll have supplier_name as a varchar and price_sum as a numeric. Down in the BEGIN block we SELECT product.supplier and the SUM of the item price, and we need to put those into our variables, so INTO supplier_name and price_sum; we're pulling from product and item, the condition is WHERE product.supplier equals the supplier that was passed in, and we also need to GROUP BY product.supplier so the SUM works; that statement gets a semicolon. The final statement is the RETURN: we concatenate supplier_name with some text like 'inventory value : $' and then price_sum. The body ends, everything looks good, so select it and run it. Uh oh, an error: I wrote INTO supplier, but I declared the variable as supplier_name, and I can't just shorten it to supplier because that name is used in other places, so let's keep supplier_name and make the INTO clause consistent. Run it again: query returned successfully, and the function is created.
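A minimal sketch of the odd-numbers loop; the exact placement of the increment relative to EXIT and CONTINUE differs slightly from the spoken walkthrough, but the idea (CONTINUE WHEN skips the even values) is the same:

DO $$
DECLARE
    i INT DEFAULT 0;
BEGIN
    LOOP
        i := i + 1;
        EXIT WHEN i > 10;
        CONTINUE WHEN mod(i, 2) = 0;   -- even number: jump back to the top
        RAISE NOTICE 'Number : %', i;  -- only odd values reach this line
    END LOOP;
END;
$$;

And a sketch of get_supplier_value. The product and item tables and their supplier and price columns come from the video; the join condition (item.product_id = product.id) and the parameter name the_supplier are assumptions added here so the example is unambiguous, so adjust them to your own schema:

CREATE OR REPLACE FUNCTION get_supplier_value(the_supplier VARCHAR)
RETURNS VARCHAR
LANGUAGE plpgsql
AS $$
DECLARE
    supplier_name VARCHAR;
    price_sum NUMERIC;
BEGIN
    SELECT product.supplier, SUM(item.price)
    INTO supplier_name, price_sum
    FROM product
    JOIN item ON item.product_id = product.id   -- assumed join; adjust as needed
    WHERE product.supplier = the_supplier
    GROUP BY product.supplier;

    RETURN CONCAT(supplier_name, ' inventory value : $', price_sum);
END;
$$;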
Now let's test it: we say we want to get a supplier's value, and I need a supplier name; I think I have Nike in here, so let's use that, clear out the old test call, and run it to see the total value of our Nike inventory. That's a lot: $21,694. So there you go; that's a rundown of much of what you can do with PL/pgSQL, the core concepts, and what I want to do next is talk about stored procedures, with a bunch of examples. Basically, stored procedures can be executed by an application that has access to your database, and stored procedures can also execute transactions, which you cannot do with functions. Procedures, however, traditionally can't return values, though there's a workaround using IN and OUT parameters, which I'll show you. Procedures also can't be called with SELECT; you execute them with CALL, and you can pass parameters when you do. A stored procedure without parameters is called static; one with parameters is called dynamic. The basic layout of a stored procedure is very similar to what you have with functions. First, though, I'm going to get rid of this line and create a sample table that stores customer IDs along with the balances they owe, something I've decided I want to track. I'll call it past_due, and inside it I'll have an id, which of course will be SERIAL and the primary key; a customer_id, an integer and NOT NULL; and the balance they owe our company, a NUMERIC with a total length of six and two decimal places, also NOT NULL. Select it and create the table. Oops, what did I do wrong? I mistyped PRIMARY KEY; fix that, run it again, and there we go. Is it showing over in the tables area? Probably not, since our functions list isn't refreshed either, so refresh everything, go into Schemas, and look at all the functions we have now; that's how many we've created, with more to come, and under Tables, past_due is right there. Good, we created the table and we're ready to go. Next I just want some information on my customers, so SELECT FROM customer and run that little query. Then let's insert some junk data so we can play around with this: INSERT INTO past_due, passing in customer_id and balance, with VALUES of 1 and 123.45 for how much that customer owes us, and another row with customer 2 owing 324.50.
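In SQL, that sample table and the junk data read out in the video look roughly like this:

CREATE TABLE past_due (
    id SERIAL PRIMARY KEY,
    customer_id INT NOT NULL,
    balance NUMERIC(6, 2) NOT NULL
);

INSERT INTO past_due (customer_id, balance)
VALUES
    (1, 123.45),
    (2, 324.50);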
Then a quick SELECT everything FROM past_due, all on one line because it's simple, verifies that the information is in there; I know it is, and there it is. Now let's create a stored procedure. I like to start stored procedure names with pr_, and this one is going to be called debt_paid. What does it receive? Tidy up the template, tab things in, and give it a past_due_id, which is an integer, and the payment amount, which is a numeric. The body is the same as before. Do we want to declare anything? No, so I'll leave DECLARE blank for now, and down inside the BEGIN block is where all the statements go; I'll move things up so you can see everything at once. What I'm going to do is UPDATE my past_due table and SET the balance to the new balance, the old balance minus whatever they paid, so this procedure lets the user update that past-due table. The condition is WHERE the id equals the past_due_id that was passed in. Then, after this, you need to say COMMIT to apply the update, and everything else ends exactly as you'd expect. Now I can call it. If you remember, customer 1 had a balance of 123.45; we say CALL pr_debt_paid, pay off part of the balance for id 1, and say they pay us ten dollars. Did I run the CREATE statement yet? I don't think so, so select it and run it; no, I hadn't, so run it, then run the CALL, and boom, that all looks like it worked. Now let's actually check: SELECT everything FROM past_due and run that to verify, and you can see the balance used to be 123.45 and is now 113.45, so yes indeed, it worked. Also, as I said, returning values is normally not available with procedures, but you can simply use an IN OUT parameter just like we did previously; I'll leave that for you to play around with, and that is how you can return values. So that's the basic concept, and everything else about working with procedures is basically the same as it was with functions, except that now you can actually update data.
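A sketch of that procedure and the test call. The video describes setting the balance to the new value; subtracting the payment (balance - pay_amount) is what produces the 123.45 to 113.45 result shown, so that is what is written here:

CREATE OR REPLACE PROCEDURE pr_debt_paid(
    past_due_id INT,
    pay_amount  NUMERIC
)
LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE past_due
    SET balance = balance - pay_amount
    WHERE id = past_due_id;

    COMMIT;   -- procedures, unlike functions, may control transactions
END;
$$;

-- Pay $10 toward the first past-due record, then verify
CALL pr_debt_paid(1, 10.00);
SELECT * FROM past_due;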
What I'd like to talk about next are triggers, and of course I'll show you a whole bunch of examples. Basically, triggers are used when you want an action to occur automatically when another event occurs; common events include commands such as INSERT, UPDATE, DELETE and TRUNCATE. Triggers can be associated with tables, foreign tables, or views, and they can execute before or after an event, or instead of another event. You can also put multiple triggers on a single table, and just so you know, they execute in alphabetical order. They can't be triggered manually by a user, and triggers can't receive parameters. Another thing to know: if a trigger is row level (we'll cover what row level means), it is called for each row that is modified, while a statement-level trigger executes once regardless of the number of rows. One thing that is really important to understand is when you can perform certain actions with triggers; the table shown here lists what triggers can do based on when they execute. For example, a BEFORE trigger on INSERT, UPDATE or DELETE can act on tables if it is row level, and on tables or views at statement level. Don't worry if that doesn't totally make sense yet; once I show you some examples it will. The pros of triggers: they can be used for auditing, so if something is deleted a trigger can save it in case it is needed later; they can validate data; they can make certain events always happen to maintain the integrity of data; they can ensure integrity between different databases; they can call functions or procedures; and triggers are recursive, so a trigger on one table can touch another table that itself has a trigger. The cons: triggers add execution overhead; nested or recursive trigger errors can be very hard to debug; and they are invisible to the client, which can cause confusion when actions simply aren't allowed. Here's the basic idea of using triggers: you write a trigger function, and then, to actually create the trigger, a CREATE TRIGGER statement where you choose BEFORE or AFTER (as in the table I showed), list your events, which are normally INSERT, UPDATE, DELETE or TRUNCATE, name the table, and specify FOR EACH row or statement. You'll see an example shortly and it will make much more sense. So let's clear all that out. What I want to do is log changes to a table called distributor: I'll create the table, and whenever a distributor's name changes we'll save the old value as a sort of log. The table gets an id, SERIAL and the primary key, and just a name, a variable number of characters, something like 100. Did I close everything off? Yes, so run it, and boom, the table is created successfully. Now I want to insert some distributors, so INSERT INTO distributor, with name as the only column, and for values I'll use a few real ones, the last being Steel City Clothing. Those are the values we'll put into the table and then change. Just to verify they're all there, SELECT everything FROM distributor and run it, and yes indeed, they are all there.
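The table and inserts from this step, sketched in SQL; only 'Steel City Clothing' is clearly named in the video, so the first two distributor names below are placeholders:

CREATE TABLE distributor (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100)
);

INSERT INTO distributor (name)
VALUES
    ('Distributor One'),      -- placeholder name
    ('Distributor Two'),      -- placeholder name
    ('Steel City Clothing');

SELECT * FROM distributor;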
Now I want to create another table that stores changes to a distributor, so clear that out. It will store an id, a distributor_id, the name, and the date the change was made: CREATE TABLE distributor_audit, with an id that is SERIAL and the primary key, a distributor_id that is of course an integer and NOT NULL, a name that is a variable number of characters, 100, and NOT NULL, and an edit_date that is a TIMESTAMP, also NOT NULL (capitalization doesn't matter). Oops, don't put the semicolon there; it goes at the end. Tidy that up, run it, and we've created our distributor_audit table, which will monitor those changes. Now we want to create the trigger function itself. This is CREATE OR REPLACE FUNCTION, and I'll call it log_distributor_name_change. What does it return? It RETURNS TRIGGER, the language is plpgsql, then AS and the body, all the stuff you're well used to at this point: body tags, BEGIN, and an END statement. I'm also going to show you some trigger information variables, just to throw some extra material into this. Inside BEGIN I want to check whether a name change has occurred, and I can do that with IF NEW.name is not equal to OLD.name, THEN insert that information: INSERT INTO distributor_audit, which is where we save all the distributor name changes, passing in the distributor_id, the name, and the edit_date, with VALUES of the old id, the old name, and the current time this occurred, which now() provides. Any time we have an IF block, what do we do? We have to say END IF, and everything else there looks good. After this I'd like to show those trigger information variables, which will show up in Messages, so RAISE NOTICE: if you want the trigger name, that is TG_NAME; the table name we're working with is TG_TABLE_NAME; the operation is TG_OP; when it executed is TG_WHEN; whether it was row or statement level is TG_LEVEL; and the last one I'll show (most of them you won't even care about) is the table schema.
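A sketch of the audit table and the trigger function; the video packs the trigger information variables into a single RAISE NOTICE, while here each gets its own line for readability:

CREATE TABLE distributor_audit (
    id SERIAL PRIMARY KEY,
    distributor_id INT NOT NULL,
    name VARCHAR(100) NOT NULL,
    edit_date TIMESTAMP NOT NULL
);

CREATE OR REPLACE FUNCTION log_distributor_name_change()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
    -- Log the old name only when the name actually changed
    IF NEW.name <> OLD.name THEN
        INSERT INTO distributor_audit (distributor_id, name, edit_date)
        VALUES (OLD.id, OLD.name, now());
    END IF;

    -- Trigger information variables, printed to the Messages pane
    RAISE NOTICE 'Trigger : %', TG_NAME;
    RAISE NOTICE 'Table   : %', TG_TABLE_NAME;
    RAISE NOTICE 'Op      : %', TG_OP;
    RAISE NOTICE 'When    : %', TG_WHEN;
    RAISE NOTICE 'Level   : %', TG_LEVEL;
    RAISE NOTICE 'Schema  : %', TG_TABLE_SCHEMA;

    RETURN NEW;
END;
$$;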
That last one is spelled TG_TABLE_SCHEMA. So we've covered those different variables, and at the end we simply RETURN NEW, and the body section closes. We have our trigger function set up; does it execute? Oh, there's a problem, and it's actually not the error I expected: I had written RETURN where the header needed RETURNS. Fix that, select it again, run it, and the function is created. Now we want to bind the function to a trigger. To do that you say CREATE TRIGGER; I prefix trigger names with tr_ so I know it's a trigger, and I'll call it tr_name_changed, so if the name is changed we'll have a log of it. We're going to call the function BEFORE the name is updated, because if I don't, I can't get at the old name; this is where BEFORE and AFTER come into play. So: BEFORE UPDATE ON distributor, and we want this to run for every row where an update occurs, so FOR EACH ROW, then EXECUTE PROCEDURE and the name of the function we wrote above, log_distributor_name_change. Run that, with a semicolon at the end. Then we'll make a change so the trigger has something to log; this UPDATE is a separate statement and has nothing to do with the part that binds the function to the trigger, so it goes on its own line. We UPDATE distributor and SET the name to something new, say Western Clothing, just throwing something in there, WHERE the id equals 2.
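The binding statement and the test update, sketched out ('Western Clothing' and id 2 are the values used in the video):

-- Row-level BEFORE UPDATE trigger so OLD.name is still available
CREATE TRIGGER tr_name_changed
BEFORE UPDATE
ON distributor
FOR EACH ROW
EXECUTE PROCEDURE log_distributor_name_change();

-- Change a name so the trigger fires, then inspect the audit log
UPDATE distributor
SET name = 'Western Clothing'
WHERE id = 2;

SELECT * FROM distributor_audit;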
Did I actually run the CREATE TRIGGER? I don't think I did, so hit F5; there it is, trigger created. Now run the update and test whether it worked, and you can see all of the pieces of information I asked for showing up in the Messages area. With that in place I can check my log: SELECT FROM distributor_audit, select it, F5, bring up the results, and yes indeed, there is the old name we had and the timestamp of when it was updated. You could also add another field for the new name, or whatever else, just by using NEW instead of OLD, and that would work too. Another thing I'd like to talk about is conditional triggers; you can effectively revoke actions like DELETE on tables for some users just through the use of triggers. Say we want to set up our system so it doesn't allow people to change records on a weekend; maybe we have people working weekends we don't entirely trust, and we don't want them changing our database. That sounds like something that might actually be useful. So I'll create a function to block weekend changes, something like block_weekend_changes; it RETURNS TRIGGER, the language is the same, and the body with BEGIN is also the same, but this one is much simpler. Delete everything inside and just put a RAISE NOTICE that says 'No database changes allowed on the weekend', and then you return NULL, and that's all we need for this function. Now we bind a function to the trigger: again CREATE TRIGGER, with a more descriptive name, tr_block_weekend_changes, and we call the function BEFORE the update. We also want to block inserts, deletes and truncates, and the way you do that is BEFORE UPDATE OR INSERT OR DELETE OR TRUNCATE, so we can block all of them, ON distributor, and we want this to run at the statement level, so FOR EACH STATEMENT. Then comes the condition with WHEN: we want to know whether it's a weekend, so we EXTRACT the day of the week, which is a number, FROM the CURRENT_TIMESTAMP, and say BETWEEN 6 AND 7, six and seven being the weekend days here. With that set up we say EXECUTE PROCEDURE and point at the function above, which simply prints that message, so paste its name in. Select it and run it; I don't think we had run it yet. Oops, a syntax error: I forgot a semicolon. Add it, select it again, run it, and it's created. Good stuff. Now, to actually see it fire, let's set the condition equal to my current day so it gives me the block on purpose. Today is Tuesday; is that a three? No, the day-of-week value for Tuesday would be two.
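A sketch of the conditional trigger, with two assumptions worth calling out: ISODOW is used so that Saturday and Sunday come out as 6 and 7, matching the BETWEEN 6 AND 7 test (plain DOW numbers Sunday as 0), and because PostgreSQL ignores the return value of a statement-level trigger, the function below raises an exception rather than returning NULL after a notice as in the video, since an exception is the reliable way to actually abort the statement. TRUNCATE is left off the event list for simplicity, and the function name is a loose rendering of the video's naming. To test on a weekday, temporarily replace the condition with the current day number, as done in the video.

CREATE OR REPLACE FUNCTION fn_block_weekend_changes()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
    -- Abort the statement; a notice alone does not stop a statement-level trigger
    RAISE EXCEPTION 'No database changes allowed on the weekend';
    RETURN NULL;   -- never reached
END;
$$;

CREATE TRIGGER tr_block_weekend_changes
BEFORE UPDATE OR INSERT OR DELETE
ON distributor
FOR EACH STATEMENT
WHEN (EXTRACT(ISODOW FROM CURRENT_TIMESTAMP) BETWEEN 6 AND 7)
EXECUTE PROCEDURE fn_block_weekend_changes();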
So I change that to a two instead, run all of it, and there it is, the trigger is created. Now that it exists, I can try to make that change again, and it should block it, and it does: no database changes allowed on the weekend. Another thing you'll probably want to be able to do is delete triggers, and that's very simple: you say DROP TRIGGER, give the trigger name we used, tr_block_weekend_changes, name the table it is on, and it's dropped. So there are triggers; what I want to talk about next are cursors. Basically, cursors are used to step backwards or forwards through rows of data; they can be pointed at a row and then select, update or delete it, and cursors fetch data and push it to another language for processing operations that add, edit or delete. A cursor is first declared, defining the selection to be used; then it is opened so that it retrieves the data; then the individual rows can be fetched; and after use you want to close the cursor to free up memory and such. To explain cursors I'm going to build an example, using a DO block because that keeps it nice and simple. Inside the body I DECLARE a couple of different variables: a message of type text with a default of an empty string, a record for a customer, which I'll call rec_customer, and the cursor itself, cur_customers CURSOR, which I assign with FOR SELECT everything FROM customer. Then we define the code we'll be running. If you want to work with a cursor you need to open it, so OPEN cur_customers, and then we LOOP and FETCH records from the cursor: FETCH cur_customers INTO our customer record, with a semicolon, and then EXIT WHEN no more customers are found; that's what that line is saying. After that I want to concatenate all our customer names together: message is assigned message, then the concatenation pipes, the individual customer's first_name, another pipe and a space between the first and last name, another pipe, the customer's last_name, and commas between all of them. Any time we have a loop, what do we need? An END LOOP. Then I output all of this information: RAISE NOTICE with a label like 'Customers:', and since everything is saved inside message I can just print that; then END and close everything off. So that's all set up, and when we run it: boom, an error about the cursor name. I opened cur_customer but declared the cursor as cur_customers.
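The whole DO block, sketched, assuming a customer table with first_name and last_name columns; a CLOSE is added at the end as good practice:

DO $$
DECLARE
    message TEXT DEFAULT '';
    rec_customer RECORD;
    cur_customers CURSOR FOR
        SELECT * FROM customer;
BEGIN
    OPEN cur_customers;
    LOOP
        FETCH cur_customers INTO rec_customer;
        EXIT WHEN NOT FOUND;   -- stop when there are no more rows
        message := message || rec_customer.first_name || ' '
                           || rec_customer.last_name || ', ';
    END LOOP;
    CLOSE cur_customers;
    RAISE NOTICE 'Customers : %', message;
END;
$$;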
Change the name so they match and run it again, and there you can see we were able to jump in, grab all of our customer names, and output them. Now I'll do one more example; this is the end of the tutorial, and it's been a long road, but I think it's very important to understand the concept of using cursors with functions. Basically, I want to create a function that returns a list of all customers in a provided state, so let's clear all of this out and write it from the beginning. CREATE OR REPLACE FUNCTION, and we'll call it get_customer_by_state, taking c_state (for customer state) as a variable number of characters. What does it return? It returns text, the language I'm using is plpgsql, then AS with the body tags and another set of them. I declare a couple of variables: customer_names, which is text and starts with no value, and, since I'll be cycling through customers again, a record for each customer. Then I define the cursor and the query it is attached to: cur_customer_by_state, a cursor with a p_state parameter that will be passed in for the query, FOR SELECT first_name, last_name and state FROM the customer table, matching whatever state was passed in. With that set up, everything else goes between BEGIN and END; move things down so there's room for more code. Of course I need to open my cursor, so OPEN cur_customer_by_state; I'll just copy the name so I don't make any typos, and pass in c_state. After that I create a loop, and of course an END LOOP, and inside it I want to move each row of data into the record I declared, so I FETCH the cursor INTO the customer record, and I keep looping until nothing more is found: EXIT WHEN NOT FOUND. Then I concatenate the customer names for each of the rows: customer_names is assigned customer_names, a pipe, the current record's first_name, a pipe and a space, another pipe, the record's last_name, and commas separating everything, just like we did before. That ends the loop, and after we end the loop we have to close the cursor, so CLOSE cur_customer_by_state (paste the name in again). After all that we RETURN customer_names, all of the customers for the state we asked about; that ends the body and the function.
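The finished function, sketched with a parameterized cursor; the customer table and its first_name, last_name and state columns are assumed from the video:

CREATE OR REPLACE FUNCTION get_customer_by_state(c_state VARCHAR)
RETURNS TEXT
LANGUAGE plpgsql
AS $$
DECLARE
    customer_names TEXT DEFAULT '';
    rec_customer RECORD;
    cur_customer_by_state CURSOR (p_state VARCHAR) FOR
        SELECT first_name, last_name, state
        FROM customer
        WHERE state = p_state;
BEGIN
    OPEN cur_customer_by_state(c_state);
    LOOP
        FETCH cur_customer_by_state INTO rec_customer;
        EXIT WHEN NOT FOUND;
        customer_names := customer_names || rec_customer.first_name || ' '
                                         || rec_customer.last_name || ', ';
    END LOOP;
    CLOSE cur_customer_by_state;
    RETURN customer_names;
END;
$$;

-- Example call:
-- SELECT get_customer_by_state('California');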
Now let's say we want all of our customers from the state of California. We write a SELECT, come up and grab the function name from above, copy it, paste it in down here, and pass 'California' as the argument. Select all of it and run it: no errors. Then run the call and we get our list of customers from California, and there they are. So there you go. Now I want to cover installation; it's very, very simple. No matter what your operating system (I'm using Windows here), go to postgresql.org; you can just search for the Postgres download and you'll see the downloads page. Pick the latest version of Postgres, and you'll be sent to a page where you choose the installer for whichever operating system you're currently using; I'm clearly using Windows. The installer opens and you basically click Next your way through the whole thing: Next, then define where you want it installed and click Next; leave everything checked except Stack Builder, which you should make sure not to select, and click Next; select the directory where you want your data stored, then Next; define the password you'll be using, which is basically the administration password, and click Next after you enter it; set your port, which is going to be 5432, and click Next; leave the locale at the default; and then basically everything installs for you, so you have everything set up, and you click Next. After everything has been installed you'll see a message telling you so. Then find pgAdmin, currently version 4, and open it up; right there you set your master password, so type in whatever your password is and click OK; then enter the password for your user, click OK, enter it again if asked and click OK, and pgAdmin pops up just like the one we worked with in this tutorial. So there you go, guys: that is a vast majority of anything you're going to do with Postgres outside of some administration and such. I mainly focused on the programming aspects, which I believe are what most people come to my tutorials and my channel for. As always, please leave your questions and comments down below; otherwise, till next time.
The text presents interpretations of Surah Kahf, specifically focusing on the story of Musa and Al-Khidr. It explores the divine wisdom behind seemingly inexplicable events, emphasizing the importance of patience and trust in Allah’s plan. The text analyzes the reasons behind these trials, and highlights their context within the Sunnah. It suggests that these trials test faith and understanding and offers insights into Allah’s actions that extend beyond human comprehension. Moreover, the narrative explores how prophets receive and understand divine messages, drawing parallels with worldly events. Ultimately, the passage offers guidance on maintaining faith and trust in divine wisdom even when faced with life’s challenges.
Wisdom in Surah Kahf: A Study Guide
Quiz (Short Answer)
Answer each question in 2-3 sentences.
According to the text, what three types of reasoning are commonly found in the Holy Qur’an?
What is the significance of patience in the context of the challenges faced by prophets?
What two aspects should be kept in mind when comforting someone or advising them to be patient?
What is the significance of the story of Khizr and Musa in Surah Kahf?
How does the Qur’an describe the ways in which Allah conveys His message to the prophets, as summarized in Surah Shura?
What are the two ways “intentions” or “observations” are shown to the prophets?
What does the text suggest about the interpretation of dreams and visions experienced by prophets?
What is the context of the story of Sayyiduna Musa’s journey?
What was the moral of the episode regarding the killing of the boy?
What does the text say about trusting in the sources of knowledge that God gave us?
Answer Key (Quiz)
The three types of reasoning are arguments from the self, arguments from the horizons (Afaq), and arguments from history. These types of arguments appear throughout the Holy Qur’an, though the level of detail varies in each argument.
Patience is essential because results may not be immediately apparent, and it requires trusting in Allah’s wisdom even when outcomes are unclear. Prophets are advised to act with patience and wisdom, trusting in Allah’s wisdom, even when faced with accusations and criticism.
When comforting someone or advising patience, two aspects should be kept in mind: first, that there is an appointed period that is fixed; and second, that guidance ultimately rests with the Lord rather than with the one giving the advice.
The story of Khizr and Musa is not a historical narration, but rather, an explanation of the importance of patience and the wisdom of Allah. This includes patience in waiting for permission for migration and patience when trying to understand the purpose or reason for waiting.
Allah either speaks within their innermost being, where a thought takes the form of words, or a voice is heard from behind a veil (hijab). In the first case the thought itself speaks; in the second, a dialogue takes place as instructions are given.
Intentions are either shown in a dream (where the truth is shown through parables needing interpretation) or through observation in an awakened state. In the awakened state the ego begins to appear.
Dreams and visions of prophets are true and not influenced by Satan, but they often require interpretation and are presented in the form of parables. For example, in Surah Yusuf the dream of the man who had wronged the king was also true.
The context of the story is to teach patience and wisdom, and to show that permission (for example, for migration) must be waited for until it is actually given. One must wait patiently, and faith in the wisdom of Allah is necessary.
It was feared that the boy might grow up and cause trouble for his parents with his disobedience and disbelief. Allah wanted to save his parents from his disbelief.
Looking back, the advice is to act with patience and wisdom and to trust in Allah, and trust in Allah remains only when a person is fully satisfied that the wisdom of the Lord is at work in everything and that nothing lies outside His wisdom. This is the lesson given in this incident.
Essay Questions
Explore the concept of “Sunnah of Allah” as described in the text. How does understanding this concept influence a believer’s perspective on current events and personal challenges?
The text outlines three methods for addressing problems: reasoning from the self, reasoning from the horizons (Afaq), and reasoning from history. Discuss how these methods can be applied to contemporary issues facing the Muslim community.
Analyze the role of prophets as described in the text. How do they balance their human emotions and limitations with their divine calling and responsibilities? Use examples from the story of Musa and Khizr to illustrate your points.
Discuss the importance of trusting in the sources of knowledge that God gave us. What can we do to avoid the temptations that come with greed or from Satan?
Examine the story of Musa and Khizr in Surah Kahf. In what ways does it contribute to a deeper understanding of divine wisdom, patience, and the limits of human knowledge?
Glossary of Key Terms
Sunnah of Allah: The established and unchanging way or pattern of Allah’s actions and dealings with mankind.
Afaq: Horizons, realms, or the external world; arguments from Afaq refer to observing the signs and phenomena in the world around us.
Al-Mawaddah fi al-Qurba: Affection for kinship, or nearness of relationship; used to describe the prophet's connection with and love for his people.
Ijtihad: Independent reasoning or interpretation of Islamic sources to derive rulings on matters not explicitly covered in the Qur’an or Sunnah.
Tafsir: Interpretation or explanation, particularly of the Qur’an.
Dalil: Evidence or proof; used in the context of presenting historical events as evidence.
Hijab: A veil or barrier; in this context, it refers to a veil that separates humans from the direct perception of divine communication.
Ara: Observation; refers to divine visions or intentions shown to the prophets.
Majma' al-Bahrain: The meeting place of the two seas; a symbolic location in the story of Musa and Khizr.
Qada and Qadr: Divine decree and predestination; the belief that everything is predetermined by Allah.
Stories of Wisdom: Understanding Divine Intentions in Surah Kahf
Briefing Document: “Stories of Wisdom: Understanding Divine Intentions in Surah Kahf”
Overall Theme:
The excerpts focus on understanding divine wisdom and intentions, particularly within the context of Surah Kahf in the Quran. A central argument is that seemingly random or negative events have underlying, often hidden, wisdom orchestrated by Allah. The excerpts emphasize the importance of patience, faith, and trust in Allah’s plan, even when the reasons behind events are not immediately apparent. The text addresses how prophets receive divine guidance and how believers should react when faced with trials and tribulations.
Key Ideas and Facts:
The Sunnah of Allah is Unchangeable: The speaker emphasizes that what happened to past nations will happen again, serving as a warning. “It is the Sunnah of Allah and this Sunnah is unchangeable. There will never be any change in it, nor has it ever happened before, so be warned.” This establishes a pattern of divine justice and tests throughout history.
Prophets’ Role and Consolation: Prophets are human beings, and even they need comfort and reassurance. The text highlights the prophets’ longing for their people to believe. “Prophets are also human beings in the flesh, they are giving invitations, delivering messages, devoting their days and nights to their people… By Allah, the believers, I am still inviting you, still conveying the message, praying for you. My last wish is that you believe before I leave you.” When facing resistance, prophets are advised to be patient, as Allah has a plan and a fixed time for decisions. The surah itself provides consolation to the Prophet.
Arguments from Self, Horizons, and History: The Qur’an uses these three types of reasoning to convey its message. These methods of reasoning serve the overall function of giving “warning and observation.”
Arguments from the self, which are sometimes given in greater detail
Arguments from the horizons (Afaq), which are likewise sometimes given in greater detail
Arguments from history
The Story of Musa (Moses) and Khidr: This story is presented as an example of how divine wisdom operates beyond human understanding. Musa’s journey with Khidr is used to illustrate why one must be patient and trust in Allah’s plan, even when faced with events that seem unjust or incomprehensible.
Khidr’s actions (damaging the boat, killing the boy, repairing the wall) are explained as being divinely guided and serving a greater purpose.
The journey was not “to create an event or to narrate history,” but to comfort the Messenger of God and explain if results of da’wah do not come as expected, to understand the wisdom of Allah.
The story teaches about the importance of wisdom in actions.
The Nature of Prophetic Revelation (Intentions): The text delves into how prophets receive messages from Allah. It describes different methods:
Thought Speaking: Inner thoughts taking the form of words without actual sounds. “The thought speaks in their innermost being…the thought speaks, that is, as if I am talking or someone else is answering me, then here the thought has taken the form of words and the words have come out as sounds, but this does not happen in a dream or in a dream, in which words are sounds. They don’t come out of being, the thought speaks”
Hearing a Voice Behind a Veil: As with Musa and the burning bush.
“Intentions” (Ara’at): Showing the prophet a reality or vision, either in waking life or in a dream. These visions can be symbolic and require interpretation. “It means that there is no talk and no message has been given but the reality is being shown i.e. it has been observed…Paradise was shown to me as if it appeared at that time, but it is not clear whether they are asleep or in a state of sleep.”
Patience and Wisdom as Essential Virtues: The text emphasizes the necessity of patience and wisdom for human beings. "I think these two things are needed at every step by a human being, he should make a plan of action with wisdom, speak with wisdom, keep wisdom in mind in every action, and all goals and objectives should be discussed from the perspective of wisdom." Patience is connected to faith: when something is delayed, the believer trusts that there is wisdom from Allah behind the delay.
Limits of Human Knowledge: The passage acknowledges that human knowledge is limited. “What will be beyond your scope of knowledge, how can you even be patient with it, as if the Holy Qur’an has told me, forgive me. Man has a field of knowledge… the biggest mistake is that people do not know their field of knowledge.” The inability to grasp the full scope of Allah’s wisdom is a central reason for needing patience and trust. Humans must know and accept the limits of what they can know and trust in sources of knowledge Allah has given.
Everything Happens by Allah’s Permission: All things, including those that seem random or chaotic, happen under Allah’s permission, intention, and will. This should form the basis of a believer’s faith. “One is that everything that happens in this world is all by God’s permission and His intention. Occurs under will, that is, there is nothing that happens by yourself in this world.”
No Intention of Allah’s is Devoid of Goodness: Even if falsehood is allowed to exist, it does not mean that Allah loves falsehood. He nurtures good through the challenges.
The Importance of Understanding Limits of Knowledge: Because not understanding the limits of knowledge causes many problems.
Implications:
The excerpts provide a framework for understanding difficult or seemingly unjust events in life.
They encourage a deeper trust in Allah’s plan and a willingness to accept what is not immediately understandable.
They highlight the importance of seeking knowledge while acknowledging its limitations.
The surah aims to train patience and willingness.
This briefing document captures the key themes and ideas presented in the provided text. Remember that it is a summary, and further reading and study of the source material are recommended for a more complete understanding.
Stories of Wisdom in Surah Kahf
Stories of Wisdom FAQ:
1. What is the central message conveyed through the stories in Surah Kahf?
The central message emphasizes the importance of patience and trusting in Allah’s wisdom, even when events seem perplexing or unjust. The stories illustrate that there is often a divine purpose behind occurrences that we may not immediately understand, urging believers to accept Allah’s plan with contentment. It highlights the immutability of Allah’s Sunnah, warning that patterns of past nations will repeat and to learn from them. The stories also point to the ways a messenger brings a message, by way of arguments from self, the horizons, and history.
2. What three types of argument are presented in Surah Kahf?
Three types of arguments are presented: arguments from the self, arguments from the horizons, and arguments from history. These methods are employed to convey warnings, observations, and ultimately, to offer consolation to both the prophets and the believers.
3. How does Surah Kahf provide comfort and guidance to prophets facing difficulties?
The Surah offers solace by reminding prophets that they are not alone in facing rejection and adversity. It highlights that prophets are also human. It assures them that Allah has a fixed time for decisions and a wisdom behind events, even when results are not immediately apparent. The stories encourage patience, emphasizing that Allah’s mercy and victory will eventually prevail, but haste and ego must be avoided. The message is one of perseverance and trust in Allah’s ultimate plan.
4. What is the significance of the story of Musa (Moses) and Khidr (the wise servant) in the context of Surah Kahf?
The story of Musa and Khidr serves as a practical illustration of the Surah’s core message. It demonstrates that there is divine wisdom behind actions that may seem incomprehensible or even morally wrong from a human perspective. Musa’s inability to understand Khidr’s actions initially underscores the limits of human knowledge and the need to trust in Allah’s greater plan. It illustrates that Allah uses observation, dreams and the unseen to communicate to prophets.
5. In the story of Musa and Khidr, what do Khidr’s actions represent, and what lessons can be derived from them?
Khidr’s actions, such as damaging the boat, killing the boy, and repairing the wall, represent Allah’s divine will and wisdom operating beyond human comprehension. The lessons include:
Trust in Allah’s wisdom: Even when events seem unjust, there is a greater purpose.
Limits of human knowledge: We cannot always understand Allah’s plan.
Patience: Understanding the divine plan requires patience.
Acceptance: We must accept Allah’s will and strive to learn from it.
6. What different methods were used to communicate to the prophets?
Several methods were used to communicate with the prophets, including inspiration or thoughts that speak within, direct sound from behind a veil, visions presented in dreams (which require interpretation), and direct observations or "intentions" shown while awake.
7. How does Surah Kahf address the problem of evil and suffering in the world?
The Surah does not explicitly solve the problem of evil, but rather offers a perspective that acknowledges its existence while emphasizing the presence of a divine plan. It suggests that suffering and apparent evil may serve a greater purpose, even if we cannot always discern that purpose. It urges believers to trust that Allah’s wisdom and mercy will ultimately prevail.
8. What practical advice does Surah Kahf offer for navigating life’s challenges and maintaining faith?
The Surah provides several key pieces of practical advice:
Cultivate Patience (Sabr): Be patient during difficult times.
Trust in Allah’s Wisdom: Have faith that everything happens for a reason.
Act with Wisdom (Hikmah): Strive to understand situations from Allah’s perspective.
Avoid Haste and Ego: Do not rush to judgment or act out of pride.
Seek Knowledge: Strive to increase your understanding of the world and Allah’s plan.
Accept Allah’s Decisions: Be content with Allah’s will and trust in His ultimate plan.
Understanding Allah’s Intentions and Wisdom in Events
Allah’s intentions and wisdom are central themes in the sources, particularly in the context of understanding events and adhering to patience and faith. The sources emphasize that everything happens with Allah’s permission, intention, and will. This perspective is presented as a foundation of faith, suggesting that belief should extend beyond philosophical concepts into a practical understanding that Allah’s wisdom underlies all events.
Key points related to understanding Allah’s intentions:
Unchangeable Sunnah: The ways of Allah (Sunnah) are unchangeable, indicating a consistent and predictable pattern in how events unfold.
Arguments and reasoning: Understanding Allah's intentions involves recognizing arguments from the self, signs from the horizons, and lessons from history.
Patience and consolation: Prophets are advised to practice patience, especially when outcomes seem unpromising. Consolation and patience are advised, coupled with wisdom, to understand the reasons behind Allah's decisions, which are made according to a fixed time.
Wisdom behind events: Believing in Allah's wisdom is what gives a person patience. This requires trusting that Allah's wisdom is at work in everything.
Limits of human knowledge: Humans have limitations in their understanding, and should not try to exceed the scope of their knowledge.
Guidance and Knowledge: The means by which Allah conveys messages, including thought, sounds, intentions, and dreams, all serve to provide guidance and knowledge.
Testing and Trust: Allah tests people by presenting situations where they must use their intellect to understand what they can and trust in their Lord for what remains.
Goodness in intentions: No intention of Allah is devoid of goodness and wisdom. Even when falsehood appears to thrive, it is part of a larger plan that nurtures a greater good.
Divine Scheme: There is a divine scheme in place that governs the happenings of the world. Understanding this scheme completely is beyond human capability, but glimpses can be caught to provide insights.
Mercy and Grace: Events that seem evil may still be acts of mercy from Allah, as demonstrated in the story of the boat.
Guidance in the Quran: The Quran provides guidance, but some facts are beyond human grasp. Analogies are used to explain these concepts, but humans should not try to fully comprehend their reality.
Limits of Knowledge: Humans should recognize the limits of their knowledge and not overstep them, as the basis of all knowledge comes from observation and experience.
Intention and Will: Everything in the world occurs with God's permission, intention and will, and that is the basis of faith.
Trust and contentment: Humans should be content with Allah's will and strive for the right results.
The sources emphasize that while humans may not always understand the reasons behind events, trusting in Allah’s wisdom, mercy, and overarching plan is essential.
Understanding Human Limitations: Knowledge, Senses, and Faith
The sources discuss several limitations inherent in human understanding and knowledge, emphasizing the importance of recognizing these boundaries in the context of faith and wisdom.
Key aspects of human limitations include:
Scope of Knowledge: Human knowledge is limited, and there are matters beyond its scope. Humans should determine their field of knowledge, recognizing what lies within and beyond their grasp. People make a mistake when they do not know the scope of human knowledge.
Senses and Reflexive Knowledge: Knowledge is created through a relationship with metaphysical information within, but it is limited by our senses. Senses connect individuals to both their inner selves and the external world, but these connections have limits.
Rationality and Moral Principles: Intellect and moral principles cannot fully dictate or govern the world, as the laws and regulations of the world operate differently. Moral questions arise based on external circumstances, but the causes of evil may not be known.
Inability to Grasp Divine Intentions: Humans cannot fully grasp the wisdom of Allah's intentions or all the mysteries behind His actions.
Limits of Senses: Humans can increase the capacity of their senses to a degree; however, there are still limits.
Limits of Imagination: While humans possess the ability to extract information and create a world of imagination, this world lacks inherent truth or reality.
Rational vs. Observational: Human intellect does two things: it starts to know the things that are obligatory in the structure of consciousness, and it creates possibilities that become the subject of research through observation.
These limitations suggest a need for humans to trust in Allah’s wisdom and accept that not all events or divine intentions can be fully understood. Humans should be patient and thankful for the knowledge they have, and avoid exceeding the boundaries of their understanding. The source material indicates that this is the greatest blessing.
Patience and Wisdom: Navigating Life’s Challenges
Patience and wisdom are critical virtues discussed in the sources, particularly in the context of understanding Allah’s intentions and navigating life’s challenges. The sources emphasize that patience and wisdom are intertwined and necessary for those who seek to understand the world and act in accordance with divine guidance.
Key aspects of patience and wisdom, as discussed in the sources, include:
Interdependence: Human beings need both patience and wisdom at every step. A plan of action should be made with wisdom, speaking should be done with wisdom, and wisdom should be kept in mind in every action.
Source of Patience: Faith in the wisdom of Allah gives a person patience.
Need for Patience: Patience is advised when matters seem to be getting out of hand.
Patience in the face of adversity: The sources advise patience and satisfaction, especially when the results of one's efforts are not immediately apparent. Prophets, in particular, are encouraged to practice patience when faced with criticism and accusations, understanding that the timing and outcomes are determined by Allah.
Connection to Allah's Wisdom: Patience is essential for those who want to align themselves with Allah's wisdom and understand the divine plan. It involves recognizing that Allah's wisdom underlies everything and trusting in His decisions, even when they are not immediately comprehensible.
Wisdom as a guide: Wisdom involves planning and acting in accordance with Allah's law, recognizing that everything has an appointed time. It requires a comprehensive understanding of goals and objectives from a wise perspective.
Learning Patience and Wisdom: The story of Musa (Moses) is presented as a lesson in patience and wisdom, teaching that one must wait patiently for Allah's permission and have faith in the wisdom behind delays.
Wisdom and Knowledge: When the underlying facts cannot be seen, it becomes necessary to point to the wisdom of Allah.
Wisdom as a Test: Prophets are tested on how they make decisions with their existing knowledge.
Benefits of Patience and Wisdom: Exercising patience and wisdom with regard to outcomes is described as the greatest blessing in this world.
The sources suggest that by combining patience and wisdom, individuals can navigate the complexities of life with a deeper understanding of Allah’s intentions and a greater sense of inner peace. Patience allows one to endure challenges, while wisdom provides the insight to discern the divine purpose behind them.
Understanding the Quran: Context, Reasoning, and Wisdom
The sources discuss several aspects of interpreting the Quran, emphasizing the importance of understanding its context, recognizing different forms of reasoning, and acknowledging the limits of human knowledge.
Key points related to Quran interpretation, according to the sources:
Arguments and Reasoning: Understanding the Quran involves recognizing arguments from the self, signs from the horizons, and lessons from history. These arguments are sometimes presented in detail and sometimes only briefly.
Historical Context: Historical reasoning is important in understanding the Quran. Historical events, including the stories of prophets and their nations, are used to illustrate principles and provide guidance.
Patience and Wisdom: The story of Musa (Moses) and Al-Khidr (a name that comes from a hadith tradition and was adopted in the Tafsir books; the Quran itself does not name him) is presented as a lesson in patience and wisdom, teaching the importance of waiting for Allah's permission and recognizing the wisdom behind delays. The story advises acting with patience and wisdom, and trusting in Allah.
Context and Circumstance: Quranic verses and stories should be understood in the context of the situation in which they were revealed or the purpose they serve. The context of an event is important for understanding the message.
Dreams and Visions: Dreams and visions mentioned in the Quran, particularly those of prophets, are true but may require interpretation. Such visions can take the form of parables and may be interpreted to reveal deeper meanings.
Intention and Observation: Understanding the Quran involves recognizing the importance of intentions and of direct observation.
Limits of Knowledge: Humans have limitations in their understanding and should not try to exceed the scope of their knowledge. One must recognize the limits of one's knowledge when interpreting the Quran.
Trust and Contentment: Humans should be content with Allah's will and strive for the right results.
Guidance and Mercy: The Quran provides guidance, but some facts are beyond human grasp. Analogies are used to explain these concepts, and humans should not expect to comprehend their full reality.
Recognizing Allah's Wisdom: The ultimate goal of Quran interpretation is to recognize Allah's wisdom and mercy in all matters.
The sources suggest that interpreting the Quran requires a combination of intellectual reasoning, historical awareness, and spiritual insight, as well as an acknowledgement of the limits of human understanding. It is a process of seeking guidance and understanding Allah’s intentions, while remaining humble and patient in the face of the unknown.
Prophet Musa: Patience, Wisdom, and Divine Knowledge
The sources discuss Prophet Musa (Moses) primarily in the context of his journey to learn patience and wisdom, as well as to illustrate key principles of Quranic interpretation.
Key aspects of the discussion regarding Prophet Musa, according to the sources:
The Journey for Knowledge: The story of Musa's journey with a disciple to meet Al-Khidr (a servant of Allah) is a central theme. Musa seeks to gain knowledge from Al-Khidr but is warned that he will not be able to remain patient.
Patience and the Wisdom of Allah: The narrative serves to teach patience and to highlight the wisdom of Allah. Musa is advised to be patient and trust in Allah’s wisdom, even when events seem inexplicable or unjust.
Testing of Musa: Musa’s character, particularly his tendency to question events, is evident throughout the story. The journey can be seen as a test for Musa, intended to educate him.
Limitations of Human Knowledge: The story emphasizes the limitations of human knowledge. Musa’s inability to understand Al-Khidr’s actions underscores the idea that humans cannot fully grasp the divine wisdom behind all events.
Events during the Journey: The journey includes several notable events that test Musa’s patience:
The damaged boat: Al-Khidr damages a boat belonging to poor sailors.
The slain boy: Al-Khidr kills a young boy.
The repaired wall: Al-Khidr repairs a collapsing wall in a village where they were denied hospitality.
Explanations and Divine Intentions: Al-Khidr eventually explains the reasons behind these actions, revealing the divine intentions and wisdom that were hidden from Musa. These explanations illustrate that seemingly negative events can have positive purposes and are part of Allah’s plan.
Lessons from the Story: The story of Musa is presented as a parable, teaching believers to trust in Allah’s wisdom, accept the limits of human knowledge, and be patient in the face of adversity. It also highlights the importance of acting with wisdom and aligning oneself with Allah’s laws.
The Meeting Place of Two Rivers: The location where Musa meets Al-Khidr is described as the meeting place of two rivers (Majma' al-Bahrain). The exact location is debated; some suggest it could be near present-day Khartoum, where the Blue Nile and the White Nile converge, while others note that if the episode was a vision, identifying the physical location is unimportant.
Determined Journey: Musa expresses his determination to continue his journey until he reaches the meeting place of the two rivers, even if it takes years.
📑 SERIES: Story MUSA & KHAZIR | موسی اور خضر کا واقعہ (The Story of Musa and Khidr) | AL Bayan – Al-KAHF | JAVED AHMAD GHAMIDI
The Original Text
کہ یہ کوئی نئی بات نہیں اور نہ ہی یہ نئے رسول ہیں، جس طرح اللہ تعالیٰ پہلی امتوں میں اپنے رسول بھیجتا رہا ہے، وہ بھی وہی ہیں جیسے پہلے رسولوں کی امتیں اس قانون کے تحت آئیں جس کا ذکر یہاں کیا گیا۔ آپ بھی اسی طرح اس سے متاثر ہونے جا رہے ہیں۔ اب وہی باتیں ہونے جا رہی ہیں۔ یہ اللہ کی سنت ہے اور یہ سنت ناقابل تغیر ہے۔ اس میں کبھی کوئی تبدیلی نہیں آئے گی اور نہ ہی اس سے پہلے کبھی ہوئی ہے، لہٰذا خبردار رہیں۔ میں نے سب سے پہلے نفس کے دلائل کو بیان کیا، افق کے آثار کی طرف توجہ دلائی، کچھ واقعات کی طرف اشارہ کیا، اس کے بعد بالآخر تنبیہ اور مشاہدہ کے موضوع پر یہ سلسلہ پایہ تکمیل کو پہنچا۔ انبیاء بھی جسم میں انسان ہوتے ہیں، دعوت دیتے ہیں، پیغام پہنچاتے ہیں، اپنے دن رات اپنی قوم کے لیے وقف کرتے ہیں۔ کیا تم قرآن پاک میں دیکھتے ہو کہ کیسی تمنا اور کیسی محبت کا اظہار کیا گیا ہے کہ میں تم ہوں۔ میں تجھ سے کوئی صلہ نہیں مانگتا، یعنی میں نے کوئی مطالبہ نہیں کیا، مجھے تجھ سے کوئی دلچسپی نہیں، سوائے القرباء میں المدت کے، میں تجھ سے تعلق رکھتا ہوں، اس کی محبت مجھے مجبور کرتی ہے کہ میرا رب کہتا ہے: لاالہ نفسک۔ اللہ کی قسم اے ایمان والو، میں اب بھی تمہیں دعوت دے رہا ہوں، پیغام پہنچا رہا ہوں، تمہارے لیے دعا کر رہا ہوں۔ میری آخری خواہش یہ ہے کہ میں تمہیں چھوڑنے سے پہلے یقین کر لو۔ عمل یا اس رویے کا اثر ہوگا۔ قرآن کریم کا شعر دیکھیں تو کہتا ہے کہ اگر نفس سے دلیلیں ہیں، افق سے دلیلیں ہیں تو تاریخ سے بھی دلیل ہوگی۔ بعض اوقات نفس سے دلائل زیادہ مفصل ہوتے ہیں۔ کبھی آفاق کے دلائل زیادہ مفصل ہوں گے، کبھی تاریخی استدلال، جیسے سورہ اعراف میں، کئی حصوں پر پھیلا ہوا ہو گا، بلکہ پوری سورت، لیکن یہ تین طرح کے استدلال ہیں، نفس سے، آفاق سے، اور تاریخ سے اسی طرح جب مضمون تحدید یا مشاہدہ کا مضمون، مضمون اس مقام تک پہنچتا ہے، جہاں یہ مضمون پہنچتا ہے، تو یہ مضمون بھی انسان کو پہنچتا ہے، وہ بھی سکون ہوتا ہے۔ اسے تسلی ہے. یہ تبدیلی تین یا چار آیات میں ہوتی ہے، یعنی اوپر جو کچھ بیان کیا گیا ہے اس کی وجہ سے وضاحت کی گئی ہے، پھر ایک دو آیات میں ایک تاریخی واقعہ کی طرف اشارہ کیا گیا ہے، اور آخر میں قرآن کریم یا رسول اللہ سے متعلق تسلی دی گئی ہے۔ مضمون آگے بڑھے تو کئی جگہ ایسا ہوتا ہے، جیسے ابواب ترتیب دیتے وقت، اس میں بھی کبھی کبھی یہ حالات یکے بعد دیگرے پیش آتے ہیں اور قوم کی طرف سے یہی رد عمل سامنے آتا ہے، تو مثلاً سورۃ الضحی یا سورۃ الم نشرح میں تسلی کی صورتیں ظاہر ہوتی ہیں، پھر اس باب کا وہ حصہ تکمیل کو پہنچتا ہے، یہاں بھی اللہ تعالیٰ نے نصیحت فرمائی ہے، چنانچہ اللہ تعالیٰ نے فرمایا: صبر صبر کی تلقین اسی وقت کی جاتی ہے جب نتیجہ نکلتا دکھائی دے کہ معاملہ ہاتھ سے نکلتا جا رہا ہے۔ نبی کا معاملہ اپنی امت کے بارے میں یہ نہیں ہے کہ وہ مطمئن ہو کر بیٹھ جائے اور یہ کہے کہ اگر اچھا عذاب آئے تو اسے چھوڑ دو۔ سیدنا نوح نے فرمایا کہ لم یلد الفجرا کفارہ۔ عام طور پر، نبی چاہتا ہے کہ اس کی قوم ایمان لائے اور اللہ تعالی کی رحمت سے سرفراز ہو، لیکن وہ مرحلہ کھو جاتا ہے . 
ان رسولوں کے باب میں جنہوں نے اللہ کی اجازت کے بغیر ہجرت کی، یہ طے ہے کہ ان کی ہجرت کا فیصلہ اللہ ہی کرتا ہے، وہ یہ فیصلہ خود نہیں کر سکتے، وہ اپنے اجتہاد سے نہیں کر سکتے۔ ان کا تذکرہ کرتے ہوئے فرمایا گیا کہ اگر تم مچھلی کے آدمی کی طرح جلد بازی میں نہ بیٹھو تو صبر کا وہ مرحلہ آئے گا جس میں اللہ کی رحمت نازل ہوگی اور اللہ تمہیں اپنی فتح سے کامیابی عطا کرے گا لیکن اس سے پہلے ایک مشکل وقت آئے گا۔ مرحلہ انا کا ہے، اس میں جلد بازی نہیں کرنی چاہیے، لہٰذا اب صبر و تحمل کے ساتھ سکون اور سکون کا معاملہ ہے۔ ہمارے لحاظ سے ہم انسان ہیں، کبھی ہمیں چند قدم کے فاصلے پر نظر آتا ہے اور کبھی ہم آگے دیکھتے ہیں اگر آپ اصل حقائق کو نہیں دیکھ سکتے تو اللہ کی حکمت بتانا ضروری ہے، تو اکثر موضوع یہ ہوتا ہے کہ جب تسلی دینا چاہتے ہیں تو کبھی نتیجہ بتا دیا جاتا ہے، اور کبھی پردے کے پیچھے حکمت کام کرتی ہے۔ یہ رہا ہے، یعنی مضمون میں اس کی وضاحت ہو چکی ہے، کیا ایسا نہیں، کہ ایک عمر ہے، ایک مدت ہے، وہ مقرر ہے، وہ جو کچھ کرتے ہیں، کبھی کبھی ہو جاتا ہے، جو رہ گیا ہے، رب، اب ان کی کہانی ختم ہو جائے، یہ خیال بھی پیدا ہوتا ہے۔ اور یہ بھی کہ رب میری امت کی انتہا ہے، اگر آپ کی طرف سے ہدایت ہوتی تو ان دونوں پہلوؤں کو سامنے رکھتے ہوئے کبھی سکون، کبھی صبر، کبھی اطمینان اور ایسی باتیں بھی نصیحت کی جاتی ہیں اور پھر کوئی حکمت یہ بھی بتائی جاتی ہے کہ ایسا کیوں ہے کہ ہم نے ہر چیز کا ایک وقت مقرر کر رکھا ہے، ہمارے فیصلے کیسے ہوتے ہیں، اس بات کو یہاں بیان کر دیا گیا ہے، تاریخ کا ایک واقعہ بھی ہے، تاریخ کا ایک واقعہ بھی ہے۔ انبیاء اور ان کی قوموں کی تاریخ۔ ایسا بھی ہوتا ہے اور عموماً قرآن کریم اسے موضوع بناتا ہے لیکن بعض اوقات اس کے علاوہ بھی دلال میں کچھ واقعات اس طرح پیش کیے جاتے ہیں۔ میں صاحب سے ملا ہوں۔ یہاں اس کا کوئی نام نہیں ہے۔ چونکہ یہ نام روایت میں آیا ہے اس لیے اسے ہم نے کتب تفسیر میں اختیار کیا ہے۔ اس کا تلفظ خضر ہے اور اسے ہمارے شاعروں نے باندھا ہے، اس لیے اسے اختیار کیا جاسکتا ہے۔ اس کا کوئی مطلب نہیں ہے۔ واضح رہے کہ اصل تلفظ ایک ہی ہے۔ اب یہ قصہ قرآن کریم نے پیش کیا۔ اور اب اگلا مضمون اس کے پیچھے ہے جس کی پابندی یا نشاندہی کی گئی ہے یا تنبیہ کی گئی ہے اور جس طرح آخر میں اس نے اپنے قانون کا حوالہ دیا ہے اور خاص طور پر بتایا ہے کہ ہر چیز کے پیچھے ایک حکمت ہوتی ہے۔ اور ہر چیز کے لیے ایک وقت مقرر ہے، اس لیے اسے سامنے رکھ کر استدلال کیا گیا ہے، پس واقعہ اور واقعہ کے پس پردہ موضوع کے درمیان تعلق کو اس طرح واضح کیا گیا ہے کہ اے نبیؐ، ان پر صبر کرو، یعنی اگر ایسا ہی ہو۔ وہ اس مقام پر پہنچ چکے ہیں اور کوئی بات سننے کو تیار نہیں ہیں، اس لیے ٹھیک ہے کہ فیصلہ ہو جائے گا، اس کا وقت مقرر ہو چکا ہے، لیکن اس فیصلے کا آپ کو صبر سے انتظار کرنا پڑے گا۔ اے پیغمبر ان پر صبر کرو اور اس کے لیے اس واقعہ کو سامنے رکھو واقعہ کو سامنے رکھو یہ معلوم ہے کہ عام طور پر جب واقعات بیان ہوتے ہیں تو قرآن کریم فعل متعین کرتا ہے، یہاں بھی ایسا ہی ہوا، یعنی فعل ہے لیکن اس کے پیچھے کوئی فعل نہیں ہے، اس لیے فعل لفظ کے لحاظ سے استعمال ہوتا ہے۔ ہم خود سمجھتے ہیں کہ کبھی ایسا ہوتا ہے، یاد رکھنا، کبھی ایسا ہوتا ہے، یاد دلانا، یاد دلانا، اور بعض اوقات صرف نبی ہی ہوتا ہے جو مخاطب ہوتا ہے، اسے اپنے سامنے رکھو، ذہن میں رکھو، وہ اسے الفاظ کے لحاظ سے نکالتے ہیں، ان کے مقابلے میں اپنے دل میں صبر کرو۔ اے نبی صلی اللہ علیہ وسلم اپنے سامنے موسیٰ فتح کا واقعہ رکھو جب موسیٰ نے اپنے شاگرد سے کہا تھا کہ میں نے بیان کر دیا ہے۔ اس سے واضح ہوتا ہے کہ اس کا مقام صرف ایک خادم کا نہیں بلکہ ایک نوجوان ساتھی اور طالب علم کا ہے۔ اس لحاظ سے دیکھا جائے تو لفظ شاگرد کا ترجمہ بہت موضوعی ہو گا، یعنی یہ لفظ بعض اوقات عام ہوتا ہے، لیکن موقع لفظ ہمیں بتاتا ہے کہ یہ لفظ یہاں کس پہلو سے استعمال ہوا ہے، لہٰذا اگر اردو زبان میں لفظ نوجوان کے استعمال کو بھی دیکھیں یا بوڑھے استعمال کر رہے ہوں، تو ان تمام پہلوؤں میں یہ لفظ استعمال ہوتا ہے۔ سے ہم کسی ایسی چیز کا ترجمہ اور اختیار کر سکتے ہیں جو اب منظر عام 
پر آ رہی ہے۔ اس واقعہ کے بارے میں ایک اہم بحث یہ پیدا ہوتی ہے کہ آیا یہ ایک واقعہ ہے جیسا کہ ہم اس کا تجربہ کرتے ہیں۔ یا خواب میں کچھ حقائق اس لیے دکھائے گئے ہیں کہ ہمارا وقت ختم ہو گیا ہے، اس لیے ہم اس پر اگلی نشست میں بات کریں گے اور پھر دیکھیں گے کہ یہ واقعہ کیا ہے، اس میں کیا پیش کیا گیا ہے اور یہ کس پہلو سے ہے۔ کہی گئی باتوں سے اس کا کیا تعلق ہے ؟ اللہ مجھ پر رحم فرمائے ، الحمدللہ۔ بات اس سے شروع ہوتی ہے کہ میں نے واضح کیا کہ خضر کا نام قرآن میں نہیں ہے، کیا کسی حدیث میں مذکور ہے، اسی وجہ سے وہ ہمارے درمیان مشہور ہوئے۔ ایسا لگتا ہے کہ جس طرح واقعات رونما ہوتے ہیں، اسی طرح ہمیں اپنے ساتھ پیش آنے والے واقعات کا سامنا کرنا پڑتا ہے۔ اسی طرح کی بات یہ بھی ہے۔ ایک بار پھر ہم وضاحت کرتے ہیں کہ ہجرت کا معاملہ اس طرح پیش آیا، نبی صلی اللہ علیہ وسلم کی سیرت کا بیان اور اسی طرح دوسرے انبیاء کی سیرت کا بیان بھی اسی طرح ہوا ہے۔ وہ بھی ان کے ہیں یا دکھائے جاتے ہیں۔ جب ہم کسی نبی کی زندگی کو بیان کریں گے تو اس کو اس طرح بیان کیا جائے گا جس طرح دنیوی زندگی میں پیش آنے والے واقعات، اگر ان پر کوئی بات نازل ہوئی یا ان پر کوئی مشاہدہ کیا گیا تو۔ ہو چکا ہے اور بیان بھی کیا جائے گا۔ یہ نبی کی سیرت ہے اور نبی کا مفہوم یہ ہے کہ اللہ تعالیٰ نے ایک شخص کو خطاب کے لیے چنا ہے، اسے چن کر یہ ذمہ داری دی گئی ہے۔ کہ وہ اللہ کا پیغام دوسروں تک پہنچائے۔ اس لیے اللہ تعالیٰ اپنا پیغام اس تک پہنچاتا ہے۔ اس کے لیے جو طریقے استعمال کیے گئے ہیں ان کا خلاصہ سورہ شوریٰ میں کیا گیا ہے۔ بتایا گیا ہے کہ دو سورتیں ہیں۔ خیال ان کے باطن میں بولتا ہے، جسے ایک شاعر نے کندراں بہارف می روئی کلام کہا ہے۔ بلکہ خیال بولتا ہے، یعنی گویا میں بات کر رہا ہوں یا کوئی اور مجھے جواب دے رہا ہے، تو یہاں خیال نے الفاظ کی شکل اختیار کر لی ہے اور الفاظ آواز بن کر نکلے ہیں، لیکن ایسا خواب یا خواب میں نہیں ہوتا، جس میں الفاظ آواز ہوتے ہیں۔ وہ وجود سے نہیں نکلتے، خیال بولتا ہے، یعنی یہ بولتا ہے، ہم اسے سنتے ہیں، سمجھتے ہیں، اور اسی طرح جب ہم بیدار ہوتے ہیں، اگر ہم نے کوئی بات کہی ہے تو ہم اسے دہرا سکتے ہیں، اگر یاد رہے تو مجھے یہ جواب ملا۔ فلاں نے جواب دیا، “لیکن اگر کبھی سوچو اور پیچھے مڑ کر دیکھو تو اس میں آوازیں اس طرح سنائی نہیں دیتی تھیں جیسی کہ اب سنی جاتی ہیں، اس میں فکر نے ایسی زبان اختیار کی ہے جو سنی اور سمجھی جاتی ہے۔” یہ وہاں ہے، لیکن اس میں کوئی آواز نہیں ہے۔ ایک بات یہ ہے کہ انسان کو اس طرح کچھ سننے کی صلاحیت دی گئی ہے۔ وہ خیال پر اثر انداز ہوتے ہیں اور پھر وہ خیال بولنا شروع کر دیتے ہیں۔ دوسرا طریقہ یہ ہے کہ کوئی چیز باقاعدہ آواز کے ساتھ سنی جاتی ہے۔ ایک حجاب ہے جیسے درخت کو اللہ تعالیٰ نے سیدنا موسیٰ علیہ السلام سے بات کرنے کے لیے پردہ کیا ہے، حجاب کے پیچھے سے آواز آتی ہے۔ یہ آواز اسی طرح کی ہے جس طرح ہم آواز سنتے ہیں۔ اور بندہ اسے سنتا ہے، اس کی حقیقت کو سمجھتا ہے، پھر جب حضرت موسیٰ کو دیا گیا تو کون سا مکالمہ ہوا، قرآن نے پورا مکالمہ نقل کیا ہے، انہیں اس میں کوئی عجب محسوس نہیں ہوا، وہ جواب دے رہے ہیں۔ وہ سوالات سن رہے ہیں، انہیں ہدایات دی جارہی ہیں، انہیں بتایا جارہا ہے کہ انہیں کون سا کام سونپا گیا ہے، جو مسائل انہیں درپیش ہیں، ان کے سامنے رکھے گئے ہیں، ان کا حل بتایا گیا ہے، تو ایک مکالمہ کی طرح۔ جیسا کہ ہوتا ہے مکالمہ ہوا لیکن جیسا کہ میں نے عرض کیا ہے خیال کو لفظوں میں ترجمہ کرنا پڑتا ہے خیال بولتا ہے اس میں حروف اور آواز نہیں ہوتی بلکہ کہنے والا بولتا ہے اور سننے والا سنتا ہے۔ اس کے ساتھ انبیاء کو سکھانے کے لیے ایک اور طریقہ اختیار کیا جاتا ہے، وہ ہے نیت کا طریقہ۔ اس کا مطلب یہ ہے کہ نہ کوئی بات ہوئی ہے اور نہ کوئی پیغام دیا گیا ہے بلکہ حقیقت دکھائی جا رہی ہے یعنی مشاہدہ کر لیا گیا ہے۔ اس کی دو نمایاں مثالیں حدیث میں بیان ہوئی ہیں۔ نبی کریم صلی اللہ علیہ وسلم نے فرمایا کہ جب مجھے اسرائیل کا رویا دکھایا گیا اور میں نے صبح کو کہا کہ مجھے بیت المقدس لے جایا گیا ہے۔ لوگوں نے اس پر طرح طرح کے اعتراضات کرنے شروع کر دیے، چنانچہ جس مجلس 
میں یہ معاملہ بیان کیا جا رہا تھا، وہاں قریش کے کچھ رہنما بھی موجود تھے۔ جب وہ سوال پوچھنے لگے تو میں گھبرا گیا۔ کسی چیز کو اپنی مرضی سے یا کسی کی توجہ سے دیکھا جاتا ہے، پھر جو کچھ وہاں دکھایا جاتا ہے اور خواب میں ہم عام مشاہدہ کرتے ہیں۔ اگر آپ دکھا دیں کہ یہاں کتنے جال لگے ہوئے ہیں تو انہوں نے اسی طرح کے سوالات کرنا شروع کر دیے، اس حقیقت کے پیش نظر کہ رسول اللہ صلی اللہ علیہ وسلم کی دعا کا انکار کیا جاتا ہے۔ گھبراہٹ یہ پیدا ہوئی کہ مجھ سے یہ ممکن نہ ہو، وہ باتیں گزر گئیں، اگر اس وقت میرے ذہن میں ہوتی تو اب وہ مجھ سے ان تفصیلات کے بارے میں پوچھیں گے، تو میں انہیں کیسے بیان کروں گا، پھر وہ منظر میرے سامنے آنے لگا، یعنی پھر سے گویا ایک قسم کی آرت شروع ہوگئی۔ اب وہ یہاں گم نہیں، خوابوں کا علم نہیں، چیزیں اسی طرح دیکھی جا رہی ہیں، تو اس نے کہا کہ میں اسے دیکھتا تھا، جو سوال کرتا تھا، ویسا ہی منظر سامنے آتا اور میں اسے بیان کرتا چلا جاتا۔ تو یہ ایک مثال ہے۔ دوسری مثال وہ ہے جس میں رسول اللہ صلی اللہ علیہ وسلم نماز پڑھا رہے تھے۔ انہوں نے کہا کہ جب میں نماز کے لیے کھڑا ہوا تو مجھے جنت اس طرح دکھائی گئی جیسے اس وقت دکھائی دی لیکن یہ واضح نہیں کہ وہ سوئے ہوئے ہیں یا نیند کی حالت میں۔ میں نے محسوس کیا کہ میں اسے توڑ سکتا ہوں، اس لیے میں نے اپنا ہاتھ آگے کیا، یعنی یہ شکل بنائی گئی۔ یہ نیتوں کا ایک پہلو ہے۔ ایک پہلو وہ ہے جس میں خواب میں کوئی چیز یا مشاہدہ ہوتا ہے۔ خواب میں کوئی چیز دکھائی جاتی ہے، یعنی نبی صلی اللہ علیہ وسلم سو رہے ہیں، پھر دکھایا جاتا ہے۔ ان میں جو کچھ دکھایا گیا ہے وہ حقیقت ہے، اس میں شیطان کے دخل کا سوال نہیں ہے اور نہ ہی روح کے دخل کا کوئی سوال ہے، بلکہ وہ تمثیلوں کی صورت میں دکھائے گئے ہیں، اس لیے سورہ یوسف میں ان کی تشریح کی گئی ہے۔ قرآن نے واضح کر دیا ہے کہ انبیاء کو دکھائے گئے یا ان سے متعلق لوگوں کو دکھائے گئے خواب اگر ان سے متعلق کوئی نتیجہ نکالیں تو وہ سچے ہیں، چنانچہ بادشاہ کا خواب بھی سچا تھا۔ جس شخص نے اس کے ساتھ بدتمیزی کی اس کا خواب بھی سچ تھا۔ ان کا خواب بھی سچا تھا اور سیدنا یوسف کا خواب بھی، جس سے یہ نکلا کہ باپ نے گیارہ ستاروں اور سورج اور چاند کو اپنے سامنے سجدہ کیا۔ اگر میں دیکھ رہا ہوں تو یہ سب سچے خواب ہیں، لیکن تعبیر کے محتاج ہیں، یعنی تعبیر ہو جائے گی، اس لیے جب نیند میں کوئی چیز اس طرح دکھائی جائے تو اس واقعہ کی تعبیر بھی ہو سکتی ہے۔ عام طور پر، اس کی تشریح کی ضرورت ہے. 
اس دیباچے میں میں نے یہ ساری باتیں اس لیے بیان کی ہیں کہ پچھلی بار میں نے یہ واقعہ بیان کیا تھا جو آگے بیان کیا جا رہا ہے۔ یہ خواب بھی ہو سکتا ہے اور یہ ایک حقیقی واقعہ ہے، یعنی ایسا ہی واقعہ سیدنا موسیٰ علیہ السلام کے کسی سے ملنے کے وقت ہوا، جس کی مزید تفصیل بیان کی جا رہی ہے، اور یہ خواب کا واقعہ بھی ہو سکتا ہے۔ اسرائیل کا خواب بھی اسی طرح بیان کیا گیا ہے۔ سورہ بنی اسرائیل کے شروع میں ایک لفظ بھی ایسا نہیں ہے جس سے معلوم ہوتا ہو کہ یہ واقعہ خوابیدہ ہے۔ اسی لیے لوگ اب تک اس پر بحث کر رہے ہیں، یعنی اگر واقعہ سبحان الذی یسرہ بابدا لیل من المسجد الحرام المسجد المجید البرکانہ حول پڑھیں تو اس پوری آیت کو عربی زبان میں پڑھیں، ترجمہ میں پڑھیں، مزید پڑھیں۔ بعد میں اسی سورت میں فرمایا کہ یہ جاگنے کا واقعہ نہیں تھا بلکہ ایک خواب تھا، خواب دکھایا گیا تھا۔ کیونکہ یہ اللہ تعالیٰ کی طرف سے واقعات ہیں، یعنی ان کی تاویل کی ضرورت ہے، لیکن وہ واقعات ہیں، ان کو اس وضاحت کی ضرورت نہیں۔ لہٰذا لفظ نیت سے مراد یہ ہے کہ جو معاملات انبیاء علیہم السلام کے ہوتے ہیں وہ وحی کے معاملات ہیں، یعنی کوئی بات سنی جاتی ہے، سنی جاتی ہے، اس کی صورتیں بیان کی جاتی ہیں، اسی طرح کچھ چیزیں دکھائی جاتی ہیں، یہ آراء ہے۔ اور دکھانے کے لیے دونوں طریقے استعمال کیے جاتے ہیں۔ بیداری میں انا ظاہر ہونے لگتی ہے۔ یہ ”عراق” کا ایک پہلو ہے اور دوسرا یہ کہ اسے خواب میں دکھایا جائے۔ یا بیداری میں دکھایا گیا ہے، البتہ اگر اس کی حقیقت کو مزید کسی بحث میں بیان کرنا ہو تو بتایا جاتا ہے۔ میں نے ان چیزوں کو اپنے سامنے رکھ کر سمجھا دیا ہے اور اس میں لکھا ہے۔ ہم واقعہ پڑھنے جا رہے ہیں۔ میں پہلے بھی کہہ چکا ہوں کہ یہ واقعہ یہاں سیاق و سباق کے لحاظ سے آیا ہے، یعنی صبر اور حکمت کی تعلیم دینے کے لیے۔ یہ بتانا مقصود ہے کہ اب ہجرت کی اجازت اس وقت تک نہیں دی جاتی جب تک کہ وہ موصول نہ ہو جائے، صبر سے انتظار کرنا پڑتا ہے اور انتظار کے لیے اس بات پر یقین ہونا ضروری ہے کہ کسی چیز میں تاخیر ہو رہی ہے، اس لیے اگر اس میں اللہ کی طرف سے کوئی حکمت ہے تو اللہ کیا کرتا ہے؟ حکمتیں ہیں اور ان حکمتوں کو سمجھنے کے لیے انسان کو صبر کیوں کرنا چاہیے۔ اس حقیقت کو بیان کرنے کے لیے یہ واقعہ بیان کیا گیا ہے۔ خواب خواب بھی ہو سکتا ہے کیونکہ انبیاء علیہم السلام کے خواب سچے ہوتے ہیں یعنی ان میں غلطی کا سوال ہی پیدا نہیں ہوتا۔ چونکہ انبیاء کے نظارے سچے ہیں، اس لیے قرآن اور بائبل دونوں سے یہ بات واضح ہے کہ بعض اوقات ان کو بالکل اسی طرح بیان کیا جاتا ہے جس طرح دنیا کی بیداری کے واقعات ہوتے ہیں۔ بیانات اس لیے بنائے جاتے ہیں کہ وہ سچے ہیں، اس لیے جب اللہ تعالیٰ ان کو بیان کرتا ہے تو عہد نامہ قدیم میں اور قرآن میں بھی کچھ بیانات ہیں، وہ اسے اسی طرح بیان کرتے ہیں جس طرح کوئی واقعہ بیان کیا جا رہا ہے، یعنی اللہ کا۔ نقطہ نظر سے دیکھیں تو دونوں چیزیں دکھائی دے رہی ہیں۔ یعنی قرآن اور بائبل کی بعض اوقات بالکل اسی طرح تشریح کی جاتی ہے جس طرح روشن خیال علماء نے سورہ بنی اسرائیل میں واقعات کو بیان کیا ہے۔ اسراء کے واقعہ کو دیکھیں تو سورہ بنی اسرائیل کی ابتدا اسی واقعہ کی تفصیل سے ہے اور کہا گیا ہے کہ پاک ہے وہ جو اپنے بندے کو راتوں رات مسجد حرام سے مسجد حرام تک لے جائے۔ اب ان الفاظ میں ایسی کوئی بات نہیں کہ اس کی اصل میں وضاحت کیسے ہوئی؟ آئیے سورت کی تلاوت کرتے ہیں، پھر آگے بڑھیں اور وضاحت کریں کہ یہ خواب تھا۔ یہ اس لیے نہیں تھا کہ بتانا تھا، اس لیے نہیں کہ ایک مسئلہ زیر بحث تھا جس میں حوالہ دینا تھا، اس لیے وہاں یہ بیان کیا گیا ہے کہ وما نے جھوٹا کہا ہے کہ جو رویا ہم نے آپ کو دکھایا وہ اوپر بیان کر دیا ہے۔ ان لوگوں نے اسے فتنے میں بدل دیا ہے اس لیے اس کا اس طرح ذکر کرنا پڑا تو اس کی حقیقت واضح ہو گئی۔ اگر آپ سورہ بنی اسرائیل کو دیکھیں تو اسرائیل کا واقعہ اسی طرح بیان کیا گیا ہے، لیکن اسی سورت میں مزید قرآن نے واضح کیا ہے کہ یہ ایک خواب تھا جو رسول اللہ صلی اللہ علیہ وسلم کو دکھایا گیا تھا۔ واقعہ ہو یا خواب، ہر شخص اس میں اپنی ترجیح قائم کر سکتا ہے۔ میں نے اپنی ترجیح بیان کی ہے، تاہم، میں دوسرے 
امکان کی نفی نہیں کرتا۔ جگہ ایک ہی ہو سکتی ہے۔ اس میں دو دریاؤں کے ملنے کی جگہ کا ذکر ہے۔ اسے عربی میں مجمع البحرین کہتے ہیں۔ مصر میں اور سوڈان میں اس علاقے میں اسے مجمع البحرین کہا جاتا ہے۔ اسی لفظ کا اوپر ذکر ہوا ہے۔ تاہم اگر اسے عالمی بیداری کا واقعہ سمجھا جائے تو دو دریاؤں کے ملنے کی جگہ ایک ہی ہوسکتی ہے، یعنی اگر یہ واقعہ ہماری دنیا جیسا ہے۔ میں نے دیکھا ہے کہ یہ کوئی خواب نہیں ہے، اس لیے اس میں البحرین کی مجلس کا ذکر ہے، یعنی دو دریاؤں کے ملنے کی جگہ، ظاہر ہے، پھر مسئلہ پیدا ہوتا ہے کہ یہ کون سی جگہ ہے، یعنی سیدنا موسیٰ مصر میں ٹھہرے، بعد میں وہاں سے چلے گئے۔ وہ 40 سال صحرائے سینا میں رہے اور وہ فلسطین نہ پہنچ سکے۔ یہ واقعہ کب ہوا؟ بائبل میں اس کا کوئی ذکر نہیں ہے۔ یہ معاملہ کہاں ہوا؟ اگر آپ خواب کو مان لیں گے تو ان سے کچھ پیدا نہیں ہوگا۔ کسی سوال کی ضرورت نہیں۔ اگر بیداری کے واقعہ پر یقین کیا جائے تو دونوں دریاؤں کے ملنے کی جگہ وہی ہو سکتی ہے جہاں دریائے نیل کی دو بڑی شاخیں بحرالبائز اور بحرال الازرق موجودہ شہر خرطوم کے قریب آپس میں ملتی ہیں۔ یہ شاخوں میں بٹ جاتا ہے اور وہ دونوں شاخیں وہاں مل جاتی ہیں، ایک کا نام البحر البیز، سفید دریا اور دوسرا ازرق یعنی نیلا دریا ہے۔ اور میں نے عرض کیا کہ اگر خواب تھا تو اس سوال پر تحقیق کی ضرورت نہیں۔ قرآن کے صحابی فرحیح نے بالکل صحیح لکھا ہے کہ جن علاقوں میں حضرت موسیٰ نے اپنی پوری زندگی گزاری وہاں اس ایک جگہ کے علاوہ کوئی جماعت نہیں ہے۔ بحرین نہیں ملتا، یعنی جہاں دو دریا آپس میں ملتے ہیں اور کوئی جگہ نہیں ہے، تو ہمیں یہ ماننا پڑے گا کہ واقعہ وہیں پیش آیا، اور اسی طرح بحرین کی اس ایک مجلس کے علاوہ واقعہ میں بیان کردہ تمام چیزوں کا ذکر نہیں ہے، کیونکہ بائبل میں اس کا کوئی ذکر نہیں ہے۔ اگر ایسا ہوا تو سوال باقی ہے کہ یہ واقعہ آخر کیوں پیش آیا، وہ سفر پر نکلے اور ان کے ساتھ ان کا ایک شاگرد بھی تھا، پھر تاریخ میں اس کا ذکر کیوں نہیں کیا گیا؟ یہ سوال ان لوگوں کے لیے بھی اہم ہے جو اسے ایک واقعہ سمجھتے ہیں، اسے بصیرت کی دنیا کا معاملہ سمجھتے ہیں، اس سوال کی کوئی ضرورت نہیں، البتہ یہ معاملہ ہے، اب اس میں کہا گیا ہے کہ اے نبیؐ، ان کے سامنے صبر کرو، یہ واقعہ کا سیاق و سباق ہے۔ اے نبی صلی اللہ علیہ وسلم ان پر صبر کرو اور وہ واقعہ ان کے سامنے رکھ دو۔ یہ شروع ہوتا ہے کہ موسیٰ ( علیہ السلام ) اپنے شاگرد کے ساتھ چل رہے ہیں۔ وہ کہتے ہیں کہ چل رہا ہوں، یہ ایک لمبا سفر ہے، اور کوئی منزل نہیں، تو کہتے ہیں کہ مجھے چلتے رہنا چاہیے، یعنی جب تک میں دو دریاؤں کے ملنے کی جگہ نہ پہنچ جاؤں، مجھے چلتے رہنا ہے، یعنی یہ میری منزل کی طرح ہے۔ جب تک مجمع البحرین دو دریاؤں کا ملاپ ہے، میں اپنا سفر نہیں روکوں گا، میں چلتا رہوں گا۔ جب تک میں دو دریاؤں کے ملنے کی جگہ نہ پہنچ جاؤں یا برسوں یونہی چلتا رہوں، یعنی نہیں رکوں گا، مجھے منزل تک جانا ہے، کیا ایسا نہیں ہے کہ کسی نے کہا ہو کہ جہاں دو دریا ملتے ہیں، تم ایسے ہی چلتے رہو۔ جہاں دو دریا ملتے ہیں وہ منزل ہے تو وہ کہے گا کہ میں چلتا رہوں گا چاہے برسوں چلتے رہوں اس سفر میں نہیں رکوں گا۔ یہ عزم و حوصلے اور جذبے کا اظہار ہے اور یہ لفظ بہ لفظ ٹپک رہا ہے۔ استاد امام لکھتے ہیں کہ وہ کہتے ہیں کہ یا تو میرا مطلب ہے کہ سیدنا موسیٰ علیہ السلام کہتے ہیں کہ میں یا تو مجمع البحرین میں اس جگہ پہنچوں گا جہاں پہنچوں گا۔ کیونکہ مجھے ہدایت دی گئی ہے۔ مجھے کچھ بتانے کے لیے بلایا گیا ہے، اس لیے میں اب نہیں رکوں گا، میں اس مقام تک پہنچنے تک چلتا رہوں گا، اور اگر وہ منزل برسوں نہ آئی تو برسوں چلتا رہوں گا۔ ورنہ یہ شخص بہرحال اس محبوب سفر پر جا رہا ہے اور اس عزم کے ساتھ روانہ ہو رہا ہے کہ یا تو اس کی منزل مقصود ہے یا وہ مرنا چاہتا ہے، یعنی اس سفر کے لیے مجھے اپنا سب کچھ قربان کرنا ہے، تو گویا یہ عزم ہی ہے کہ اس نے خواب میں یا کسی حقیقی واقعہ میں اپنے شاگرد پر ظاہر کر کے کہا کہ میں جاری رکھوں گا، جب وہ مچھلیوں سے ملاقات کے لیے دریائے مچھلی کے پاس پہنچے تو وہاں پہنچ گئے۔ ان کا ناشتہ. 
وہ چلتے وقت اپنے ساتھ کچھ لے گیا تھا، وہ بھول گیا۔ شاگرد نے اسے اٹھایا ہوگا۔ جب وہ ندیوں کے جلسہ گاہ پر پہنچا تو ناشتے میں تلی ہوئی مچھلی بھول گیا۔ میں نے وضاحت کی ہے کہ اس کے لیے لفظ “فوری گلینڈ” استعمال ہوا ہے۔ یہ لفظ کسی بھی طرح بغیر بھنی ہوئی یا زندہ مچھلی کے لیے موضوع نہیں ہے، اس لیے یہ اس بات کا واضح ثبوت ہے کہ مچھلی بھنی ہوئی ہے۔ زندہ مچھلی لے کر اتنے لمبے سفر پر جانے اور پھر اسے ناشتہ کہنے کی کوئی وجہ نہیں ہے۔ اس سے صاف معلوم ہوتا ہے کہ یہ تلی ہوئی مچھلی تھی، چنانچہ جب وہ دریاؤں کے جلسہ گاہ پر پہنچے تو وہ اپنے ناشتے میں تلی ہوئی مچھلی بھول گئے اور اس نے اسے وہیں رکھ لیا۔ کرےگا نے مڑ کر دیکھا کہ بھنی ہوئی مچھلی درد کے ساتھ باہر نکل کر دریا میں چلی گئی۔ وہ مچھلیاں جاتے ہوئے دریا میں چلی گئیں جب وہ جلسہ گاہ پر پہنچا تو وہ اپنے ناشتے میں تلی ہوئی مچھلی بھول گیا اور اس نے راستہ بنانے کے لیے دریا میں ایک سرنگ کھودی۔ انہیں ایک نشانی بتائی گئی کہ ایسا ہو جائے گا، پھر وہیں رک جاؤ، یعنی جب دو دریاؤں کے ملنے کی جگہ پر پہنچو گے تو ایک واقعہ ہو جائے گا، یہی تمہارا مقدر ہے۔ پھر جب آگے بڑھے تو موسیٰ نے اپنے شاگرد سے کہا، قل فتح، اتنا کھانا بھائی، وہ ہمارا ناشتہ کھاتے ہیں، جو پیک کیا ہوا تھا، وہ کھاتے ہیں۔ ذرا ناشتہ لے آؤ اور جیسے ہی ان کے کہنے پر کھا پی لو۔ حیرت کی بات ہے کہ کوئی اپنی بات بڑی ہچکچاہٹ سے کہتا ہے۔ ہوا تھا کہ وہ دونوں دریا کہاں سے شروع ہوتے ہیں جب ہم اس چٹان کے پاس ٹھہرے ہوئے تھے تو میں نے کچھ دیر تک مچھلی کے بارے میں نہیں سوچا یعنی میں اس منظر میں گم ہو گیا اس وقت میں نے مچھلی کے بارے میں نہیں سوچا اور حضرت کیا کہوں، یہ شیطان ہی تھا جس نے مجھے یاد کرنا بھلا دیا اور اس نے عجیب طریقے سے دریا میں اپنا راستہ تلاش کیا۔ میری سمجھ میں نہیں آیا کہ کیا ہوا۔ لیکن جیسے ہی میں نے دیکھا، مچھلی نے چھلانگ لگائی اور وہ زندہ ہو کر دریا میں چلی گئی۔ تو گویا اس طالب علم نے نہایت واضح الفاظ میں اپنی معذرت کا اظہار کیا ہے کہ یہ واقعہ پیش آیا ہے، اب میں کیا کروں، اگر آپ اس پر غور کریں، اگر آپ عربی زبان اور اس کے ادب سے واقف ہیں۔ ایک ایک لفظ رک کر بولا گیا اور قرآن نے اسے بالکل اسی طرح بیان کیا ہے، یعنی گویا کوئی بات جھجکتے ہوئے کہی جارہی ہے۔ ہم نے اسی رعایت کے ساتھ ترجمہ کیا ہے۔ میرے پاس معاوضے کی کوئی صورت نہیں تھی۔ میں اسے لے گیا اور اب وہ مچھلی دریا میں چلی گئی ہے۔ اس نے معافی مانگ لی ہے۔ یہ کہانی ابھی تک چل رہی ہے۔ ہمارا وقت ختم ہو گیا ہے۔ اللہ کے نام سے جو بڑا مہربان نہایت رحم والا ہے، خواتین و حضرات، ہم سورہ کہف کی آیت نمبر 64 کے سبق میں پناہ لے رہے ہیں۔ سیدنا موسیٰ علیہ السلام کے سفر کا واقعہ زیر بحث ہے۔ نیز، وہ کوئی واقعہ تخلیق کرنے یا تاریخ بیان کرنے کا ارادہ نہیں رکھتے۔ یہاں سورہ کے سیاق و سباق میں رسول اللہ صلی اللہ علیہ وسلم کو تسلی دی گئی ہے اور بتایا گیا ہے کہ اگر دعوت کے نتائج اس کی توقع کے مطابق نہ آئے۔ اس لیے اللہ کی حکمت پر نگاہ رکھیں۔ اللہ دنیا کا نظام چلا رہا ہے۔ اس میں جو کچھ ہوتا ہے وہ بڑی حکمت کے تحت ہوتا ہے۔ اللہ کی حکمت پر ایمان ہی انسان کو صبر عطا کرتا ہے۔ یہ انبیاء کے لیے بہت بڑا امتحان ہے۔ وہ اپنی دعوت کو پوری خوش اسلوبی کے ساتھ پیش کرے، لوگوں کے سامنے اسے ہر لحاظ سے واضح کرے، ان کے اعتراضات کا جواب دے، لیکن لوگ سننے کو تیار نہیں، اس پر الزام لگاتے ہیں، ہزار طرح سے اس پر تنقید کرتے ہیں۔ ایسے موقعوں پر خود کو نشانہ بنایا جاتا ہے، انسان تو انسان ہے، اس کی فطرت بھی رد عمل ظاہر کرتی ہے۔ قرآن کریم نے ایسے تمام مواقع پر یہ نصیحت کی ہے اور کہا ہے کہ جو ذمہ داری تمہیں دی گئی ہے اسے خود ادا کرو۔ ہجرت کا موقع کب آئے گا ؟ قوم کو چھوڑنے کا وقت کب ہے؟ کارروائی کا وقت کب ہے؟ انجام طے ہے، یہ سارے فیصلے آپ کے کرنے کے نہیں، اللہ تعالیٰ کرے گا، اس لیے پیچھے مڑ کر دیکھیں تو صبر سے کام لینے کی تلقین کی گئی ہے۔ میرے خیال میں انسان کو ہر قدم پر ان دو چیزوں کی ضرورت ہوتی ہے، اسے حکمت کے ساتھ لائحہ عمل بنانا چاہیے، حکمت سے بات کرنی چاہیے، ہر عمل میں حکمت کو ذہن میں رکھنا چاہیے، اور تمام اہداف و 
مقاصد کو حکمت کے نقطہ نظر سے زیر بحث لانا چاہیے۔ اور اللہ کے قانون کے مطابق چونکہ یہ طے ہے کہ ہر چیز کا ایک وقت مقرر ہے جس طرح اللہ نے میرے اور تمہارے لیے ایک وقت مقرر کیا ہے جب ہماری مہلت ختم ہو جائے گی اور ہمیں واپس بلایا جائے گا۔ جس طرح قوموں کے لیے ایک ڈیڈ لائن ہے، اسی طرح اللہ تعالیٰ اگر کسی کو کوئی مشن سونپتا ہے تو اس کے لیے بھی ایک ڈیڈ لائن ہے۔ مجھے صبر اور حکمت کا طریقہ اختیار کرنا چاہیے، اللہ پر بھروسہ رکھنا چاہیے اور اللہ پر بھروسہ اسی وقت باقی رہتا ہے جب انسان کو یہ اطمینان ہو کہ میرے رب کی حکمت ہر چیز میں کام کرتی ہے اور اس کی حکمت کا کوئی پردہ نہیں ہے۔ اس واقعہ میں یہی سبق دیا گیا ہے۔ کہا جاتا ہے کہ سیدنا موسیٰ کو اللہ تعالیٰ نے دیکھا۔ یہ مشاہدہ براہ راست تھا یا خواب میں۔ کہ وہ سفر پر ہیں، ان کا کوئی خادم یا ان کا کوئی شاگرد ان کے ساتھ ہے اور وہ کہہ رہے ہیں کہ میں اس وقت تک چلتا رہوں گا جب تک کہ وہ مجمع البحرین یعنی دو دریاؤں کے ملنے کی جگہ نہ پہنچ جائیں۔ وہ دو دریاؤں کے سنگم تک نہیں پہنچ پاتے، وہ اس عزم کا اظہار کرتے ہیں کہ چاہے مجھے برسوں چلنا پڑے، میں چلتا رہوں گا، جو کچھ مجھے بتایا گیا ہے اس کے حصول کے لیے میں اس سفر کو نہیں روکوں گا۔ وہ بڑے عزم کے ساتھ اس وقت تک اپنے عزم کا اظہار کرتے ہیں جب تک کہ وہ جگہ ختم نہ ہو جائے۔ کافی دیر وہاں رہنے کے بعد معلوم ہوا کہ رسی کے ساتھ کچھ لے جایا جا رہا ہے، وہ بھول گئے اور آگے کا سفر کرنے لگے۔ غالباً یہ سوچ کر کہ میں ایسی عجیب بات کا ذکر کروں گا تو سیدنا موسیٰ کیا مانیں گے اور یہ میری طرف سے غلطی تھی، میں نے انہیں یہ نہیں بتایا۔ کہنے لگا کہ ہم اس سفر سے بہت تھک گئے ہیں اس لیے وہ ناشتہ لے آؤ اور ہمیں کھانے دو۔ اور یہ شیطان ہی تھا جس نے مجھے اس کو یاد کرنا بھلا دیا اور اس نے عجیب طریقے سے دریا میں اپنا راستہ پایا، یعنی میرے لیے یقین کرنا مشکل تھا کہ تلی ہوئی مچھلی بندھے ہوئے تھی، اچانک وہ باہر نکل آئی اور اس نے نکل کر دریا میں سرنگ بنا دی، اس لیے میں ایسا نہ کر سکا۔ مجھے اپنے آپ پر افسوس ہوا کہ مجھ سے غلطی ہو گئی۔ موسیٰ نے کہا کہ ہمارا ارادہ یہی ہے، انہیں بتایا گیا کہ جب وہ دو دریاؤں کے سنگم پر پہنچیں گے تو مچھلی ایک سرنگ بنا کر دریا میں کود جائے گی، یہ اشارہ تھا۔ تو جیسے ہی شاگرد نے اسے یہ خبر سنائی تو اس نے کہا کہ یہ ہماری منزل تھی، اس کا مطلب ہے کہ ہم اپنی منزل پر پہنچ چکے ہیں، ہم جہاں جانا چاہتے تھے وہاں پہنچ گئے، وہ اپنے قدموں کی طرف دیکھتے ہوئے، یعنی فوراً بولا۔ ہم وہیں سے آئے ہیں، مجھے اس جگہ لے چلیں جہاں مچھلی کے ساتھ یہ معاملہ ہوا، ہم وہاں جائیں گے۔ وہ سنگم جسے عربی زبان میں دو دریاؤں کے ملنے کی جگہ کہا جاتا ہے مجمع البحرین میں کہتے ہیں کہ جب تم وہاں پہنچو تو سانس لینے کے لیے کسی چٹان کے پاس رک جاؤ، جیسا کہ مسافر عموماً کچھ دیر آرام کرتے ہیں، کہیں رک جاتے ہیں، پھر اپنا سفر جاری رکھتے ہیں، یعنی کچھ دیر ٹھہرنے کے بعد۔ جب اس نے دوبارہ سفر شروع کیا تو شاگرد مچھلی کو اپنے ساتھ لے جانا بھول گیا، تھوڑا آگے چلنے کے بعد اسے یاد آیا تو وہ اسے لینے کے لیے واپس لوٹ آیا، لیکن جب وہ وہاں پہنچا تو دیکھا کہ ناشتے کے لیے تلی ہوئی مچھلی مر کر پانی میں جا رہی تھی۔ یہ واقعہ اس قدر عجیب تھا کہ شاید اس کے شاگرد نے اس خوف سے حضرت موسیٰ کو نہیں بتایا کہ کہیں وہ اس کی بات پر یقین نہ کریں اور ناراض بھی ہو جائیں۔ یہ بھی ہو سکتا ہے، یعنی ایسا مشاہدہ جو کیا گیا تھا۔ موسیٰ علیہ السلام نے فوراً فیصلہ کر لیا کہ ہم واپس جائیں گے، چنانچہ وہ اپنے قدموں کو دیکھتے ہوئے واپس لوٹے، یعنی جس راستے پر وہ آئے تھے، جب وہ اس چٹان پر پہنچے تو اب کیا دیکھیں گے؟ انہیں وہاں ہمارا ایک نوکر ملا۔ قرآن کریم کہتا ہے کہ ہمارے بندوں میں اس بندے کا معاملہ یہ تھا کہ وہ سب سے زیادہ مہربان علماء تھے جن پر ہم نے خاص طور پر رحم کیا۔ میں نے وضاحت کی ہے کہ یہ شخص کون تھا، اگرچہ روایات میں اس کا نام آیا ہے، لیکن اردو میں اس کا تلفظ خضر کے نام سے جانا جاتا ہے۔ نام موجود ہے اور یہ عربی زبان میں صحیح تلفظ ہے۔ اس کے کچھ کام پہلے بیان کیے جا چکے ہیں۔ اسے بتایا گیا کہ وہ یہاں کسی سے ملے گا۔ یہی ان 
کی منزل ہے، چنانچہ وہ طالب علم کی بات سن کر پرجوش ہو گیا اور کہا کہ میں اسے ڈھونڈ رہا ہوں، اور وہ شخص مل گیا۔ ان سے معلوم ہوا کہ وہ غالباً کچھ فرشتے تھے یعنی اللہ تعالیٰ نے اپنے ایک فرشتے سے ملاقات کی جو انسانی شکل میں موسیٰ علیہ السلام سے ملے اور اللہ تعالیٰ نے انہیں اسی قسم کا کام سونپا۔ اس دنیا کی مذکورہ بالا ترتیب کے بارے میں جو اللہ تعالیٰ نے قائم کیا ہے، قرآن کریم نے ہمیں بتایا ہے کہ اس ترتیب میں فرشتوں کو مختلف ذمہ داریاں دی گئی ہیں اور وہ اللہ تعالیٰ کے لیے اپنی ذمہ داریاں ادا کرتے ہیں۔ جن باباؤں نے کام کیا ہے، یہ مادی دنیا ہمارے سامنے ہے، جس میں انسان کا سانچہ تیار ہوا ہے، وہ اس دنیا کو اپنے حواس سے سمجھ سکتا ہے، اس کے حواس بیرونی ہوں یا اندرونی، اس کا علم یہیں سے شروع ہوتا ہے۔ لیکن اس سے آگے بھی بہت سی حقیقتیں ہیں۔ اللہ تعالیٰ نے بتایا ہے کہ آپ کے علاوہ دو اور مخلوقات بھی ہیں جن کے پاس فرشتوں اور جنوں جیسی مرضی اور اختیار ہے اور ان میں سے بالخصوص فرشتوں کے بارے میں قرآن کہتا ہے: مجید نے نہ صرف تمام تفصیلات بیان کی ہیں بلکہ انہیں ہمارے ایمان میں شامل کیا ہے، یعنی ان پر ایمان لانا ضروری ہے، ایمان لانا ضروری ہے۔ اگر وہ ان کو انجام دے رہے ہیں تو اللہ نے کچھ دیر کے لیے پردہ اٹھایا ہے اور بتایا ہے کہ وہ کیسے کام کر رہے ہیں۔ اللہ تعالیٰ نے اپنے ایک فرشتے کو سیدنا موسیٰ علیہ السلام کی تربیت کے لیے بھیجا۔ جب وہ ان سے ملے تو موسیٰ علیہ السلام نے ان سے درخواست کی کہ آپ مجھے اس شرط پر اپنے پاس رہنے کی اجازت دیں کہ جو علم آپ کو دیا گیا ہے اس میں سے کچھ آپ مجھے سکھائیں گے۔ اس کا استعمال آپ سے اللہ تعالیٰ کی نسبت میں ہوا ہے۔ ظاہر ہے کہ وہ چلا گیا اس کائنات کے اسرار کا علم فرشتوں کو بھی دیا جاتا ہے جس طرح انبیاء کو بھی کچھ علم دیا جاتا ہے اسی طرح ان کو بھی دیا جاتا ہے اور ان کی ذمہ داریوں کی نوعیت بھی وہی ہے تو موسیٰ نے اس سے درخواست کی۔ کہ آپ مجھے کچھ سکھائیں، میں بھی اس علم کا کچھ حصہ حاصل کرنا چاہتا ہوں، میں بھی اسے سمجھنا چاہتا ہوں، چچا تسیبہ نے کہا، انہوں نے جواب دیا، یہ خواہش اچھی ہے، لیکن آپ مجھ سے صبر نہیں کر پائیں گے، آپ صبر کیوں نہیں کر سکتے؟ کیونکہ ہمارے ارد گرد اس دنیا کے قوانین بالکل مختلف ہیں۔ یہ دنیا آزمائش کے قانون پر قائم ہے۔ اس میں ہمیں اخلاقی شعور دیا گیا ہے۔ اسے برتر رکھیں۔ وہ اخلاقی شعور قدم قدم پر سوالات اٹھاتا ہے۔ انسان کو جو خاص امتیازات حاصل ہیں وہ تین ہیں۔ وہ ایک عقلی وجود ہے۔ اسے ایک غیر معمولی شعور دیا جاتا ہے۔ اس کے ذریعہ وہ غیر معمولی ایجادات کرنے کے قابل ہے اس کے ذریعہ وہ اپنے حواس کی صلاحیتوں کو بڑھانے کے قابل ہے۔ وہ حکم دیتا ہے کہ چیزیں صحیح ہیں، غلط ہیں، اخلاقی طور پر غلط ہیں، اور اسے جمالیات کا ایک حصہ دیا جاتا ہے۔ اگر وہ ناپسندیدگی کا اظہار کرے تو انسان ہر وقت ان تینوں صلاحیتوں کے ساتھ ہوتا ہے، وہ ان خصوصیات کو ترک نہیں کر سکتا۔ کہا جاتا ہے کہ جس میں فرشتے کام کر رہے ہیں، جس میں دنیا کی حکومت ہو رہی ہے، اس کی نوعیت بالکل مختلف ہے، اس کے قوانین مختلف ہیں، اس کے ضابطے مختلف ہیں، نہ یہ عقل اسے اس طرح حکم دے سکتی ہے اور نہ یہ ہمارے اخلاقی اصول۔ وہ تابع رہ سکتا ہے اور ہم اپنی جمالیات کی بنیاد پر اس کے بارے میں کوئی فیصلہ نہیں کر سکتے۔ وہ ہمارے وجود سے باہر کی دنیا ہے۔ اس کے قوانین مختلف ہیں، اس لیے اس کے سامنے رکھ کر آپ نے سیدنا موسیٰ علیہ السلام کو جواب دیا کہ میں آپ پر صبر نہیں کر سکتا اور اس کی وجہ وقوف تسبر مہتہ ہے اور جو آپ کے علم سے باہر ہے، آپ اس پر بھی صبر کیسے کر سکتے ہیں، گویا قرآن کریم نے مجھے بتایا ہے کہ مجھے معاف کر دیں۔ انسان کے پاس علم کا ایک شعبہ ہے۔ پہلی چیز جس کا تعین کیا جانا چاہئے وہ یہ ہے۔ فلسفہ کی ڈھائی سے تین ہزار سال کی تاریخ پر نظر ڈالیں، سائنس کی تاریخ دیکھیں تو سب سے بڑی غلطی یہ ہے کہ لوگوں کو اپنے علمی شعبے کا علم نہیں۔ یہ اس بات کا تعین نہیں کرتا کہ انسان کے علم کا دائرہ کیا ہے، اس کو سب سے بڑھ کر جو چیز عطا کی گئی ہے، جس کے ذریعے اس کا علم سے تعلق پیدا ہوتا ہے، وہ اس کے حواس ہیں، اس کے حواس خارجی بھی ہیں، اس کے حواس اندرونی بھی ہیں۔ اس 
کو جو حواس عطا کیے گئے ہیں ان کا تعلق ظاہری دنیا سے ہے، وہ کچھ دیکھتے ہیں، کچھ سنتے ہیں، اسی طرح وہ اپنے نفس کے اندر دیکھتے ہیں اور اس کے نفس کے اندر کے حالات سے واقف ہیں۔ میں محسوس کرتا ہوں کہ میں اپنے غم سے واقف ہوں میں اپنے جذبات کو دیکھ سکتا ہوں میں انہیں کسی کو نہیں دکھا سکتا اسی طرح میرے حواس باہر کی دنیا میں چیزوں کو دیکھتے ہیں تو میرے اندر ایک اضطراری علم ہے یعنی میرے اندر حقیقی معنوں میں علم۔ اس کا تعلق بیرونی دنیا سے ہے، اس لیے وہ چیزوں کو نام دیتا ہے، ان کے درمیان تفریق پیدا کرتا ہے، ان کو الگ کرتا ہے اور ان کی درجہ بندی کرتا ہے، اور جب وہ درجہ بندی کرتا ہے تو ان کے ضروری عقلی نتائج بھی نکالتا ہے۔ وہ دیکھتا ہے کہ انسان کا علم اپنا سفر جاری رکھے ہوئے ہے، اس لیے انسان کے علم کا دائرہ یہ ہے کہ وہ اس سے پیچھے نہیں رہ سکتا، وہ مطمئن نہیں ہوسکتا، نہ اس سے آگے بڑھ سکتا ہے، اس کے پیچھے کے معاملات اس کے علم سے باہر ہیں۔ اس سے آگے کے معاملات اس کے علم سے باہر ہیں، اس لیے جو اس کے علم میں نہیں ہے، جس کا احاطہ نہیں کر سکتا، وہ کہتے ہیں کہ صبر کیسے ہو گا، سوال کرے گا، یعنی اسے خاموش نہیں رکھا جا سکتا جب کہ اس کے پاس علم کے دائرے سے باہر کوئی چیز آئے گی تو وہ اسے آسانی سے قبول نہیں کرے گا، وہ تو دوسری دنیا کا آدمی ہے، اس لیے اللہ کے اس فرشتے نے اسے خبر دی کہ اگر تم انسان کے علم میں اتنا صبر نہ کر سکو گے، تو تم اس کے علم میں نہیں رہ سکتے۔ یہ اللہ کے رسولوں کے ساتھ بھی ہوتا ہے کہ وہ اپنے لیے بھی یہی مسئلہ پیدا کرتا ہے کہ یہ آپ کے لیے مشکل ہے۔ اس سے آگے آپ کے علم کا دائرہ۔ یہی ہو گا، تم صبر نہ کرو گے، سوال اٹھاؤ گے، ساتھ چلنا مشکل ہو جائے گا، انشاء اللہ۔ موسیٰ نے کہا انشاء اللہ آپ مجھے صابر پائیں گے۔ مجھے اجازت دیجئے اور میں کسی معاملے میں آپ کی نافرمانی نہیں کروں گا، یعنی آپ جو کہیں گے میں مانوں گا، لیکن ذرا حجاب اٹھا کر مجھے اپنے علم سے روشناس کرائیں۔ میں یہ علم سیکھنا چاہتا ہوں، میں جاننا چاہتا ہوں، میں نے عرض کیا ہے۔ قرآن نے جس تناظر میں یہ واقعہ بیان کیا ہے، اس میں یہ سکھایا جا رہا ہے کہ دنیا میں جو کچھ پیش کیا جاتا ہے اس کے پیچھے کیا حکمت کارفرما ہو سکتی ہے، کیونکہ دنیا کے معاملات میں خواہ وہ دین کی طرف دعوت کا معاملہ ہو، ہمارے سیاسی معاملات ہوں یا معاشی معاملات، صبر لازم ہے، اور صبر کے لیے اللہ تعالیٰ کی رضا پر لازم ہے۔ وہ ہے، یہ ان حواس کی گرفت میں آجاتا ہے اور یہ آپ کا علم ہے، لیکن اس کے پیچھے ایک دنیا ہے، یہ اپنا کام کر رہی ہے، تو انہوں نے وعدہ کیا۔ ایک بات قابل غور ہے کہ میرے ساتھ رہو، اس نے کہا، پھر اگر تم میرے ساتھ رہنا چاہتے ہو تو مجھ سے اس وقت تک کچھ مت پوچھو جب تک میں خود تم سے اس کا ذکر نہ کروں۔ دیکھتے رہیں کہ کیا ہوتا ہے اور اپنے سوالات پوچھتے رہیں۔ میں تمہیں ایک موقع دوں گا۔ میں آپ سے پوچھوں گا کہ آپ کیا جاننا چاہتے ہیں۔ تم یہ وعدے کرو گے، منتیں کی گئیں، آخرکار وہ دونوں چلے گئے، یعنی اللہ کا وہ بندہ اور موسیٰ علیہ السلام، دونوں چلے گئے، دن کے آخر تک، وہ دونوں چلے یہاں تک کہ ایک جگہ ایک کشتی میں سوار ہو گئے، یعنی وہ چلے گئے۔ ایک دریا اوپر آیا اور اسے عبور کرنے کے لیے کشتیاں تھیں۔ انہوں نے ایک کشتی کرایہ پر لی اور اس میں سوار ہو گئے۔ جیسے ہی وہ بیٹھ گئے، اس شخص نے جو ان کے ساتھ اللہ کا بندہ تھا، کشتی میں سوراخ کر دیا۔ انہوں نے اسے توڑ دیا، موسیٰ صبر نہ کر سکے، یعنی ظاہر ہے کشتی میں بیٹھے ہیں۔ کشتی میں سوراخ کرنا بہت خطرناک چیز ہے۔ کھڑکتا موسیٰ نے کہا کہ تم نے اس میں سوراخ کیا تاکہ تمام کشتی والوں کو غرق کردے، تم نے کیا کیا ؟‘‘ اس نے صرف سوال کیا لیکن احتجاج کیا اور جب آپ موسیٰ کا کردار قرآن میں اور بالخصوص بائبل میں دیکھتے ہیں تو یہ بات بالکل واضح ہوجاتی ہے کہ یہ ان کے لیے کوئی آسان کام نہیں تھا اور لگتا ہے کہ اللہ تعالیٰ نے انہیں قبر کی تعلیم کے لیے ان کے ساتھ بھیجا تھا۔ اس نے بیٹھ کر کیا کیا؟ اس نے بیٹھتے ہی کشتی میں سوراخ کیا۔ اب اس نے سوچا کہ لوگ اس طرح کے مسائل پیش کریں گے تو مجھے کیا ضرورت ہے؟ میں ان سے سوال کروں گا۔ پیچھے جو ہو رہا ہے ہم آپ کو دیں گے، اسی طرح سمجھیں۔ ایسی 
کوئی وجوہات نہیں ہیں جو بنیامین کو روک سکیں۔ جس طرح موسیٰ (علیہ السلام) کو یہاں ایک خاص صورت حال کا سامنا ہے، وہی صورت حال یوسف (علیہ السلام) کو بھی درپیش ہے۔ کوئی صورت حال نہیں ہے۔ اور اس میں کوئی غیر اخلاقی بات نہیں تھی، سب کچھ قاعدے کے مطابق ہوا، لیکن اس کے اسباب تھے جن کے نتیجے میں ان کا روکنا ممکن ہوا، چنانچہ وہاں بھی یہی سبق دیا گیا ہے کہ اللہ تعالیٰ کبھی کبھی مداخلت کرتا ہے، اس لیے ہم اسے اس طرح کہتے ہیں۔ پھر معاہدہ ہوا، پھر وہی ہوا۔ جو شخص کشتی کا مالک ہے وہ سمجھے گا کہ یہ کشتی ہے، اس میں تختیاں ہیں، یہ ٹوٹتی ہے، یہ ٹوٹتی ہے، یعنی اسے یہ معاملہ نظر نہیں آتا، یہ سب اس کے سامنے نہیں ہو رہا، یہ اسی طرح ہو رہا ہے جس طرح دنیا ہے۔ سیدنا موسیٰ علیہ السلام نے سخت احتجاج کیا اور کہا کہ آپ نے اس میں سوراخ کر دیا تاکہ تمام کشتی والے ڈوب جائیں۔ نہیں، اس نے سب کچھ کیا۔ اب اس نے جواب دیا، اس نے کہا، میں نے یہ نہیں کہا تھا کہ تم میرے ساتھ صبر نہیں کرو گے، یہ وہ مشکل ہے، جس کی وجہ سے میں نے تم سے کہا تھا کہ تم صبر نہیں کرو گے۔ اور یہ ہو گا۔ اب سیدنا موسیٰ کو احساس ہوا کہ یہ میری غلطی تھی۔ موسیٰ علیہ السلام نے کہا کہ جو کچھ میں بھول گیا ہوں اس پر مجھ سے مواخذہ نہ کرو اور میرے معاملے میں مجھ پر سختی نہ کرو ۔ اعتراض کیا تھا، اسی طرح عاجزی سے معافی مانگ لی، مجھ سے غلطی ہوئی، میں نے اس پر توجہ نہیں دی، جو بھول گیا، وہ دونوں طرف سے ہوتا ہے، قرآن پاک میں اس کے استعمال کی مثالیں موجود ہیں اور ہم اپنی زبان میں بھی اسی طرح استعمال کرتے ہیں۔ اس کا مطلب یہ ہے کہ کوئی چیز یاداشت سے نکل گئی، اور یہ بھی کہا جاتا ہے کہ وہ چیز حافظے میں تھی، لیکن اس قدر نیچے چلی گئی تھی کہ جب کوئی صورت پیدا ہوئی تو اس کا خیال تک نہ رہا، اس کے لیے وہی فعل استعمال ہوتا ہے۔ کہا جاتا ہے کہ جو میں بھول گیا ہوں اس پر مجھے مت پکڑو، ولا ترحق من امری اسرا اور میرے معاملے میں مجھ پر زیادہ سختی نہ کرو۔ جو بتایا گیا ہے، بتایا گیا ہے کہ تیرے لیے یہ حال ہے کہ ایک کشتی تیرے حواس کی گرفت میں آرہی ہے، وہ ایک دریا ہے، آپ نے اسے عبور کرنا ہے، کشتی کو اپنا بالغ مل گیا، آپ بیٹھ گئے، اب یہ آپ کا ہے۔ خواہش یہ ہونی چاہیے کہ کشتی بحفاظت پار ہو جائے۔ اگر بورڈ اچانک ٹوٹ جائے، چھید جائے تو یہ حادثہ ہے۔ ظاہر ہے کہ آدمی یہی کرے گا، کیا گردن اٹکانے سے نہیں ؟ کیا کر رہے ہو؟ یہ ہماری دنیا ہے، جس میں وہ چیزیں ہو رہی ہیں جو ہمارے حواس کی گرفت میں آتی ہیں یا ہمارے اخلاقی حادثے کی زد میں آتی ہیں۔ حکم دیا گیا ہے اور اس کے پیچھے اللہ تعالیٰ کی حکمتیں ہیں۔ اب اللہ تعالی ان کو واضح کر دے گا۔ یہ سلسلہ ابھی تک جاری ہے۔ ہمارا وقت ختم ہو گیا ہے۔ الحمدللہ الحمدللہ رب العالمین وصلی اللہ علیہ وسلم علی محمد الامین آواز باللہ من الشیطان الارجم بسم اللہ الرحمن الرحیم خواتین و حضرات سورہ کہف کی آیت نمبر 74 سے یہ سبق جنم لے رہا ہے کہ سیدنا معاویؓ کا واقعہ زیر بحث ہے۔ طالہ کیا ہم نے اس واقعے کا کچھ حصہ پڑھا، اس کا تعلق ایک غریب ملاح کی کشتی سے تھا۔ آپ نے ایسا کام کیا ہے کہ اس کے نتیجے میں کشتی کے تمام مسافر پریشان ہو جائیں گے۔ سیدنا موسیٰ علیہ السلام نے فرمایا کہ تم یہ بتانے اور بتانے کے لیے اس سفر پر جارہے ہو کہ اللہ تعالیٰ کی حکمت کیا کام کر رہی ہے، لہٰذا اس پر سوال نہ کرو، اس نے معذرت کی اور دونوں کو ساتھ چلنے کی اجازت دی، اب اس نے کہا کہ فان تلکا تو دونوں آگے بڑھے، یعنی سفر شروع ہوا، کیا وہ یہ ہوتا ہوا دیکھ رہے ہیں یا یہ ان کا براہ راست مشاہدہ ہے جو لڑکا اس کے پاس سے گزرتا ہوا دیکھ رہا تھا۔ اس نے اسے ہلاک کیا ، یہ معاملہ مزید ہلاک ہوگیا ، کسی کی جان کو جان سے مارا جانا چاہئے یہ کہا ہے کہ تم نے ایک آدمی کی کشتی کو چھیدا اب تم نے ایک اور کارنامہ انجام دیا ہے۔ آپ نے ایک معصوم لڑکے کو راستے میں قتل کر دیا ہے۔ اگر ہم اس کے بارے میں سوچیں تو یہی ہو رہا ہے، یعنی جب چیزیں ہمارے سامنے آتی ہیں تو دسیوں سوالات پیدا کر دیتی ہیں۔ اگر یہ چیزیں ہمارے سامنے لائی جائیں تو ہم اسی طرح پوچھیں گے کہ یہ جانور کس چیز سے بنا ہے؟ یہ کر لیں، بچوں کو جتنی تکلیف ہو رہی ہے ایسے کیسز ہوتے ہیں جن میں کوئی چیز 
بری لگتی ہے، اسے دیکھنے کے بعد انسان کے ذہن میں یہ وہ سوالات اٹھنے چاہئیں۔ اللہ یہ واضح کرتا ہے کہ وہ کہاں سے کام کر رہا ہے، وہ آپ کو بتاتا ہے کہ جس نے اس کائنات کی ترتیب کو پیدا کیا ہے اس کے معاملات اور آپ کے علم کا آپس میں کیا تعلق ہے۔ ایک سوال پیدا ہوتا ہے اور یہ جواب ہے۔ تعلیم ہی نہیں سارے راز کھلے ہیں۔ وہ نہیں کر رہا ہے۔ رشتے طے ہوتے ہیں اور آپ انہیں سمجھ نہیں سکتے لیکن اگر آپ کو اپنی حدود کا علم ہے تو آپ کم از کم اندازہ لگا سکتے ہیں کہ آپ کو رب کے بارے میں نیند نہیں آنی چاہیے۔ یہ ہے تعلیم اور نتائج کے لحاظ سے صبر۔ صبر اور حکمت سے کام لینا چاہیے۔ یہ اس دنیا کی سب سے بڑی نعمت ہے۔ اگر کوئی نعمت ہے تو دن رات اس کے لیے رب کا شکر ادا کرنا چاہیے۔ اس نے کہا کہ تم نے ایک بے گناہ کی جان لے لی حالانکہ اس نے کسی کا خون نہیں بہایا تھا، تم نے بہت برا کام کیا ہے، تمہیں ایسا ہی سوچنا چاہیے تھا، یہی سوال پس منظر میں پوچھنا چاہیے تھا، انہوں نے جس پر اعتراض کیا، وہی اعتراض ہر ذہین آدمی اٹھائے گا۔ جس دنیا میں ہم رہتے ہیں، ہمارا حادثہ ان چیزوں کا حکم دیتا ہے جو اخلاقی ہیں۔ وہ علم کے ساتھ چیزوں کے بارے میں جاننا چاہتا ہے، وہ حکم دینا چاہتا ہے، وہ فیصلہ دینا چاہتا ہے، اگر وہ اپنے علم کی حدود میں چیزوں کے بارے میں سوچتا ہے تو وہ بھی پوری دنیا ہے۔ یہ وہی دنیا ہے جس میں سائنسی ذہن نے بہت سی ایجادات کی ہیں۔ ایسی سہولتیں پیدا کر دی ہیں کہ بجائے خود عجائبات کا کارخانہ بن گئے ہیں اور جس طرح انسان کی بہتری کے آلات ہیں اسی طرح غور و فکر کے بھی بڑے اوزار ہیں۔ اپنے علم اور اپنی ذہانت اور اپنی صلاحیتوں کو استعمال کرتے ہوئے، سمجھنے کی اہم بات یہ ہے کہ جب بھی ہم کسی چیز کے بارے میں سوچتے ہیں، جب ہم کسی چیز کے بارے میں رائے قائم کرتے ہیں۔ اگر آپ کسی معاملے میں فیصلہ دیں تو پہلی بحث علم کی بحث ہے، باقی اس کے بعد۔ میرا علم کہاں سے شروع ہوتا ہے، میرا علم کہاں ختم ہوتا ہے؟ جس پہلو سے آپ اپنے وجود کو تحلیل کرتے ہیں وہ یہ ہے کہ ہمارا علم ہماری مابعد الطبیعاتی معلومات سے شروع ہوتا ہے جو ہمارے اندر ہے، ہمارے حواس بھی ہمارے باطن سے رابطہ پیدا کرتے ہیں اور ان کی کچھ حدود ہوتی ہیں۔ حواس ہمیں بیرونی دنیا یا بیرونی دنیا سے جوڑتے ہیں۔ ان کی بھی کچھ حدود ہیں۔ کچھ چیزیں معلوم ہوئی ہیں اور اگر ہم ان کے اندر جھانکیں تو وہ حواس کی صلاحیت میں اضافے کا نتیجہ ہیں۔ یہ عقلی سوالات بھی پیدا کرتا ہے اور کچھ عقلی نتائج ہمارے سامنے رکھتا ہے، ان میں سے کچھ فرض کے درجے میں ہیں، وہ ناگزیر ہیں، ان سے بچنے کا کوئی راستہ نہیں ہے، اور کچھ نتائج کا امکان ہے۔ یہ بھی ممکن ہے، یہ بھی ممکن ہے، یہ بھی ممکن ہے کہ ہم انہیں جیسے ہیں پیش کریں اور پھر ان کی تحقیق کریں یہاں تک کہ کوئی چیز ہمارے تجربے اور مشاہدے میں اس طرح آجائے کہ ہمیں اس کے بارے میں کچھ معلوم نہ ہو۔ نتیجہ اخذ کرنے کے لیے، یہ ہمارا علم دراصل کام کرنے کا طریقہ ہے، اور یہ ایک ایسی دنیا تخلیق کرتا ہے جس کے نتیجے میں ہم آگے بڑھتے ہیں۔ پھر وہ اپنے تخیلات کو استعمال کرتے ہیں اور انہیں امکانات کے طور پر پیش کرنے کے بجائے انہیں حقائق کے طور پر پیش کرنا شروع کر دیتے ہیں۔ دوسری بات یہ ہے کہ وہ اس کے پیچھے جا کر اپنے ہی حواس کی دنیا کو چیلنج کرنے لگتے ہیں۔ جو کچھ ہم سن رہے ہیں وہ حقیقی ہے، حقیقی ہے یا غیر حقیقی، ہمارا علم وہیں سے شروع ہوتا ہے، ہمارے پاس اس کے پیچھے جانے کا کوئی ذریعہ نہیں ہوتا، اس لیے باری وہاں پہنچ جاتی ہے، اور پھر ہمارا اپنا وجود بھی تحقیق کا ہدف ہوتا ہے۔ ہو جاتا ہے اور بندہ یہاں کھڑا کہتا ہے کہ میرا خیال ہے اس لیے میں ہوں، اس سے آگے جانا ممکن نہیں، اس لیے اللہ تعالیٰ نے ہمیں جو تعلیم دی ہے وہ یہ ہے کہ ہم اپنے علم پر اپنے حواس پر بھروسہ کرنا سیکھیں۔ دنیا اللہ رب العزت نے بنائی ہے، اگر ہم اس علم کے ذرائع پر بھروسہ کریں جو اس نے ہمیں عطا کیے ہیں تو اس سے کچھ نہ کچھ بڑھے گا۔ یہ اعتماد اسی سطح کا ہونا چاہیے جس حد تک ہم ان حواس اور اپنے اندر کے اضطراری علم سے ہم آہنگ ہیں۔ معلومات سے رشتہ پیدا ہوتا ہے اور علم کا عمل شروع ہوتا ہے۔ اعتراض کر رہے ہیں اور ان پر اعتراض ہونا چاہیے، پھر یہاں بھی انہوں نے 
بالکل بجا طور پر اعتراض کیا ہے کہ جب اخلاقی طور پر یہ طے ہو جائے کہ انسان کی جان کی حرمت ہے تو کسی انسانی جان کو بلا وجہ نہیں لیا جا سکتا۔ اگر اللہ تعالیٰ نے جان لینے کا حق اپنی کتابوں میں دیا ہے یا اپنی شریعت میں جان لینے کا حق دیا ہے تو اس کی حدیں بالکل ٹھیک بیان فرما دی ہیں۔ انہوں نے بتایا کہ کسی کی جان اس وقت تک نہیں لی جاسکتی جب تک کہ وہ کسی اور کی جان نہ لے، یا اگر وہ لوگوں کی جان و مال لے کر مصیبت پیدا کرے، یہ وہ دو صورتیں ہیں جن میں جب کوئی سڑک پر چلتے لڑکے کی جان لے لے تو کوئی وجہ نہیں تھی، نہ اس لڑکے نے کسی کو مارا تھا، نہ اس نے کوئی ہنگامہ کیا تھا، نہ ہی لوگوں کی جان کو خطرہ تھا۔ ہوا تو پھر جان کیوں لی؟ ظاہر ہے، اس سوال کو واپس لے لیں۔ اللہ نے ہمیں کاغذ کا ایک ٹکڑا دیا ہے۔ وہ ہمارے ساتھ کھیل رہا تھا۔ ہمارے درمیان بہت پیار کا رشتہ تھا۔ اگر کوئی قتل کرنے والا ہے تو وہ اعتراض کر رہا ہے، لیکن اس قتل کے بارے میں سوچیں تو یہ روزانہ ہو رہا ہے، پریشانیاں ہیں، ہم ان کے سبب اور اثر کے درمیان تعلق قائم نہیں کر پا رہے، مصیبتیں آسمان سے برس رہی ہیں۔ وہ اپنی وجہ کا تعین نہیں کر سکتے، یعنی اخلاقی سوالات خارجی حالات کی بنیاد پر پیدا ہوتے ہیں، اس کی تعریف برائی کا مسئلہ ہے، یعنی برائی کے اسباب معلوم نہیں ہوتے۔ کہا جا رہا ہے کہ اس کے پیچھے کسی عالم اور عقلمند کی تدبیر ہے، اس سکیم کو پوری طرح سمجھنا کسی کے بس میں نہیں، البتہ کچھ حجاب اٹھا کر یہ کہا جا سکتا ہے کہ کن چیزوں کا اندازہ ہو سکتا ہے، یعنی گویا اس نے سوچا کہ نظر کا ایک دانہ ہے جو ذائقے کے لیے دیا گیا ہے اور پھر کہا کہ اب یہ بات کیوں سمجھے یا اس کے علم پر آدمی کا قیاس کیوں ہے۔ اگر ایسا ہے تو وہ عقلی استدلال کے ذریعے اسے کلی میں بدل دیتا ہے۔ یہ سفر بتانے اور سمجھانے کے لیے کیا جا رہا ہے۔ اس نے کہا میں نے تمہیں نہیں کہا تھا کہ تم مجھ سے صبر نہیں کرو گے میرا مطلب ہے یہ دیکھو وہ بار بار جملہ دہرا رہے ہیں مجھ سے صبر مت کرو۔ آپ سیکھیں گے، کیونکہ جس دنیا میں میں کام کر رہا ہوں، وہ آپ کے علم سے نہیں پکڑ سکتا، اس میں کیا ہو رہا ہے، کیسے ہو رہا ہے، کن اصولوں کے مطابق ہو رہا ہے، یہ آپ کی سمجھ سے باہر ہے، اس لیے آپ صبر نہیں کر رہے۔ آپ اسے کرنے کے قابل ہو جائیں گے. 
یہ صرف زجر کا جملہ نہیں ہے یعنی کوئی تنبیہ نہیں ہے۔ وہ ایک سچی کہانی کیا سنا رہے ہیں کہ یہ کوئی آسان کام نہیں ہے۔ اسی لیے میں نے تم سے کہا تھا کہ ایمان لاؤ جب اللہ نے تمہیں بتایا کہ میرے گزرنے کے بعد تمہیں کچھ علم ملے گا، اس لیے اس علم کے لیے صبر ضروری ہے۔ اس نے کہا میں نے تم سے یہ نہیں کہا تھا کہ تم میرے ساتھ صبر نہ کرو۔ آپ صبر کریں گے۔ بعد میں صاحب موسیٰ علیہ السلام کو تنبیہ کی گئی۔ اس نے سمجھا کہ یہ میرا معاملہ ہے اور میں صبر نہیں کر پا رہا، اس لیے اسے بھی درد ہے اور اگر وہ یہ کہہ رہے ہیں تو اسے بھی تکلیف ہے۔ اس نے کہا، ٹھیک ہے۔ اس کے بعد موسیٰ علیہ السلام نے فرمایا کہ اگر میں تم سے کچھ پوچھوں تو تم مجھے بھر دو، یعنی جتنا علم مجھے درکار ہے اتنا ہی میں نے حاصل کر لیا ہے۔ تو اس کے بعد یہ صحبت بھی ختم ہو جائے گی، اگر میں تم سے کچھ پوچھوں تو مجھے اپنے ساتھ نہ رکھنا۔ جتنا آپ نے مجھے بتانا تھا، میں نے وہی سیکھا جو مجھے سیکھنا تھا۔ اب اس سفر میں اگر میں مزید صبر نہ کروں تو ٹھیک ہے۔ آپ کا عذر بھی ٹھیک ہے اور مجھے بھی اب جانا چاہیے۔ میں نے کر دیا اور یہ بھی کہا کہ تم میری طرف سے عذر کی حد کو پہنچ گئے ہو، یعنی اب تم سے کوئی شکایت نہیں رہے گی، اب تم سے کوئی شکایت نہیں ہو گی، چیزیں واضح ہو گئی ہیں۔ یہ ہو گیا، چلو بات کرتے ہیں، دونوں آگے بڑھے، کہنے لگے اچھا ہے، اگر تم جانے کے لیے تیار ہو تو چلتے ہیں، لیکن واضح ہوا کہ یہ آخری موقع ہے، اس کے بعد دونوں الگ ہو جائیں گے۔ دونوں آگے بڑھے، اسی پر ایک اور واقعہ پیش آیا، دونوں آگے بڑھے، یہاں تک کہ جب ایک گاؤں پہنچے تو وہاں کے لوگوں سے کہا کہ انہیں کھانا کھلاؤ۔ یہ ایک عام روایت تھی جب لوگ سفر کر رہے تھے۔ کہا گیا کہ وہ یہیں اس ہوٹل میں ٹھہریں گے اور یہاں سے کھانا خریدیں گے۔ اس زمانے کی روایت یہ تھی کہ اگر مسافر ہوتے تو بستی کے لوگ ان کے لیے کھانے کا انتظام کرتے، یعنی ایک جگہ سے دوسری بستی جاتے۔ وہ تیسری بستی میں اسی طرح گئے جس طرح لوگ سفر کرتے تھے۔ انہوں نے اپنے ساتھ تھوڑا سا اور سفر کیا ہے، چلے گا، ورنہ یہ ہے کہ وہ بستی کی مسجد میں گئے جہاں وہ اترے اور سو گئے یا اس دوران کسی اور جگہ پہنچ گئے۔ وہ گاؤں کے باہر حلقے بنا کر بیٹھ جاتے تھے۔ لوگ دیکھتے کہ شام کا وقت ہے، کھانے کا وقت ہے اور کوئی مسافر ہے جو رات گزارنے جا رہا ہے۔ میں نے جا کر وہاں کے لوگوں سے درخواست کی کہ ہم مسافر ہیں اگر وہ ہمیں کھانا دے دیں لیکن انہوں نے ان کی میزبانی کرنے سے انکار کر دیا۔ وہ اتنے اچھے لوگ تھے ورنہ عام طور پر ایسا نہیں ہوتا۔ میں نے گزارش کی ہے کہ اگر واقعہ کے نقطۂ نظر سے دیکھیں تو بھی ایک تاثر پیدا ہوتا ہے اور اگر خواب کی نگاہ سے دیکھیں تو معاملات بس اسی طرح چل رہے ہیں۔ کہنے لگے لیکن انہوں نے میزبانی سے انکار کر دیا، اب یہ سلوک بستی میں ہوا۔ وہ بستی چھوڑ کر جا رہے تھے۔ پھر انہوں نے وہاں ایک دیوار دیکھی۔ گرہ مطلوب تھی، مطلب معلوم تھا کہ دیوار ابھی گری یا کل، یہی حال تھا۔ جو ان کے ساتھ تھے انہوں نے گرتی ہوئی دیوار کو دیکھا تو اسے دوبارہ بنانا شروع کر دیا۔ فقامہ کا مطلب ہے دوبارہ تعمیر کرنا یا اس کی تائید کرنا۔ دیا، وہ دیوار کو گرانا چاہتی تھی، لیکن اب انہوں نے اسے ٹھیک کر دیا۔ موسیٰ نے کہا کہ آپ کو گاؤں والوں سے اس پر بحث کرنی چاہیے تھی، یعنی یہاں صورتحال یہ ہے کہ کھانے کو کچھ نہیں۔ وہ جاری ہیں، انہوں نے آپ کو کھانا بھی نہیں دیا۔ اگر آپ دیوار کو ٹھیک کرنا چاہتے ہیں تو آپ گاؤں والوں سے کہہ دیں کہ آپ ہمیں مہمان بنا کر کھانا نہیں دے رہے ہیں تو ہم آپ کی دیوار کو انعام کے طور پر ٹھیک کر دیں گے۔ اگر آپ ہمیں کھانا دیتے تو یہ معقول بات ہوتی۔ ہمارے یہاں حالات خراب ہوتے جا رہے ہیں۔ موسیٰ نے دیوار بنائی اور کہا اگر تم چاہتے تو اس پر مزدوری رکھ سکتے تھے یعنی اتنا غریب گاؤں تمہاری میزبانی کے لیے تیار نہیں ہے تم نیکی کرنے کا سوچ رہے ہو نیکی کے مواقع موجود ہیں اس وقت ہم بھوکے ہیں۔ ان کے لیے کچھ ہونا چاہیے تھا۔ اگر وہ میزبانی کے لیے تیار نہ ہوتے تو مزدوری لیتے۔ آپ چاہتے تو اس کے لیے مشقت لے سکتے تھے۔ انہوں نے ہماری درخواست پر ہمیں کھانا کھلانے سے انکار کر دیا، لیکن آپ نے اس کے باوجود ان کی دیوار کو 
ٹھیک کر دیا۔ تم نے ان کمینوں کے لیے یہ محنت کیوں برداشت کی ؟ وہ اس کے مستحق نہیں تھے۔ آپ ان سے کچھ مزدوری لے سکتے تھے، جس سے ہم کھانا خرید لیتے۔ یعنی اگر وہ میزبانی کے لیے تیار نہ ہوتے اور مہمانوں کو مہمان ماننے کے لیے تیار نہ ہوتے تو ہم پیسے دے کر ان سے کھانا لے لیتے، لیکن یہ کام آپ نے کیا۔ اب جب اس نے یہ اعتراض اٹھایا تو فرمایا کہ یہ تمہارے اور میرے درمیان جدائی ہے اور وہ نیکی برابر نہیں ہے، یعنی جیسا کہ ہم نے ماضی میں دیکھا کہ برائی آجاتی ہے، کسی غریب کی کشتی ٹوٹ جاتی ہے، آدمی کا جوان لڑکا چلا جاتا ہے، یہ وہ چیزیں ہیں کہ ایک علم اس طرح زندگی میں سوال اٹھتے ہیں، اسی طرح کبھی کبھی نیکی، نیکی، ثمرات نہیں ہوتے۔ وہ اخلاقی حدود کی پرواہ نہیں کرتے، لیکن احسانات ہوتے رہتے ہیں، یعنی ان کی دیواریں پختہ ہوتی رہتی ہیں، تو یہ وہ تصویر ہے جو اس دنیا میں آئے روز سامنے آتی ہے، اس شکل میں سامنے آئی کہ انہوں نے اس کی دیوار کو ٹھیک کیا۔ اب اس نے کہا کہ یہ جدائی بینک اب ہمارے لیے الگ ہونے کا موقع ہے، اب ہمیں الگ ہونا چاہیے، لیکن صبر سے الگ ہونے سے پہلے اب میں تمہیں ان باتوں کی حقیقت بتاؤں گا جو تم برداشت نہیں کر سکتے۔ جواب نہیں دیا تھا، واقعات ہوتے رہے اور تینوں یکے بعد دیگرے وقوع پذیر ہوتے رہے اور میں نے عرض کیا کہ یہ تینوں نمائندہ واقعات ہیں ان سوالات کے لحاظ سے جو ہماری دنیاوی زندگی میں پیدا ہوتے ہیں، یعنی برائی پیدا ہوتی ہے اور وجہ سمجھ میں نہیں آتی۔ اچھائی آتی ہے اور ان تک پہنچتی ہے جو اس کے مستحق نہیں کیونکہ وہ وجہ نہیں سمجھتے۔ اس نے کوئی جواب نہیں دیا لیکن اب جب وہ الگ ہو رہے ہیں اور اسی لیے انہیں بھیجا تو اس نے کہا کہ اب میں ان باتوں کی حقیقت بتاؤں گا۔ ہوا یہ کہ حضرت موسیٰ کی شہادت تو ختم ہو گئی لیکن جس مقصد کے لیے یہ سفر کیا گیا وہ بھی حاصل ہو گیا۔ اللہ نے جس سکیم کے تحت یہ سفر کیا تھا وہ پورا ہوا۔ یہ بھی معلوم ہوا کہ اس دنیا میں ہم سے جو چیز مطلوب ہے وہ اللہ کی معرفت پر ایمان کا تقاضا ہے، اللہ کی حکمت پر ایمان کا تقاضا، اللہ کے فیصلوں پر بھروسہ کا تقاضا، اللہ تعالیٰ کے پیدا کردہ نتائج پر بھروسہ کا تقاضا ہے۔ اس کا تقاضا یہ ہے کہ وہ تمام تقاضے جو ہم سے بنائے گئے ہیں، جو ہمارے دین نے بنائے ہیں، جن پر اللہ تعالیٰ قائم ہے، ہر ایک کے وہ تمام تقاضے ہیں، جن کا اندازہ اس سطح پر ہو کہ وہ جس سطح پر اٹھتے ہیں، اس کا مطلب یہ ہے کہ ہر چیز پر سوالات ختم نہیں ہوتے۔ جواب نہیں ملے گا۔ اللہ تعالیٰ چند چیزیں ہمارے سامنے رکھ کر ہمیں امتحان میں ڈال رہا ہے۔ امتحان یہ ہے کہ ہم اپنی عقل کو استعمال کریں اور ان چند چیزوں پر قیاس کریں جو ہم نے باقی کے بارے میں سمجھی ہیں۔ اپنے رب پر بھروسہ رکھیں۔ یہ ایک امتحان ہے۔ پھر وہ ہمیں اس کشتی کے بارے میں کیا بتائیں؟ وہ ایک ایک کر کے لے گئے۔ اس کشتی کا مسئلہ یہ ہے کہ یہ کچھ غریب لوگوں کی تھی۔ وہ مزدوری کرتے تھے، یعنی کشتیاں اسی طرح چلاتے تھے اور اپنی محنت سے کچھ حاصل کرتے تھے، اس لیے میں نے اسے عیب دار بنانا چاہا، یعنی میں نے اس کے بارے میں یہ فیصلہ کیا۔ اسے عیب دار بنانے کا مطلب یہ ہے کہ کشتی عام استعمال کے لیے موزوں نہیں ہے۔ میں نے اس میں ایک بورڈ توڑ دیا۔ آخر میں کیا وہ مجھے بتاتے ہیں کہ میں نے اس میں اپنی رضامندی سے کچھ نہیں کیا، میں نے اللہ کے حکم سے کیا، اگلی وضاحت یہ ہے کہ انہوں نے اللہ کے حکم سے کیا، لیکن یہاں اس کے مقابلے میں یہ ان کی طرف ہے۔ قیاس کیا گیا ہے کہ حکم غالباً صرف بادشاہ کے غضب سے کشتی کو بچانے کے لیے تھا، اس لیے اس نے خود فیصلہ کیا کہ اس کے لیے کیا کیا جائے، اس لیے اسے بھی اپنی طرف منسوب کیا، یعنی فرشتوں کو ہدایت دینے والا اللہ۔ اور کہتے ہیں کہ فلاں فلاں کام ہونا چاہیے، پھر پورا پلان پہلے بتانے کی ضرورت نہیں۔ وہ ذہین مخلوق ہیں، وہ اپنے فیصلے خود کرتے ہیں۔ چنانچہ یہاں اوپر سے صرف حکم آیا کہ کشتی کو بادشاہ کے قبضے سے بچایا جائے، یہ کشتی ان غریبوں کے پاس رہے کیونکہ ان کی روزمرہ کی زندگی اسی پر منحصر ہے، یہ بادشاہ کے پاس گئی، اس لیے یہ معلوم نہیں کہ کب واپس آئے گی۔ اس میں یہ غریب لوگ بھوک سے مر جائیں گے، اس لیے آپ اس کشتی کو بچا لیں، تو وہ کہتے ہیں کہ میں نے اس کو 
توڑنے کا منصوبہ بنایا تھا، اب یہ جائے گی، کشتی پکڑنے والے وہاں آئیں گے، وہ دیکھیں گے کہ ظاہر ہے کہ جو کشتیاں کام کرنے کے قابل ہوں گی اسے لے جائیں گے اور چھوڑ دیں گے، اس لیے یہ منصوبہ ہے جو میں نے اختیار کیا ہے۔ اس کے پیچھے یہی حکمت تھی۔ یہ وہی ہے جو انہوں نے پہلی چیز کے بارے میں بتایا۔ مزید یہ کہ اس میں کچھ اور حقائق بھی بیان کیے ہیں اور پھر ظاہر کیے ہیں جن میں سے ایک چیز کو سمجھنا ہے اور وہ ہے علم کے امتحان میں کامیابی کے لیے ہماری ضرورت جو ہمارے سامنے رکھی گئی ہے۔ وقت ختم ہو گیا اور کلام یہ ہے، اور خدا نے مجھے معاف کر دیا۔ ہم آپ کو خوش آمدید کہتے ہیں۔ الحمدللہ ، الحمدللہ، رب العالمین وعلیکم السلام۔ محمد الامین، فوز باللہ، من من الشیطان ، تمام غضب، بسم اللہ، رحمن الرحیم۔ خواتین و حضرات، ہم سورہ کہف کی آیت 79 سے ایک سبق تخلیق کر رہے ہیں۔ یہ واقعہ قرآن میں کیوں بیان ہوا، اس کا سیاق و سباق کیا ہے ؟ اور انہوں نے تینوں سے سوال کیا، وہ اپنے وعدے کے مطابق صبر نہ کر سکے، وہ بار بار ایک ہی بات کہتے، یہ کیوں ہوا، یہ کیسے ہوا، اگر آپ غور کریں تو انہیں یہی کرنا چاہیے تھا۔ ہم عقلی مخلوق ہیں، ہمارے اندر اخلاقی جز ہے۔ اللہ رب العزت نے ہمیں حسن کا ایک حصہ دیا ہے۔ یہ ممکن نہیں کہ انسان کے سامنے بہت بدصورت چیز کو خوبصورت کہا جائے اور وہ سوال نہ کرے۔ یہ ممکن نہیں کہ جھوٹ اور بے ایمانی ہو۔ یہ ممکن نہیں کہ کسی بے گناہ کو قتل کیا جائے اور وہ اس پر اعتراض نہیں کرتا۔ عقلی ہستی اپنے ظہور سے باز نہیں رہ سکتی۔ وہ وہی کرے گا۔ چنانچہ سیدنا موسیٰ نے بار بار سوال کیا۔ یہاں تک کہ موقع پیدا ہوا کہ ان کے سامنے یہ بات رکھ دی گئی کہ یہ جدائی اب جدائی کا موقع ہے، تو میرے سارے کام ایک جیسے ہوں گے، ان میں ظاہر کچھ ہوگا اور باطن کچھ اور ہوگا۔ اللہ رب العزت نے دنیا کو پیدا کیا ہے اور ہم پر ظاہر کرنے کے لیے اس اسکیم میں کام کیا ہے، قانون ہمارے لیے اخلاقی اصول ہیں جو متعین کیے گئے ہیں اور یہ فتنہ کے لیے لازمی تقاضا ہے۔ اللہ تعالیٰ اپنے احکام اور طریقوں کے مطابق کام کرتا ہے اور اس کے فرشتے اس کے حکم پر چلتے ہیں۔ سوال ہوں گے تو اللہ کے فرشتے نے جس سے ملاقات کی اس نے صاف کہہ دیا کہ مجھے بھی ایسے ہی کام کرنے ہیں، ہر قدم پر وہی سوال ہوں گے۔ مجھے کچھ سمجھانا تھا، میں نے سمجھا دیا، اب اس کے بارے میں اندازہ لگا لو اور باقی تمام معاملات کو خود سمجھ لو، اب تمہارے اور میرے درمیان جدائی کا وقت آگیا ہے، اس لیے بہتر ہے کہ ہم ابھی الگ ہو جائیں، لیکن جدائی سے پہلے میں تمہیں ان سب باتوں کے بارے میں بتاؤں گا۔ میں آپ کو وہ سچ بتاتا ہوں جس پر آپ صبر نہ کر سکے۔ یہ وہ پس منظر ہے جس میں انہوں نے ہر کیس کو لے کر سچ کہا ہے۔ سب سے پہلے انہوں نے کشتی کا معاملہ لیا۔ اس کا اہتمام کیا گیا کہ وہ کہتے ہیں اما سفینہ فقانت مسکین، اس کشتی کا معاملہ یہ ہے کہ یہ کچھ غریبوں کی تھی، یعنی تم نے اللہ کو دیکھا، مسافروں کو دیکھا، لیکن ہمیں معلوم تھا کہ یہ غریبوں کی کشتی ہے، یہ یملون کیا کر رہے ہیں۔ سمندر میں جو لوگ دریا میں محنت مزدوری کرتے تھے، یعنی کشتیاں چلاتے تھے، مسافروں کو لاتے تھے، وہ چار پیسے کما لیتے تھے، یہی ان کا رزق تھا، یہی ان کا روزگار تھا۔ میں نے اس کی وضاحت کی ہے، یعنی میں اپنی ایک جنگی مہم کے لیے زبردستی اس پر قبضہ کر رہا تھا۔ ہمارے زمانے میں ایسے واقعات ہوتے تھے کہ شہر بھر سے اچھی اچھی گاڑیاں اس لیے لی جاتی تھیں کہ ایک بڑی کانفرنس ہو رہی تھی جس میں بادشاہ اور حکمراں موجود تھے۔ حکومت کو لاہور میں اس کی ضرورت تھی۔ مجھے معلوم ہے کہ کچھ گھر اسی طرح خالی کرائے گئے تھے اور بڑے لوگوں کے گھر تھے، انہیں دوسری جگہ جانے کو کہا گیا تھا اور وہ گھر گیسٹ ہاؤسز میں تبدیل ہو گئے تھے۔ مہم کے لیے تمام کشتیاں ضبط کی جائیں۔ ایسی ضرورت کا اندازہ ہی لگایا جا سکتا ہے۔ اس کے پیش نظر کشتیوں کو زبردستی لے جایا جا رہا تھا یعنی جہاں کشتی کو پہنچنا تھا وہاں بادشاہ کے اہلکار کھڑے تھے اور وہ کشتیاں ضبط کر لی گئیں۔ وہ کہہ رہے تھے کہ یہ حالت دیکھ کر اللہ تعالیٰ نے حکم دیا کہ کشتی کو بچایا جائے، چنانچہ وہ کہتے ہیں کہ میں نے جو طریقہ اختیار کیا وہ یہ ہے کہ میں نے کشتی کو ناکارہ بنا دیا۔ 
The plan I was to carry out was what had been commanded, so I resolved to make it defective: “I wished to damage it, for beyond them there was a king who was seizing every boat by force.” If you had looked at it, you would have seen, outwardly, that the boat was damaged and a plank was broken, so your objection was entirely reasonable; the fear that the boat might founder was perfectly valid, for water would get in, and if the boat sank the passengers would suffer that fate too, and if a poor man's boat is damaged it cannot work and must be repaired. But precisely because of that fault, the king's men could not carry it off by force, and a small fault in a boat can be repaired and put right. Taking everything into account, this is the method we adopted. So he explained the first matter, and in doing so showed that, just as things happen outwardly, there is an inner side to them: why does Allah allow an accident to happen, why does something get in the way, what are these things? Then the boy. The boy who was killed: his parents were both believers; that is, the boy's parents had made good use of the respite Allah gives to human beings and had adopted a life of faith and prayer. “Believers” is used here as a comprehensive description. The boy was still a boy, his parents were both believers, “and we feared that he would grow up and oppress them with rebellion and disbelief.” “We feared” means that he had reached the age at which the signs had begun to show: there was rebellion in him, he did not look at things rightly, and the light Allah had placed in his nature he was set on extinguishing. When this had become clear, then, for fear that on growing up he would become a torment to his parents through disobedience and disbelief, the decision was made. In the matter of children, whom the Qur'an itself has called a trial, people make great mistakes; we know how the love of children, of a wife, of dear ones bends certain decisions, so that a person wants, above all, that his children be safe and well no matter what they become. But Allah, looking at the future, decided to save the parents from his disobedience and disbelief: this boy was not to remain alive. So the boy died; that is, Allah brought about his death. He says: we decided that he should live no longer. So, he says, the real matter in the boy's case was this, and this is how it was put before him: “we feared that he would become a disbeliever.” I have written a note on this. Here too the situation is as described above: he was given only the instruction that this boy would become a disbeliever and was therefore to be killed; the directive was only that much. From it he inferred that Allah was showing favor to the boy's parents, for it is not the way of this world that everyone who is going to become a disbeliever is killed; so he understood that the parents must be people of such goodness, believers for whose sake this was being done and for whom Allah intends good.
It is not the general rule that whoever is going to disbelieve is killed. The question is: who is affected by his disbelief, and who by his disobedience? If it affects other people, what is the state of those other people? Are they believers? Do they have a real relationship with Allah? Allah looks at the whole situation and decides whether or not to place them in that trial. So he reasoned thus: Allah has given this command; thinking it over, he concluded that Allah was showing favor to the parents, so that when the boy grew up he would not wound them with his disobedience and disbelief, and that is why the word “khashina,” “we feared,” is used. As for why he uses the plural, “we feared” rather than “I feared,” the reason is that he speaks here as a representative of the whole company of workers appointed to the tasks of decree and destiny; they do this work, and Moses was watching one of Allah's servants carrying out work of exactly this kind. He was referring to his own order, his own company, much as officers charged with enforcing a ruling speak on such occasions; so he spoke with his group in view. This use of the plural is common in our own language too; we do the same. So it is as if he said: Allah wishes to save the boy's parents from his disbelief. What good lay in it? They carried out Allah's command, and so it came about. Then he says, “fa-aradna an yubdilahuma rabbuhuma khayran minhu zakatan wa-aqraba ruhma”: so we wished that their Lord give them in exchange a child better than him in purity and closer in tenderness and love. That is: this one is gone, but what we now have in view is that the Lord should have mercy on these parents and not leave them without offspring; a good child will now be born to them. Allah will grant it; if He wills, it will be. After mentioning all these aspects, he said: here too you should learn the lesson that what happens outwardly is, inwardly, being worked out from entirely different considerations; what Allah's wisdom is doing within is not laid open before man. He gives children, and then what happens? One child leaves its mother's womb before it has even opened its eyes; another dies in childhood; another studies, writes, plays, leaps about before its parents, the coolness of their eyes, and Allah takes it. He said that all of this happens under some wise purpose. We have not been told the appointed time of death; human beings are not informed that they will leave this world at such-and-such an age and that death will not come before then. Death comes at every stage, and when it comes there is a purpose before Allah, for whoever is to die will die. One person has children, another a son, another a daughter; these bonds are real bonds, and obviously their severing causes grief; great blessings are given and then taken away. In such a state patience is not easy, and so it was explained that, in general, all these matters are for your good: Allah wishes to save you from disobedience, disbelief and rebellion, and there are wisdoms and purposes in them that are not before your eyes now. If they were, you would see them for yourself; but on the Day of Judgment we will lift the veil and show you there just as we are showing here. By God's wisdom and will the veil was lifted for Moses and he was told: this is the real situation. Then the third case, the wall of the orphans in the city. There was also a wall. “As for the wall, it belonged to two orphan boys in the city.” Beneath it their treasure lay buried. Their father had been a righteous man; that is, there were two orphan boys whose father had departed this world, and their father had put away, beneath this wall, something buried for his children. When he was leaving this world, we wished
that it should remain safe, and this arrangement was made for it. Your Lord did not want it to fall into other people's hands, and so it was kept hidden in this way. This had to be done: the wall was leaning and could have fallen at any moment, so we set it upright. How this was done is not explained here; evidently some bricks were taken out and put back, or the wall was rebuilt in whatever way the angels are able to do such things. Then, “your Lord willed that they should reach their maturity and bring out their buried treasure.” Behind this lies Moses's remark: the fact that such trouble was taken shows that something had visibly gone wrong. Just as, when a servant of God sees a broken road somewhere, he may stand there and begin repairing it without any payment, or put right anything else of that kind, so clearly the people of the village were watching too: here were some men repairing a wall, and a good thing, they thought, for the wall had been about to fall. For Moses says: you could at least have asked for wages, or for bread; if they would not feed us, we could have bought food with it. From this one gathers that some visible work did take place. So your Lord willed that they should reach their maturity and themselves dig out what had been buried for them. Here the wording refers directly to Allah; I have noted the original words, “rabbuka,” your Lord. The act of intending is attributed directly to Allah: that is, where the command was given in precise terms and he carried it out exactly, without any judgment of his own, there the attribution is to Allah; and where a summary command was given, namely that such a result is to be secured, the parents are to be saved from the child's rebellion, or he is to be killed, without the underlying purpose being stated, then, when he states that purpose from his own side, he uses the plural, and when he describes his own action he likewise uses the plural form.
What is clear here is that the whole matter was as follows. He willed that they should reach their maturity and themselves dig out what had been buried for them. Moses's objection had been: everywhere in the world there is the good custom that a traveler who arrives as a stranger is fed; we ourselves asked, and they were not willing. The complaint against those misers and wretches was: why did you repair their wall? The answer is that this favor was not done for the wretches who would not even feed a guest; it was done for the two orphans, so that, once the wall fell, their treasure should not pass into the hands of those scoundrels; and it was done because their father had been a righteous man. This is His way: we decided to protect it. Had the wall collapsed, the buried treasure would have ended up in the hands of the people of that settlement; so we repaired the wall. Now it will stay standing, and in due course the boys will take the treasure out, as used to be done in old times. So he explained its wisdom and then said, “rahmatan min rabbika,” this happened as a mercy from your Lord; none of these acts was something I did of my own will or according to my own opinion. Where I adopted a particular strategy for carrying out the matter, or grasped its wisdom with my own understanding, even that was under His command. This is Allah's way, and it tells us something about how Allah deals with affairs. In some places in the Qur'an He sets the command before us and leaves the reflection to us; in other places the commands are repeated, or applied to different things, in such a way that the point stands out sharply. The same happened here, for Allah deals with the angels in the same manner: in some places He has given the command in full, in others He has given the command and left the underlying purpose to their judgment. If you read those passages carefully, the very word used tells you which situation it is, and I have tried to make that clear in each place. If something happened, if the boy was killed, it was a mercy from your Lord; if the wall was rebuilt for that blameworthy settlement, a mercy from your Lord; and if an outward defect appeared in the boat, something that looks like an evil, still a mercy from your Lord. So he said to him, in effect: if your Lord were not so merciful a Lord, He would simply have left matters to run their course. It is said that this happened by the grace of your Lord, “rahmatan min rabbika, wa ma fa'altuhu 'an amri,” and do not ever think that I did this on my own opinion, or that I am some independent power; I did only what I was commanded from above to do in this matter. It was then explained why all this was shown, what the purpose of the telling was, and what training Moses was being given; and, as I have said, in the context of this surah the Messenger of Allah (peace be upon him) and the Muslims were being told that there is nothing to worry about in the circumstances they were then facing, for behind them lies Allah's wisdom, and some of those wisdoms were brought into view. The story ends here; after it comes another question, the one asked about Dhu-l-Qarnayn. The Qur'an took up the question it judged appropriate and answered it in detail; I have written a detailed note at the end of this story about what the story is and what treasure of knowledge and wisdom it holds. These few verses, a few lines, a single story, a single incident, relating just three things, turn out on reflection to be a treasure of immense worth, an extraordinary treasure. It is training in patience and steadfastness, the clearest statement of that principle. The finest quality a person can cultivate in himself in this world is patience and contentment: a steadfast person who keeps going, who is not robbed of his composure, who is not swayed by rewards, who is not thrown off by whatever befalls him, and who, content with Allah's decisions, keeps striving for the right outcome;
and this, in the end, is the greatest thing. Of all that a person can gain in this world, it is the training in patience and contentment that he should gain. So it became clear what the lesson of this whole story is. I have pointed out many of its aspects; let us now look at them in order. First: whatever happens in this world happens by Allah's command and His will. Nothing in this world happens of itself. An automatic system is indeed at work, and that system runs on certain principles; but whether it runs of itself or is intervened in, it does so by God's permission, His will and His intention. Even where it is automatic, it is under God's will; it cannot be otherwise. This should be the foundation of our faith, and from it comes the sweetness of faith. If a person does not see Allah's intention and will in everything, it means that faith is for him merely a philosophical notion that he has picked up and set aside somewhere. When faith connects with real life it produces exactly this: a person's whole life comes to rest on Allah's good pleasure and, in everything, on His will and His wisdom. Our time is up; the various points will be discussed next. The story of Khidr ended at verse 82. What wealth of knowledge and wisdom is hidden in this incident? Ustadh Imam Amin Ahsan Islahi has set it out in the form of points, and I have drawn on him. He writes that the first thing the Qur'an makes clear by relating this story is that whatever happens in this world happens by God's command and by His will and intention. The matter of the universe was created and scattered across vast expanses, and one small region was chosen, which we call the earth, and on it live the people whom Allah created. God's laws are in place and all work of themselves; at times there is intervention in them, and behind the veil the angels too carry out their duties. Nothing happens without Allah's permission, and nothing has happened or could happen in this universe without His will and direction and outside His knowledge. He created all these things; a scheme has been laid down and its components determined, and whoever has been given any authority within it exercises that authority by His permission. Where He has decided to intervene, He has done so under His own wisdom. So in everything Allah and His intention are at work, and nothing is outside His grasp. If we err in using the authority we have been given, and there is no doubt that people do err in exercising it, we are responsible and answerable for it; but had our Lord not given us that authority, we could not have obtained it from anywhere. Everything first receives His permission, and only then does the act take place; without His permission and will not even an atom can move from its place. That was the first point. Second: His intention is always joined to His wisdom. No evil comes into being from Him; whatever He does is pure good, and in His every decision there is wisdom. For this reason no intention of His is ever empty of goodness and wisdom. If He leaves the people of falsehood at large, it is not because He loves falsehood or is helpless and constrained before it; He is nurturing a good even there. It may look as though falsehood has been given free rein: falsehood spreads over the earth, falsehood cuts people's throats, falsehood sows corruption, and it is not yet seized, because the world has been built on the principle of trial and so the opportunity has been given. But giving that opportunity does not mean that Allah has for a time fallen in love with falsehood, or that He is indifferent
to it. It is only that His seizing is deferred; and even within that respite He is nurturing a great good. A very great good becomes manifest, sometimes years later, and in some cases only the Day of Judgment will show what wisdom lay hidden in it. From Him, all of it is good. In the same way, if He lets the people of truth suffer pain and hardship: the prophets faced hardships, and the servants for whom He has declared His love in His own books have passed through hours of trial and testing. Why does this happen? Not because He delights in the sufferings of the truthful or wishes to torment them, but because it is by passing through such tests that a person is trained. Thus the greatest of all groups, the one Allah Himself called a good community, the group that believed in the Messenger of Allah (peace be upon him), whom we call the Companions, of whom it is stated that Allah was pleased with them, who pledged themselves to His Messenger, and of whose overall conduct it was said, “you are the best community,” this best of groups, charged in the final era with the command to go and convey the message to the people of the world, was told by its Lord: “We shall surely test you with something of fear and hunger.” In keeping with the station you occupy, a series of tests and trials is about to begin; those tests will be from Me, and I shall try you in such-and-such ways. Hunger is mentioned, fear is mentioned, the loss of worldly goods and wealth is mentioned, to see whether, having been given this great station, you live up to it. They would give expression to their faith and to their relationship with their Lord, and it could not be that they would not be put to the test: a good community, to whom rank has been given alongside the prophets, the martyrs and the righteous. Because the path to a great good was to be opened, the station they attained in this world is beyond estimation: they became a model for us, so that we now look to them as the pattern of religious guidance; the Messenger of Allah (peace be upon him) taught them, and the rule was given that if something is to be judged right or wrong, one should look to “ma ana 'alayhi wa-ashabi,” what I and my Companions are upon, what their way was; and Allah also gave them a vast dominion in the world, so that the great empires of Rome and Persia were blown away before them like chaff. The trials and tribulations met with in this world are the very trials through which a person is trained and made ready to become part of Allah's scheme in the eternal life of Paradise. Third: the knowledge given to us is only a small portion, “and you have been given of knowledge only a little.” This knowledge that we prize, whose results astonish people, so that even our poet wondered aloud what the world would yet become, this knowledge that makes man powerful, when set against the knowledge of Allah, and even against the knowledge of Allah's angels, amounts to no more than a particle; that is why it is described as it is, and why in one place it is said that you have been given of knowledge only a little. It is a great blessing precisely on condition that man does not mistake it for complete knowledge. And for his training Allah has said in the Qur'an itself: I have sent down this Qur'an and explained the religion in it; these are its parts; nothing in it is concealed, there is no riddle and no secret; everything has been stated plainly, and what is stated is decisive; there is no ambiguity in it; He carries His guidance to the final degree. But one part of that guidance is the part in which you are also given some acquaintance with knowledge beyond the limits of your own
knowledge. That is, you are told how the Lord of this universe works; you are told how the workshop of this world operates on certain occasions; you are told that the world to come, in which you will be raised, will have new norms and new laws, and what blessings there will be in it. There is no ambiguity in the Book, but these are realities of a kind you cannot grasp in this world. It is necessary to say what Paradise will be and how the people of Paradise will come into being; and in saying it, in order to convey as much knowledge as possible, the method of similitudes has been adopted, that is, likeness to things that exist in your own world. In this a secret about our own knowing is disclosed: when something presents itself to us in this world and our senses take hold of it, whether the outward senses or the inner ones, the understanding we have been given is such a wealth that if a principle emerges from it we reach the particulars, and if the particulars are before us we reach the principle; results begin to appear. That is the great blessing of our knowledge, the great wealth bestowed on us. But what is this great wealth and great blessing beside that which is limitless? You can imagine what an extraordinary capacity has been given to you; therefore it has been compared with your world, and you have been told certain realities that lie beyond the grasp of your senses and of your intellect. And it is also made clear, and impressed again, that the true inheritors of knowledge, those firmly rooted in it, never stand at the door of these similitudes saying, now I shall discover their reality and get inside it; they never make them the target of their conjectures. Whatever knowledge is given them they accept, knowing that they will never stand before the reality itself. And then people are warned about those who do go after that reality, the reality of things beyond the limit of their knowledge: they descend into that swamp out of greed, or out of a wish to stir up a trial; from this arise mischief and arbitrary interpretation, when natural curiosity oversteps its limit with the thought that we must understand even these things, for without understanding them how can we be satisfied? He said that those who take to these things should look into their own hearts, for the thing is not evil in itself, yet it does not occur without a crookedness of heart. So the limits of your knowledge have been stated, and Ustadh Imam Islahi explains it here: however much man may sharpen his senses, they remain limited, because the reach of his knowledge is limited; therefore in this world he cannot know the wisdom of every intention of God. Things are known only up to a point. That is why the questions raised by these three incidents were not themselves made the subject; instead, lifting the veil a little, we were shown what is being done in this world: behind the evil you see outwardly, what good lies hidden, what wisdom lies hidden? You are enabled to glimpse the good behind it, and then it is said: this is how all of it is being done. Man cannot know all the secrets of God's intentions in this world: why Allah willed this, why He willed that, why He made the world as it is, why He set these principles in it, why its laws are as they are, why this evil arose in it and what good is in it, why we are sometimes overcome, why such circumstances arise, why our desires and longings remain unfulfilled. Thousands of questions. But God does not disclose His intentions to His servants; all the secrets of His intentions will be opened only in the Hereafter. Do your duty with respect to all His decisions: if knowledge is given to some extent, be patient and grateful for it; if it is kept within a limit, be patient and grateful in that too. That is the mark of the perfection of those firmly rooted in knowledge. As for those who go hunting after these things, thinking themselves scholars
while their knowledge is only partial, what happens is that they overstep the limit of their knowledge; they go searching for things that lie beyond human comprehension, and the same fate then overtakes them. He said that this does not allow them to remain patient and grateful, and that after wandering into those realms they imagine they have discovered the secrets of knowledge, whereas those firmly rooted in knowledge are not the ones who talk like that; the complete man is the one who, recognizing the limits of his knowledge, stands at the correct boundary and does not step beyond it: these are our limits, and we will go this far and no further. Analyze the matter clearly and understand what the limits of human knowledge actually are. The limits at issue are not those between me and you; perhaps I know less and you know more. By the limits of knowledge is meant the limits of man as man; those are what must be determined. One such limit is that not a single step can be taken toward knowing the reality of existence itself. Once things have come into existence, they have come into existence, and then we can acquire knowledge of them up to a point; beyond that we cannot go. The world has since turned to this question, and the discussion of knowledge is now widespread; it is a useful discussion and has yielded very good results, for man has come to learn the limits of knowledge, and it is as if with it the work of charting those limits began. What I submit is that as soon as you define those limits, only then will you find real firmness in knowledge. The definition is this: the foundation of all our knowledge is, in fact, the observation and experience of our senses. Every argument must find its basis here; the moment a leap is made beyond it, you have passed outside your limits. The knowledge we gain through our senses is our foundation. Then Allah has given us intellect, so that the knowledge gained through the senses can be fixed within the structure of our consciousness, distinguished, named and classified; all these capacities exist. After what the senses supply, our intellect does two things. Either it comes to know the things that are necessary within the structure of our consciousness: for example, we know that an act does not occur without an agent; we may only have seen and observed the act, but the structure of our consciousness makes it rationally necessary that there must also be an agent, and if we have seen an effect, that there must be a cause. It is, so to speak, a rational necessity; reason has made it binding. And what is the second thing? Reason generates possibilities. This is what the masters of Sufism have expressed in their own terms, and what Plato called the universal Ideas; it is in fact an expression of rational possibilities. In the same way, modern science states the theory of evolution; that too is an expression of rational possibilities. Rational possibilities are a subject for inquiry; they will or will not be established by observation, and the process is the same; it cannot be separated from it. This is the final limit of our knowledge, and beyond it there is no other source of knowledge. The work that lies beyond is really the work of another capacity man has been given: he extracts things from facts and on that basis builds a world of imagination; and the world of imagination is as beautiful to look at as any matchless novel one picks up, but there is no reality in it, and where there is no reality, that is the limit of knowledge. So he explained that the right attitude for man in this world is to do his duty with patience and forbearance, thanking God for all His decisions, and to remain content that the sweetness now hidden will become manifest tomorrow, God willing; that is the prayer of a life lived with faith. Why, then, was the incident related? So that the Messenger of Allah and the Companions present at that time should understand the knowledge being given about Allah's decisions, Allah's intentions, Allah's acts and Allah's way, and so that from it
they might draw the lesson. If they understand it, there is no danger for them; but if they overstep this limit, Allah's verdict is the same as before: “fi qulubihim zaygh” refers to those in whose hearts there is crookedness. Such people then make this pursuit their whole world, their pride, their mark of distinction, and say that knowledge is like this and things are to be understood like that, when there is no reality in it at all. It is exactly as if a man had made himself a wooden leg and imagined he was standing on it, when in fact he is standing on nothing.
The text explores the contrast between the unchanging aspects of nature and the achievements of human consciousness, particularly highlighting the impact of philosophers and scientists throughout history. It emphasizes the significance of human effort in shaping societies and creating advancements, using the Turkish Revolution of 1924 under Atatürk as a prime example of a successful, lasting societal transformation. The writing also touches on the tension between religious fervor and secular progress within Muslim societies, examining how religious leaders sometimes prioritized personal gain over societal advancement. Finally, the text advocates for a reevaluation of intellectual heritage and urges a move toward modernization and democracy, inspired by Atatürk’s legacy.
Study Guide: Exploring Human Consciousness, Revolutions, and Intellectual Heritage
Quiz
Answer each question in 2-3 sentences.
According to the text, what is the difference between nature’s processes and human-made divisions of time?
What does the text suggest is the primary source of the wonders and masterpieces in the universe beyond natural landscapes?
Who does the text identify as truly deserving of gratitude for their contributions to humanity’s progress?
What is the significance of the UN and the UN Human Rights Charter, as mentioned in the text?
What is meant by the phrase “fake paradise” as opposed to “earthly paradise” in this context?
What was the most impactful event of 1979, according to the text, and why?
How did the author view Kamal Ataturk’s 1924 revolution in Turkey?
According to the text, what was the British government’s motive for suppressing Communist movements in India?
How did the “architects” of the Pakistani nation view Ataturk’s revolution, according to the text?
What does the author suggest regarding the future of the Turkish Revolution?
Quiz Answer Key
The text states that nature’s processes like the rising and setting of the sun are eternal and beyond human control, while human-made divisions of time, like months and years, are a product of human consciousness and not relevant to nature itself.
The text suggests that the primary source of wonders and masterpieces in the universe, beyond natural landscapes, is the pure human struggle and the efforts of human consciousness, not the work of gods or transcendent forces.
The text identifies great philosophers and scientists as truly deserving of gratitude because they have made miraculous contributions to humanity that even those claiming divine power could not achieve.
The UN and the UN Human Rights Charter are presented as concrete examples of human progress, showing that humanity has not only advanced through innovative inventions but also intellectually and consciously, establishing a platform for global human rights.
“Fake paradise” likely refers to the promise of a blissful afterlife, while “earthly paradise” signifies that true happiness and fulfillment are found in the real-world, through human achievement and struggles.
According to the text, the most impactful event of 1979 was the rise of madujis, which highlighted the importance of the Indian Ocean; 1979 was also the year in which the writer became most impressed by Kemal Ataturk’s revolution of 1924.
The author views Ataturk’s 1924 revolution in Turkey as an amazing one that cut the roots of the caliphate system based on personal dictatorship and moved its direction towards democracy and an elected parliament.
The British government suppressed Communist movements in India in order to counter their influence and prevent their spread, using the idea of “special combinations” to entangle communist ideas with religious zealotry and slogans.
The “architects” of the Pakistani nation, despite using religion for personal and political gain, still admired Ataturk’s revolution and recognized its intellectual greatness and the blessings of the revolution, even in the midst of the Caliphate movement.
The author suggests that even after the Turkish Revolution has suffered many conservative attacks, it will eventually rise again with a new climate and shine as a role model for other Muslim nations, as envisioned by Iqbal.
Essay Questions
Discuss the author’s perspective on the relationship between nature, human consciousness, and the creation of “masterpieces” in the universe.
Analyze the significance of Kamal Ataturk’s revolution in 1924, according to the author, and its implications for Muslim nations.
Explore the concept of “earthly paradise” presented in the text, and how it differs from traditional notions of heaven or spiritual salvation.
Evaluate the author’s critique of religious institutions and their role in hindering or promoting human progress.
Considering the text’s perspective, how might one interpret the call for a reevaluation of intellectual heritage, and what are its implications for national identity?
Glossary of Key Terms
Human Consciousness: The state of being aware of and responsive to one’s surroundings; the collective awareness and understanding of humanity.
Eternity: Infinite or unending time; a state that is timeless and without beginning or end.
Caliphate: The rule or reign of a caliph; the political-religious leadership of a Muslim state.
Kayapult: A term used in the text for “revolution” or “upheaval” (apparently a transliteration of the Urdu kaya palat, “complete transformation”), referring to a fundamental shift or change in society or thinking.
Intellectual Heritage: The cumulative body of knowledge, ideas, and traditions passed down through generations within a specific group or society.
Rabbani: A term used in the text meaning divine, sacred, or “of the Lord” (from the Arabic rabb, “Lord”).
Jawar Bhata: An Urdu term for the tides (ebb and flow), used in the text for a significant or impactful occurrence.
Madujis: A term used in the text with no explicit definition but seems to refer to specific notable events or people in relation to the Indian Ocean in the year 1979.
Tehreek Caliphate: A movement focused on the revival or establishment of the caliphate.
Moderate Revolution: Used in reference to the Turkish Revolution; a revolution promoting moderate views or a middle path of social reform.
Atatürk’s Revolution: A Model for Modernity
Briefing Document: Analysis of “Pasted Text”
Date: October 26, 2023
Subject: Analysis of a philosophical and historical reflection on nature, human consciousness, and societal progress with particular emphasis on the Ataturk revolution.
Executive Summary:
This text presents a multi-faceted reflection on the nature of reality, human achievement, and the importance of intellectual and societal progress. It contrasts the immutable laws of nature with the transformative power of human consciousness and effort. The author celebrates human achievements, particularly in science and philosophy, while critiquing the reliance on outdated religious systems. The text culminates in a strong endorsement of Mustafa Kemal Atatürk’s revolution in Turkey as a model for other Muslim nations, emphasizing secularism, modernization, and patriotic identity over outdated religious concepts. The text is a passionate plea for intellectual re-evaluation and progress.
Key Themes and Ideas:
Nature vs. Human Consciousness: The text establishes a stark contrast between the indifferent, cyclical nature of the universe and the dynamism of human consciousness.
Nature’s Passivity: “Nature or nature has nothing to do with when which day, month or year comes and when it passes…”. The text emphasizes that nature operates without purpose or concern for human constructs like calendars or anniversaries. Events like birth, death, and revolutions are just part of its ongoing cycle.
Human Agency: Human achievements are presented as a direct result of conscious effort and struggle: “The wonders and masterpieces that have been created in this universe through pure human struggle…”. The text highlights human contributions in science, philosophy, and societal advancement.
Critique of Religious Mysticism and “Fake Paradise”: The author implicitly critiques religious beliefs that focus on a heavenly afterlife, arguing they distract from the pursuit of earthly improvement and progress.
“Fake Paradise”: The text implicitly contrasts a heaven-focused worldview with the possibility of achieving a “real heaven on earth” through human effort and good deeds. It suggests that a focus on mystical beliefs leads to a passive acceptance of difficulties, rather than striving for real improvement.
Value of Philosophers and Scientists: “The real and truly deserving of our gratitude are those great philosophers and scientists of the world who have done miracles in the universe…”. This directly contrasts the text’s view with any idea of divine or transcendental power, praising instead tangible human achievements.
Emphasis on Intellectual and Moral Struggle: The author highlights the importance of intellectual and moral struggles for human advancement.
“Humanitarian Deeds and Struggles”: The author emphasizes that the world can be improved through positive human action. This contrasts with accepting difficult circumstances as a predetermined fate.
Quote from “Sargasht Adam”: The lines “Mila mood swings, I did not say anything under the sky. I removed stone idols from Kaaba and sometimes I made idols into Haram” imply a constant reevaluation of ideas and a challenging of outdated beliefs. The text is advocating for action to achieve change, even if it means upending tradition.
The Ataturk Revolution as a Model: The author praises the secular and modernizing revolution led by Mustafa Kemal Atatürk in Turkey.
Secularism and Democracy: “the amazing revolution of the world’s greatest man Kamal Ataturk in 1924, which has forever cut the root of the caliphate system… and moved its direction to democracy.” This is presented as a definitive break from outdated theocratic systems towards a more progressive governance structure.
Patriotic Identity: “…your real nationality is not an outdated, conceptual and spiritual nationality but a patriotic nationality like other civilized nations.” This emphasizes a civic identity rooted in national belonging over religious identification.
A Model for Muslim Nations: The author suggests that the Ataturk revolution is a paradigm for other Muslim-majority nations seeking modernization and self-determination. The text encourages readers to learn from the Turkish example: “We too, like the Turks, will have to reevaluate our rational and conscious heritage one day.”
Re-evaluation of Intellectual Inheritance: The author calls for a critical assessment of established beliefs and traditions.
Call to Readers: The author urges “friends who are interested in the knowledge and research of Darwish” to consider what “intellectual and conscious heritage of ours” needs reevaluation, connecting the ideas to a specific intellectual tradition.
Iqbal’s Influence: The text repeatedly refers to Iqbal’s desire to re-evaluate the intellectual and religious heritage of his time: “Whose Iqbal wanted to re-evaluate like the Turks??” and “The foundation of Reconstruction Of Religious Thought in Islam is the modern kayapult of Ata Turk and Turks”.
Critique of Religious Manipulation for Political Gain: The text is critical of figures who use religion for their own political ends, even if they cannot deny the value of the Ataturk revolution: “What an interesting and amazing story it is for the Pakistani nation that each of its two architects kept using religion as much as political and social for their personal or national interests… but… the voices of their conscience did not let them deny the intellectual greatness of Ataturk…”.
Conclusion:
The text is a powerful and impassioned call for human progress driven by reason, conscious effort, and a rejection of outdated religious dogmas. It promotes the Ataturk revolution as a historical turning point and a model for achieving a more just, modern, and prosperous society. The author encourages self-reflection, critical reevaluation of established beliefs, and active participation in shaping a better future. The text makes a case for a “real heaven on earth” achievable through hard work and dedication to ideals of secularism, democracy, and patriotism.
Human Ingenuity and the Turkish Revolution
Frequently Asked Questions
What is the text’s perspective on the role of nature versus human effort in shaping our world?
The text emphasizes a stark contrast between nature’s indifference and the significance of human consciousness and struggle. It argues that natural phenomena like the changing of seasons are simply consistent, while human constructs such as calendars, festivals, and even political revolutions are the products of deliberate human effort. The text credits human ingenuity, specifically scientific and philosophical achievements, as the main force behind progress, while nature provides a neutral background.
According to the text, what are some examples of human achievements that deserve gratitude?
The text expresses gratitude for the contributions of great philosophers, scientists, thinkers, and political leaders who have shaped human consciousness. It specifically praises advancements from Greek philosophy to modern scientific achievements, as well as the creation of human rights frameworks like the UN Charter. These achievements, it argues, are responsible for improving living conditions and intellectual understanding. The text values accomplishments that lead to a better earthly experience, rather than solely focusing on otherworldly rewards.
How does the text view the concept of “heaven” and its relation to human action?
The text contrasts a “dream-like” heaven after death with the potential for an “earthly paradise” created through human actions. It suggests that focusing on real-world achievements and humanitarian deeds provides a more meaningful and tangible form of satisfaction. The text implicitly criticizes the idea of relying solely on the promise of an afterlife and encourages readers to focus on improving our current existence.
What are the main ideas conveyed by the lines from “Sargasht Adam” quoted in the text?
The lines from “Sargasht Adam” suggest a theme of intellectual and spiritual independence and iconoclasm. The speaker claims to have challenged established norms, removing idols from holy places and advocating for new perspectives. The lines also reflect a journey of intellectual exploration, from Greek thought to various Eastern cultures. The speaker emphasizes their commitment to seeking truth and wisdom, suggesting that true progress comes from challenging and reshaping societal norms. The last lines reference a commitment to the honor of “this earth.”
What significance does the text place on the Turkish Revolution of 1924 led by Kemal Ataturk?
The text regards the Turkish Revolution of 1924 as an incredibly important event that fundamentally changed the Islamic world by abolishing the Caliphate system and establishing a democratic, secular state with an elected parliament. The text views Ataturk’s revolution as a model for other Muslim nations, emphasizing its modern, progressive nature and the shift from spiritual nationalism to patriotic nationalism. It celebrates its impact on the national identity of Turks and its shift from an old, autocratic structure to a new, modern system.
Why does the text criticize the Caliphate system?
The text portrays the Caliphate system as an outdated and dictatorial form of personal rule that is detrimental to Muslim societies. It contrasts it with the democratic ideals of the Turkish Revolution, highlighting the latter’s emphasis on elected parliaments and patriotic nationalism. The text criticizes any system that is rooted in personal dictatorship rather than democracy, suggesting the Caliphate had failed its people due to its outdated nature.
How does the text view the role of religion in politics?
The text portrays the architects of Pakistan using religion for political purposes to increase their power, acknowledging their success, but also highlights that their consciences recognized Ataturk’s intellectual greatness as well as the blessings of the Turkish Revolution. It critiques the use of religious fervor for political ends, viewing it as a means to personal or national gain rather than a genuine attempt to improve the condition of society. The text advocates for a separation of religion and politics.
What is the core message of the text regarding the intellectual and political legacy that should be reevaluated?
The text advocates for a reevaluation of the intellectual and rational heritage within Muslim societies, drawing inspiration from the Turkish Revolution, which prioritized progress and democracy over outdated religious systems. The text implies that Muslim societies should critically examine their inherited traditions and political structures, encouraging a move toward modernity, rationality, and democratic principles. It calls on its readers to be intellectually honest and to recognize the legacy of progress in the world, like the Turkish Revolution, while critiquing the legacy of outmoded authoritarian theocracies.
Atatürk’s Revolution and the Future of Islam
Timeline of Main Events
Ancient Times (Unspecified): The text reflects on the nature of time and the universe, contrasting nature’s unchanging rhythms with human constructs like calendars and festivals. It posits that human consciousness and struggle are the sources of advancements and meaning.
Ancient Greece (Unspecified): Greek philosophers are mentioned as foundational figures in the progression of human thought.
1924: Mustafa Kemal Atatürk leads a revolution in Turkey, abolishing the Caliphate and establishing a secular, democratic state. This is presented as a pivotal event with lasting significance.
1947: The text references the destruction in India in that year.
1979: The year 1979 is noted as significant for the author’s personal experiences, witnessing the importance of various events in relation to the Indian Ocean.
Time of the Caliphate Movement in India: The Caliphate Movement, led by the Ali brothers, is described as a time of religious fervor and political maneuvering in India. The author notes that despite the religious fervor, some leaders admired Ataturk’s revolution.
Present (Time of writing): The author reflects on the legacy of Atatürk’s revolution in Turkey, noting ongoing attempts to undermine it by conservative elements, while predicting a resurgence of the revolution’s principles. The author also calls for a reevaluation of intellectual heritage in Islamic nations similar to what Turkey undertook.
Future (Implied): The author anticipates that Turkey’s secular, democratic revolution will serve as a model for other Muslim nations in the future, which is also presented as Iqbal’s wish.
Cast of Characters
Nature: Not a person, but a force representing the unchanging universe and the source of physical phenomena, contrasted with human-made concepts.
Great Philosophers and Scientists: A general group encompassing thinkers throughout history, particularly from ancient Greece, who advanced human knowledge and understanding.
Western Scientists, Thinkers, and Political Leaders: A broad group credited with transforming humanity through innovations and with establishing institutions such as the UN and the human rights framework; the text explicitly excludes Ghalib, who is mentioned later, from this group.
Ghalib: Mentioned as a poet and used as a contrasting example, someone whose “hobbies” are inconsequential compared to great leaders and thinkers. His poetry is referenced with a specific poem to highlight the contrast between worldly and heavenly concerns.
“Sargasht Adam”: The title of the poem from which excerpts are quoted. The poetry explores themes of rebellion, questioning established religions, and spreading wisdom, with imagery of travel and struggle. Its speaker appears to be a symbol of humanistic thought.
Kamal Ataturk: The central figure of the text. The leader of the Turkish Revolution in 1924. He is portrayed as a great visionary who abolished the Caliphate, established a secular state, and is presented as a positive model for other Muslim nations.
Iqbal: A figure who admired the Turkish Revolution and desired for a similar reevaluation of intellectual heritage in other Muslim nations. The text notes that Iqbal’s wish has yet to be fulfilled. He wrote the “Hindi Anthem.”
The Ali Brothers: Leaders of the Caliphate Movement in India. They are described as experiencing emotional distress due to the abolition of the caliphate in Turkey, though the author stresses that they did not express sympathy for the system.
Conservative Spokesman of the Turks: A collective group representing those attempting to undermine Atatürk’s revolution in contemporary Turkey. They are described as opposing the secular and democratic nature of the revolution.
“Two Architects” of Pakistan: Implied to be political leaders of Pakistan. The text suggests that they used religion for their personal and political gain but that they secretly admired Ataturk and the Turkish Revolution.
Darwish: The author himself. A person interested in human history and philosophy, and concerned about the intellectual heritage of Muslim nations.
Atatürk’s Legacy and the Modernization of Turkey
Human Consciousness: Shaping Our World
Human consciousness is presented as a powerful force that has shaped the world, responsible for the creation of culture, and for the advancements of human civilization [1]. The sources contrast the works of human consciousness with the natural world, and suggest that nature is indifferent to human constructs of time and events [1, 2].
Here are some key aspects of human consciousness discussed in the sources:
Creation of Culture: Human consciousness is responsible for the creation of systems like months, years, days, festivals, and anniversaries [1]. These are seen as human efforts to create structure and meaning [1].
Human Struggle and Progress: The sources emphasize the “greatness of human conscious efforts and human struggle” and the wonders that have been created through it [1]. Without human endeavors, life would be difficult and desolate [1].
Intellectual and Scientific Achievements: The text highlights the importance of philosophers, scientists, thinkers, and political leaders who have advanced human consciousness and have led to significant changes in human life [3].
Reversal of Humanity’s Shape: Through innovative inventions, intellectual and conscious platforms, and human rights charters, humanity’s shape has been reversed and improved [3].
A Source of Pride: The accomplishments of human consciousness are presented as something humanity can be proud of [1]. The source contrasts these achievements with the desolate existence that would prevail without them [1].
Influence on Religion: The text discusses how some leaders have used religion for political and social purposes, but also acknowledges that their conscience led them to respect the intellectual achievements of others, such as Ataturk [4].
Reevaluation of Intellectual Heritage: The need for reevaluating the intellectual and conscious heritage is highlighted [4]. This is tied to the idea of progress and the need to question established norms and ideas [4].
A Distinction from Nature: The sources emphasize a clear distinction between nature and human consciousness. Nature is portrayed as a force that is indifferent to the passage of time and the events in human history [2]. In contrast, human consciousness is a driving force of change and progress [1].
Earthly Paradise: The idea of creating a “real heaven on earth” through humanitarian efforts and struggles is presented as a goal that surpasses seeking a dreamlike heaven [5].
Nature’s Indifference to Humanity
Natural processes are depicted in the sources as separate from and indifferent to human constructs and events [1, 2]. Here’s a breakdown of how the sources discuss natural processes:
Nature’s Timelessness: Nature is presented as being unconcerned with the passage of time, including days, months, and years, and with human events like births, deaths, and revolutions [1]. The sources say that nature has “nothing to do with when which day, month or year comes and when it passes” [1]. The rotation of days and the changing of days and nights are described as “masterpieces of nature, which have been the same since eternity” [1].
Indifference to Human Events: Nature is depicted as being unaffected by human activities and structures such as festivals and anniversaries [2]. The sources state that “it doesn’t matter to nature… if none of these happens” [2]. This suggests that natural processes operate independently of human concerns and calendars.
Celestial Cycles: The rising and setting of the sun, and the phases of the moon, are given as examples of natural phenomena that are constant and independent of human perception. The moon is described as appearing sometimes small and sometimes full, but in fact it is neither, just as the sun neither rises nor sets [1]. These celestial cycles are presented as “masterpieces of nature” that occur without human influence [1].
Contrast with Human Consciousness: The sources present a distinct contrast between natural processes and the creations of human consciousness [2]. While nature operates according to its own timeless rhythms, human consciousness is responsible for creating culture, structure, and meaning. The sources also suggest that nature’s beauty exists independently from human structures and that only human conscious efforts have the power to bring about change [2].
In summary, the sources portray natural processes as consistent, timeless, and unaffected by human actions, existing in contrast to the dynamic and transformative power of human consciousness [1, 2].
Human Achievement: Conscious Effort and Progress
Human achievements are portrayed in the sources as the result of conscious effort and struggle, and they are contrasted with the natural world, which is presented as indifferent to human activity [1]. The sources suggest that human accomplishments are a source of pride and have fundamentally altered the course of human existence [1, 2].
Here are some key areas of human achievement discussed in the sources:
Cultural Constructs: The creation of systems like months, years, days, festivals, and anniversaries are described as “a masterpiece of the efforts of human consciousness” [1]. These constructs are seen as ways that humans have created structure and meaning in the world, in contrast to the timelessness of nature [1, 3].
Scientific and Intellectual Progress: The sources emphasize the contributions of philosophers, scientists, thinkers, and political leaders [2]. These individuals are credited with doing “miracles in the universe” and leading humanity to its current heights through innovative inventions and intellectual advancements [2].
Political and Social Advancements: The establishment of the United Nations and the UN Human Rights Charter are highlighted as significant political achievements that have had a positive impact on humanity [2]. The sources suggest these accomplishments have provided platforms for intellectual and conscious growth and have reversed the “shape of humanity” [2].
Overcoming Desolation: The sources suggest that without the achievements of human consciousness, life would be “difficult, desolate” [1]. The implication is that human struggle and achievement are necessary to overcome a bleak existence and to find satisfaction [1].
Creating an Earthly Paradise: The text speaks of creating a “real heaven on earth” through humanitarian deeds and struggles [4]. This suggests that human effort can lead to tangible improvements in life, offering a different perspective than relying on the promise of a heavenly afterlife [4].
Reevaluation of Heritage: The sources advocate for a reevaluation of intellectual and conscious heritage, suggesting that progress requires questioning and updating established norms and ideas [5]. This is linked to the idea of constant improvement and a forward-looking approach [5].
Examples of Transformative Leadership: The sources present Mustafa Kemal Ataturk as an example of a transformative leader whose revolution in Turkey led to modernization and a shift towards democracy [6]. Ataturk’s revolution is portrayed as a model for other Muslim nations [5, 7].
In summary, the sources present human achievements as a testament to the power of consciousness and a driving force for progress. These accomplishments are not merely material but include intellectual, cultural, political, and social progress, all of which contribute to a richer, more meaningful existence [1, 2, 4]. The sources also underscore the importance of continually reevaluating and building upon the achievements of the past to further advance human civilization [5].
Atatürk’s Revolution: A Model for Muslim Nations
Ataturk’s revolution is presented in the sources as a significant and transformative event that serves as a model for other Muslim nations [1, 2]. The revolution is described as having modernized Turkey and shifted its direction towards democracy [1]. Here’s a breakdown of key aspects of Ataturk’s revolution, as presented in the sources:
Overthrowing the Caliphate System: The revolution is credited with cutting the root of the caliphate system, which was based on personal dictatorship, from the world of Islam [1]. This move is portrayed as a crucial step towards a more democratic and modern society [1].
Establishment of Democracy: Ataturk’s revolution shifted Turkey’s governance towards an elected parliament, which is seen as a major advancement for the nation and a model for other Muslim nations [1]. This change is linked to a broader movement towards modernity and progress.
Promoting Patriotic Nationality: The revolution promoted a patriotic nationality, as opposed to an outdated, conceptual, and spiritual nationality [1]. This suggests a shift towards a more secular and civic-based identity, aligning with the norms of other civilized nations [1].
Intellectual Greatness: Even those who used religion for political and social purposes were unable to deny the intellectual greatness of Ataturk and the blessings of the Turkish Revolution [2].
A Role Model: The revolution is presented as a role model for other Muslim nations, with the sources suggesting that these nations should re-evaluate their intellectual heritage like the Turks [2, 3].
Enduring Impact: The revolution is described as having stood firm on its foundations for a century, despite attempts by conservative elements within Turkey to undermine it [4]. The sources predict that the moderate revolution will continue to rise with new lights and serve as a role model for other Muslim nations, as Iqbal wished [3, 4].
Contrast with Traditional Systems: The revolution is implicitly contrasted with the “rotten” Caliphate system, which the source notes even staunch supporters of that system could not defend [2].
Significance for Iqbal: The sources suggest that the foundation of Iqbal’s book The Reconstruction of Religious Thought in Islam rests on the modern revolution of Ataturk and the Turks. Iqbal’s desire to reevaluate intellectual heritage, as the Turks did, is also emphasized [2].
Relevance for Pakistani Nation: The sources note that both of the architects of Pakistan, despite using religion for their political and social aims, could not deny the intellectual greatness of Ataturk [2]. The sources suggest that the Pakistani nation has an interesting and amazing story in the context of Ataturk’s revolution, given the actions and ideas of its founders [2].
In summary, Ataturk’s revolution is presented as a pivotal moment in the history of Turkey, marked by the overthrow of the Caliphate, the establishment of a democratic system, and the promotion of a patriotic national identity. The revolution’s legacy is portrayed as an inspiration and a model for other Muslim nations, with its enduring impact and transformative nature still relevant today [2, 3]. The source emphasizes its importance as a key example of human achievement and progress [1].
Atatürk’s Revolution and Modern Political Ideologies
Political ideologies are discussed in the sources primarily through the lens of nationalism, democracy, and the rejection of personal dictatorship, particularly in the context of Ataturk’s revolution. The sources also touch on the use of religion for political purposes and the tension between traditional and modern systems of governance.
Here’s a breakdown of the political ideologies and concepts discussed in the sources:
Patriotic Nationalism: The sources promote the idea of patriotic nationality as a modern and progressive concept, contrasting it with outdated notions of spiritual or religious nationality [1, 2]. The Turkish revolution is presented as an example of a movement that successfully shifted its focus to a patriotic identity, with the idea that Turks should have Turkish nationality and Arabs should have Arab nationality [2]. This is framed as aligning with other civilized nations and as a break from older, more religiously-defined systems of identity [1, 2]. The source suggests that even Hindi (i.e., Indian) Muslims should embrace a Hindi patriotic nationality [2].
Democracy and the Rejection of Dictatorship: The sources strongly support democracy and the idea of elected parliaments, portraying them as significant advancements in governance [1]. Ataturk’s revolution is specifically praised for cutting the roots of the caliphate system, which is described as a form of personal dictatorship [1]. This demonstrates a preference for systems of government that involve the representation of the people and a rejection of autocratic rule [1].
Secularism: The emphasis on patriotic nationality and the rejection of the caliphate system indicate a leaning towards secularism, where political identity is separated from religious or spiritual identity [1, 2]. The sources suggest that modern, civilized nations have moved away from religiously-defined identities towards more civic-based ones [2].
Use of Religion for Political Purposes: The sources acknowledge that some leaders use religion for political and social purposes [3]. However, the sources also point out that even these leaders often recognize the intellectual greatness of those who promote more modern and secular ideas, like Ataturk [3]. The use of religion to manipulate political discourse is shown as a tool to gain support and advance personal or national interests [3].
Clash of Traditional and Modern Systems: The sources discuss a clear contrast between traditional, outdated systems of governance, such as the caliphate, and modern systems, such as democratic republics [1]. The caliphate is referred to as a “rotten system” [3]. The sources favor modern systems, highlighting the importance of progress, innovation, and intellectual advancement [1, 3].
The British Government’s Role: The source notes that the British government used “jihadi slogans” to counter communist influence, which is mentioned as an example of political maneuvering for national interests [2].
Iqbal’s Perspective: The source presents the views of Iqbal, who is shown as supporting the reevaluation of intellectual heritage like the Turks and admiring the modernizing influence of Ataturk’s revolution [3].
In summary, the sources advocate for a move away from religiously-based political systems and towards more secular, democratic, and patriotic forms of government. The sources present Ataturk’s revolution as a key example of successful modernization and a model for other nations to follow. The role of political leaders using religion is also addressed, while emphasizing the importance of intellectual and conscious advancements over outdated systems of governance.
Nature vs. Human Creation
The sources present a distinct contrast between natural phenomena and human constructs, emphasizing that nature operates independently of human activity while human creations are the result of conscious effort and struggle [1, 2].
Here’s how the sources differentiate between the two:
Nature’s Timelessness vs. Human-Made Time: The text describes nature as being constant and unaffected by human concepts of time [1]. The rising and setting of the sun and the phases of the moon are cited as examples of natural phenomena that occur without regard for human calendars [1]. In contrast, the division of time into months, years, days, and the establishment of festivals and anniversaries are described as “a masterpiece of the efforts of human consciousness” [2]. This highlights that these constructs are human inventions to create structure and meaning [2].
Nature’s Indifference vs. Human Consciousness: The sources suggest that nature is indifferent to human activities, with the text stating that it does not matter to nature “if none of these happens, goes or comes” [2]. This implies that nature functions according to its own laws, regardless of human existence or constructs. On the other hand, the sources portray human constructs as deliberate and purposeful, resulting from the application of “human conscious efforts” [2].
Natural Landscapes vs. Human Infrastructure: The sources contrast the “beautiful landscapes or deserts” of nature with human infrastructure [2]. It is suggested that apart from natural beauty, there is little that humanity can be proud of without human efforts [2]. This further emphasizes that human achievements are distinct from the natural world and are a result of deliberate effort.
Nature’s Desolation vs. Human Achievement: The text suggests that without human constructs, life would be “difficult, desolate,” implying that human achievement is essential to improve life beyond the natural state [2]. This is juxtaposed with the idea that nature does not offer inherent meaning or satisfaction, so humans must actively create these.
Human Effort as a Source of Pride: The sources suggest that the “wonders and masterpieces” created through human struggle, as well as intellectual and conscious effort, are things that humanity can be proud of [2, 3]. This is implicitly contrasted with nature, which is presented as lacking the intention and agency that humans bring to the world and that give it purpose.
Real Heaven on Earth: The sources suggest that humans can create a “real heaven on earth” through their efforts, contrasting this with a heavenly afterlife that is detached from the physical world [4]. This indicates that human actions and constructs are capable of generating meaning, satisfaction and paradise, rather than relying on nature or a divine plan.
In summary, the sources draw a clear distinction between the natural world and human-made constructs. Nature is depicted as timeless, indifferent, and constant, while human constructs are portrayed as conscious, deliberate, and transformative. The text suggests that human achievements are what make life meaningful, providing purpose and direction in contrast to the indifference of the natural world.
1979: A Year of Reflection
The year 1979 is significant in the text as a point of reflection for the author, marking a time of learning and observation of important events [1]. Here’s a breakdown of its significance:
Madujis and the Indian Ocean: The year 1979 is noted for the emergence of “madujis”, which highlighted the importance of the “Jawar Bhata” (likely referring to tidal phenomena or some other event related to water levels) of the Indian Ocean [1]. This suggests that the year was marked by particular occurrences that drew attention to the natural world and its influence.
Destruction and Historical Context: The author notes that the region of India was torn apart by destruction in 1947 and that the “sparks” of this destruction are still felt by new generations [1]. This provides historical context for the year 1979, situating it in a broader timeline of regional conflict and its lasting impacts, implying that the events of 1979 should be understood through the lens of this historical trauma.
The Impact of the 1924 Revolution: The author contrasts the events of 1979 with the revolution led by Kemal Ataturk in 1924, which is described as having a significant positive impact [1]. The 1924 revolution is presented as a model of positive change, a point of comparison for assessing the events of 1979 and other historical moments. The 1924 revolution established a democracy and cut the root of the caliphate system from the world of Islam [1].
Personal and Cultural Significance: The author highlights that, personally and as a Muslim, the revolution of Kemal Ataturk impressed him the most, specifically the shift towards democracy and the rejection of the caliphate system. This underscores how the author’s personal experiences and cultural background shape his perception of historical and political events and provides another lens for understanding the significance of 1979 [1]. The author calls the revolution “amazing.”
Reflection on Intellectual Inheritance: The author uses the observations of 1979 to prompt reflection on the intellectual and conscious heritage of his own culture. This connects the personal experience of the year with the broader theme of intellectual progress and self-evaluation [2]. The author states, “We too, like the Turks, will have to reevaluate our rational and conscious heritage one day” [2]. This highlights the idea that historical moments, such as those of 1979, should prompt societies to reflect on and improve their own intellectual foundations.
In summary, the year 1979 is presented as a significant year for the author, marked by specific events and observations that prompt reflection on both personal and historical experiences. The events of 1979 serve as a point of comparison for both the negative impact of the destruction of 1947 and the positive changes of Ataturk’s revolution, and they highlight the importance of evaluating cultural and intellectual heritage. The author’s cultural and personal experiences contextualize his reflections on the significance of 1979 and other historical events.
Religion, Politics, and Modernity in Turkey
Religion plays a complex and multifaceted role in the events described in the sources, acting as both a source of conflict and a tool for political and social change. Here’s a breakdown of the role of religion:
Critique of Religious Authority: The sources critique traditional religious systems, specifically the caliphate, which is described as a form of “personal dictatorship” [1]. The text praises the Turkish Revolution for cutting the roots of this system, framing it as a positive move towards democracy [1]. This indicates a rejection of religious authority in political governance. The sources emphasize that a “rotten system” based on religion should not be supported. [2]
Religion as a Tool for Political Gain: The text suggests that some leaders use religion for political and social purposes, exploiting religious sentiments to gain support [2]. However, the sources also note that these leaders often recognize the intellectual achievements of those who promote secular and modern ideas [2]. This points to a manipulative use of religion to further personal or national interests.
Rejection of Spiritual Nationality: The sources advocate for a shift away from spiritual or religious nationality to a more patriotic nationality [1]. The text presents this shift as a move toward progress and civilization, implying that religiously-defined national identities are outdated and problematic [1]. The author notes that “your real nationality is not an outdated, conceptual and spiritual nationality but a patriotic nationality like other civilized nations.” [1]
Religious Conflict and Division: The sources briefly allude to the British government’s use of “jihadi slogans” to counter communist influence, highlighting how religion can be manipulated to fuel conflict [3]. The text also notes that the region of India was torn apart in 1947, suggesting religious conflict might have contributed to the destruction, though this is not explicitly stated [1].
The Caliphate Revival: The sources describe the “Tehreek Caliphate” (the Khilafat Movement in India) as a religious movement that caused grief to the Ali brothers, who were leaders in the movement [2]. The text notes that despite the religious fervor of this movement, figures like the Ali brothers did not show sympathy for the caliphate system, demonstrating a critique of the religious system [2].
The Contrast with Modernization: The sources present Ataturk’s revolution as a model of modernization and secularism, contrasting it with religious systems of governance. The revolution is praised for moving the direction of the country to democracy, and it serves as an example of how a nation can successfully modernize while moving away from religious authority [1]. The author indicates that Ataturk’s revolution is the only one of its kind that has lasted for a century, even with attacks from conservatives [4].
Iqbal’s Viewpoint: The text suggests that Iqbal, despite using religion for political means, admired the Turkish Revolution and wanted a similar reevaluation of intellectual heritage [2]. The sources state that Iqbal’s “Reconstruction of Religious Thought in Islam” was based on the modern kayapult (shift in thought) of Ataturk and the Turks [2].
In summary, the role of religion in the described events is complex. While it is portrayed as a powerful force capable of mobilizing people and influencing political outcomes, it is also critiqued for its potential to be used for personal gain and to maintain outdated systems of governance. The sources favor a move towards secular, democratic, and patriotic forms of identity, while acknowledging that religion can have significant impacts on the political landscape, even for people who oppose such religiously-defined systems. The author’s personal experiences are shown to be influenced by these various uses of religion, shaping his perspective on the events he describes.
Rethinking National Identity: A Turkish Model
The author urges a reevaluation of the intellectual and conscious heritage of his own culture, specifically in light of the reforms enacted by the Turkish Revolution [1, 2]. This reevaluation is prompted by the author’s observations and reflections on historical events, particularly the revolution led by Kamal Ataturk in 1924 and the events of 1979 [1]. The author’s desire to reevaluate their intellectual heritage is directly inspired by the Turkish experience of modernizing and secularizing their nation [2].
Here’s a breakdown of what this intellectual heritage entails, according to the sources:
Rejection of outdated systems: The author suggests that their intellectual heritage must be examined in light of the need to move beyond outdated systems, such as the caliphate, and embrace modern, democratic values [1, 2]. The caliphate is described as a form of “personal dictatorship” [1]. This indicates a need to reject systems of governance based on religious authority.
Shift from spiritual to patriotic nationality: The author calls for a move away from a “conceptual and spiritual nationality” to a “patriotic nationality” [1]. This implies a reevaluation of how national identity is defined, advocating for a more secular, civic-based approach rather than one rooted in religious or spiritual affiliations. This is something the Turks have done and that the author believes is necessary.
Modernization and progress: The author views the Turkish Revolution as a model of modernization [1, 2]. This suggests that the intellectual heritage must be reevaluated to align with progress, innovation, and the principles of democracy [1]. The author highlights the Turkish shift to an elected parliament, which offers an alternative to religious forms of governance [1].
Secular values: The text highlights the importance of secularism and the separation of religious and political powers [1, 2]. The Turkish Revolution is presented as a positive example of secularism, and this implies that the author’s intellectual heritage must be reevaluated to incorporate secular values and institutions [1].
Conscious and Rational Heritage: The author specifically refers to the need to reevaluate their “rational and conscious heritage,” which suggests a move towards a more logical, evidence-based, and self-aware understanding of their culture and traditions [2]. This is presented in contrast to outdated religious ideas.
Iqbal’s Influence: The author references Iqbal’s desire for a similar reevaluation, suggesting that even figures who used religion for political means recognized the importance of the Turkish model [2]. The author describes Iqbal’s book, The Reconstruction of Religious Thought in Islam, as being based on the modern shift in thought that came from Ataturk’s revolution [2].
The author urges his readers to consider “what is that intellectual and conscious heritage of ours, which Iqbal wanted to re-evaluate like the Turks?” This indicates that the author’s intellectual heritage includes religious and traditional political thought that must be critically examined [2]. The author suggests that just as the Turks reevaluated their heritage to modernize, so too must his culture reconsider its intellectual inheritance to promote progress and a more relevant, forward-thinking national identity [1, 2]. The author’s focus is on a conscious and rational reevaluation that moves away from outdated, spiritually-defined concepts towards modern and secular forms of governance [1, 2].
Earthly Paradise vs. Fake Paradise
The author contrasts the concept of a “fake paradise” with an “earthly paradise” to emphasize the importance of human effort and achievement in the real world, as opposed to relying on religious promises of an afterlife [1, 2]. Here’s how the author differentiates between the two:
“Fake paradise”: This concept refers to the traditional religious idea of heaven as a reward after death, often presented as a place of eternal bliss and satisfaction [1]. The author implies that this notion of paradise is a “dream-like dream,” suggesting that it is not grounded in reality and does not require any action or effort in the present world [3]. The author uses the term “fake paradise” to indicate that the promise of a heaven after death is not as valuable or meaningful as the achievements that humans can accomplish on earth [2]. The author also suggests that the notion of heaven after death can be used to distract from real issues in this life [1].
“Earthly paradise”: This refers to the idea that a fulfilling and meaningful existence can be created in the real world through human effort and consciousness [1, 2]. This “earthly paradise” is achieved through concrete actions and the application of human intellect, such as the advancements in science, philosophy, and politics [4]. The author also suggests that an “earthly paradise” is achieved through humanitarian deeds and struggles [3]. The text suggests that the wonders created through human struggle make life meaningful and offer real satisfaction, whereas relying on the idea of heaven after death leads to a desolate existence [1]. The author indicates that the “earthly paradise” is a “masterpiece of greatness and human consciousness” [2].
The author contrasts these two ideas by highlighting that the “earthly paradise” is achievable through human efforts and tangible actions that produce concrete results, while the “fake paradise” is merely a hope or a dream with no foundation in reality [1-3]. The text suggests that true progress and satisfaction come from working to improve the world and achieve real-world goals rather than waiting for a promised afterlife [1, 4].
The author uses the contrast between the “fake paradise” and the “earthly paradise” to emphasize the value of human struggle and achievement [1]. The author’s emphasis on human actions and the importance of the real world align with his admiration of the Turkish Revolution, which is presented as a model of progress through human consciousness [2, 4]. He also emphasizes that the true path to a fulfilling life is found in active participation in the world, creating an “earthly paradise” through real achievements, rather than waiting passively for a “fake paradise” after death [1, 3].
Atatürk’s Revolution: A Model for Modern Muslim Nations
Ataturk’s revolution is presented as a highly significant event in the sources, serving as a model for modernization and a rejection of outdated systems [1, 2]. The revolution’s importance is highlighted through several key points:
Rejection of the Caliphate: The revolution is praised for cutting the roots of the caliphate system, which is described as a “personal dictatorship,” from the world of Islam [1]. This act is viewed as a move toward democracy and a rejection of religious authority in political governance [1]. The author sees this as a crucial step for any Muslim nation seeking progress [2].
Shift to Democracy: The revolution moved the country towards an elected parliament, emphasizing a move from traditional, religiously-based governance to a modern, democratic system [1]. This shift to a more secular and representative form of government is a crucial aspect of the revolution’s significance [1]. The text suggests this transition is essential for progress and civilization [1, 3].
Model for Modernization: Ataturk’s revolution is presented as a model of modernization and secularism for other Muslim nations [1, 4, 5]. The author emphasizes that other Muslim societies should follow this example and re-evaluate their own “intellectual inheritance” [2]. The revolution provides a concrete example of how a nation can modernize while moving away from religious authority [1, 2].
Inspiration for Intellectual Reevaluation: The revolution inspired figures like Iqbal to call for a reevaluation of their own intellectual and conscious heritage [2]. The author notes that Iqbal’s book The Reconstruction of Religious Thought in Islam was based on the modern shift in thought that came from Ataturk’s revolution [2]. This reevaluation includes a shift from a spiritual to a patriotic nationality, which is viewed as a move toward progress and civilization [1-3].
Enduring Legacy: Despite attacks from conservative elements, the revolution has endured for a century, demonstrating its strength and importance [4]. The author suggests the revolution’s enduring nature proves its validity as a model for other nations [4, 5]. The text notes that intellectuals who wish to overthrow this revolution are being pushed out of cultural centers, suggesting its continuing influence and popular support [4].
Contrast with “Fake Paradise”: The revolution is aligned with the concept of an “earthly paradise” by emphasizing the importance of human effort and achievement in the real world, as opposed to relying on the idea of a “fake paradise” in the afterlife [1, 6]. This reinforces that Ataturk’s revolution is about creating a better life through real world, tangible actions [6].
In summary, Ataturk’s revolution is significant because it represents a shift towards democracy, secularism, and modernization for Muslim societies. The author uses the revolution as a lens through which to critique traditional religious systems and emphasize the importance of human agency and achievement. The revolution serves as a concrete example of how a nation can successfully modernize while moving away from outdated systems and religious authority, and is presented as an ideal model for other Muslim nations to follow [5].
Atatürk’s Revolution: A Legacy Contested
The sources present a clear contrast in viewpoints regarding Ataturk’s legacy in Turkey, specifically highlighting the tension between supporters of his modernizing reforms and those who seek to undermine them [1]. Here’s a breakdown of the contrasting views:
Positive View: Modernization and Progress [1-4]
Ataturk’s revolution is seen as a positive force for modernization, secularism, and democracy [2, 3].
His actions, such as abolishing the caliphate and establishing an elected parliament, are viewed as essential steps towards progress and civilization [2].
The revolution is considered a model for other Muslim nations seeking to modernize and move away from outdated systems [4].
The enduring nature of the revolution, even a century later, is presented as evidence of its strength and importance [1].
The revolution is aligned with the concept of an “earthly paradise,” emphasizing the importance of human effort and achievement in the real world [2].
Negative View: Conservative Opposition [1]
Conservative elements within Turkey have been actively trying to undermine Ataturk’s revolution for the last quarter century [1].
These groups seek to overturn the liberal, secular, and democratic aspects of the revolution [1].
They are described as trying to “dig the foundations” of the revolution and “topple it down,” suggesting a fundamental opposition to Ataturk’s vision [1].
These opposing viewpoints are not supported by the educated classes in major cultural centers like Istanbul and Ankara, and their proponents are being pushed out [1].
Key Points of Conflict:
Secularism vs. Religious Authority: At the heart of the contrasting viewpoints is the tension between the secular principles of Ataturk’s revolution and the desire of some groups to reassert religious authority in governance [1, 2].
Modernization vs. Traditionalism: The conflict also highlights a clash between the forces of modernization and those who are clinging to traditional, outdated systems and values [2, 3].
Democracy vs. Dictatorship: Ataturk’s revolution is praised for dismantling the caliphate system, described as a “personal dictatorship,” and establishing a democratic parliament. The opposing viewpoint would therefore favor a return to autocratic forms of governance [1, 2].
Overall:
The sources emphasize that despite the ongoing attacks, Ataturk’s revolution and legacy are enduring. The text suggests that the positive view of Ataturk’s legacy is supported by the educated classes and is aligned with the forces of progress. The conflict highlights the ongoing struggle between different visions for Turkey’s future, but the text implies the liberal, secular, and democratic elements of the Turkish Revolution will ultimately prevail [1].
The Author’s Perspective on Atatürk’s Legacy
The author has a strongly positive perspective on Atatürk’s legacy, viewing his revolution as a crucial and transformative event for Turkey and a model for other Muslim nations [1-3]. Here’s a breakdown of the author’s perspective:
Admiration for Modernization and Secularism: The author admires Ataturk’s revolution for its commitment to modernization and secularism [1, 3]. The revolution is seen as a rejection of the outdated caliphate system, which is described as a “personal dictatorship” [1]. This rejection is viewed as a step towards democracy and a move away from religiously-based governance [1].
Emphasis on Democracy and Progress: The author praises Ataturk for establishing an elected parliament, emphasizing a shift towards a modern and representative form of government [1]. This move is seen as crucial for progress and civilization, aligning with the author’s view that an “earthly paradise” is achievable through human effort [1, 4].
Atatürk as a Model for Muslim Nations: The author explicitly presents Ataturk’s revolution as a model for other Muslim nations to follow [2, 3, 5]. The revolution is presented as a concrete example of how a nation can modernize and move away from religious authority [1, 2]. The text suggests that Muslim societies should re-evaluate their own “intellectual inheritance” in light of Ataturk’s achievements [2].
Rejection of Conservative Opposition: The author notes that there are conservative elements within Turkey that have been trying to undermine Ataturk’s revolution for the last quarter century [3]. However, the author makes it clear that these groups do not represent the educated classes and are being pushed out of cultural centers [3]. This suggests the author believes that the revolution’s ideals are ultimately stronger and will prevail [3].
Alignment with “Earthly Paradise”: The author’s view of Ataturk’s legacy is closely linked to the concept of an “earthly paradise” [1]. By emphasizing the importance of human effort and achievement in the real world, the author sees Ataturk’s revolution as creating a better life through tangible actions and progress, rather than relying on the idea of a “fake paradise” in the afterlife [1, 4].
Enduring Significance: The author highlights the enduring nature of Ataturk’s revolution, noting that it has lasted for a century despite attacks [3]. This longevity underscores its importance and relevance, suggesting that its principles of liberalism, secularism, and democracy will ultimately triumph [3].
In summary, the author views Ataturk as a visionary leader whose revolution was a pivotal moment in the history of the Muslim world, offering a path towards modernization, democracy, and progress. The author admires the revolution’s secular and humanistic values, contrasting them with traditional systems of religious authority and viewing them as a way to achieve an “earthly paradise” [1, 4]. The author makes it clear that he believes Ataturk’s revolution will endure and serve as a continuing model for other Muslim nations [3, 5].
The Author’s Assessment of Atatürk’s 1924 Revolution
The author has a strongly positive assessment of Atatürk’s 1924 revolution, viewing it as a pivotal moment of modernization and progress, particularly for the Muslim world [1-3]. The author sees the revolution as a crucial step away from outdated systems and towards a more enlightened future [1, 2].
Here are the key points of the author’s assessment:
Rejection of the Caliphate: The author praises the revolution for dismantling the caliphate system, which is described as a “personal dictatorship,” and replacing it with a more democratic system [1, 2]. This move is viewed as essential for progress and a move away from religiously-based governance [1, 2].
Shift to Democracy: The revolution’s establishment of an elected parliament is seen as a significant step towards a modern and representative form of government [1, 2]. The author emphasizes the importance of this transition for the advancement of society [1, 2].
Model for Modernization: The author presents Ataturk’s revolution as an ideal model for other Muslim nations seeking to modernize [1-3]. The revolution provides a concrete example of how a society can move away from religious authority and towards a secular, democratic system [1-3].
Inspiration for Intellectual Reevaluation: The revolution inspired figures like Iqbal to call for a reevaluation of their own intellectual and conscious heritage [2]. The author notes that Iqbal’s book The Reconstruction of Religious Thought in Islam was based on the modern shift in thought that came from Ataturk’s revolution [2].
Enduring Legacy: The author highlights that the revolution has endured for a century despite attacks from conservative elements [3]. The author also notes that intellectuals who wish to overthrow this revolution are being pushed out of cultural centers [3]. This suggests that the revolution’s ideals are ultimately stronger and will prevail [3].
Alignment with “Earthly Paradise”: The author’s view of Ataturk’s revolution is closely linked to the concept of an “earthly paradise” [1]. By emphasizing the importance of human effort and achievement in the real world, the author sees Ataturk’s revolution as creating a better life through tangible actions and progress, rather than relying on the idea of a “fake paradise” in the afterlife [1].
Contrast with Traditional Systems: The author contrasts Ataturk’s revolution with the “rotten” system of the caliphate, emphasizing the revolution’s modern, forward-thinking nature [2]. The author suggests that the revolution’s rejection of outdated systems is essential for the progress of Muslim nations [2, 3].
Rejection of Conservative Opposition: The author makes it clear that the conservative opposition within Turkey is not aligned with the educated classes, who support the revolution’s values of liberalism, secularism, and democracy [3].
In summary, the author views Atatürk’s 1924 revolution as a transformative event that embodies the ideals of modernization, democracy, and secularism. The author believes it serves as an important model for other Muslim nations to follow in order to move away from outdated systems and create a better future through human effort and progress [1-3]. The author believes that the revolution will endure despite opposition and continue to serve as an inspiration for other Muslim societies [3, 4].
Atatürk’s Revolution and the End of the Caliphate
Atatürk’s 1924 revolution had a profound and decisive impact on the caliphate, effectively dismantling it and fundamentally altering the political landscape of the Muslim world [1, 2]. The sources highlight the following key points regarding the revolution’s impact on the caliphate:
Abolition of the Caliphate: The revolution is credited with definitively cutting “the root of the caliphate system” based on personal dictatorship [1]. This action is portrayed as a major step towards modernity and progress, signaling a clear break from the traditional system of religious authority [1, 2].
Rejection of Personal Dictatorship: The caliphate system is described as a form of “personal dictatorship” [1, 2]. By dismantling this system, Atatürk’s revolution aimed to establish a more democratic and representative government [1, 2].
Shift Towards Democracy: The revolution replaced the caliphate with an elected parliament, moving the country towards a more modern, secular, and democratic structure [1]. This shift is emphasized as a critical step for the advancement of society [2].
End of Religious Governance: The revolution is presented as a rejection of religiously based governance, with a focus on the importance of establishing a secular state [2]. This transition marked a significant change from the traditional role of the caliphate in Islamic societies [2].
Inspiration for Modernization: The dismantling of the caliphate by Atatürk’s revolution is presented as an inspirational model for other Muslim nations seeking to modernize [1, 2]. It demonstrated a move away from outdated systems and towards a more progressive future [2].
Contrast with “Rotten System”: The author contrasts Ataturk’s revolution with the “rotten” system of the caliphate, emphasizing the revolution’s modern, forward-thinking nature [2].
Criticism of Caliphate Supporters: The author notes that during the time of the Caliphate revival movement in India, leaders like the Ali brothers were deeply affected by the caliphate’s weakening. However, the author points out that these leaders never showed sympathy for the system but rather opposition and contempt for it [2].
In summary, Atatürk’s 1924 revolution had a revolutionary impact on the caliphate by abolishing it entirely and replacing it with a secular, democratic system [1, 2]. This action is viewed as a pivotal moment in the history of the Muslim world, setting an example for other nations seeking to modernize and move away from religious rule and personal dictatorships [1, 2]. The revolution is portrayed as a definitive break from the past, with the caliphate system seen as an outdated and oppressive system that was rightly overthrown [1, 2].
Iqbal and Atatürk’s Revolution
The sources suggest that Iqbal viewed Atatürk’s revolution as a significant and positive event, particularly in its implications for other Muslim nations. Here’s a breakdown of Iqbal’s perspective as presented in the sources:
Inspiration for Reevaluation: Iqbal was inspired by Atatürk’s revolution to call for a reevaluation of the intellectual and conscious heritage of Muslim societies [1]. This suggests that Iqbal saw the revolution as a catalyst for critical self-reflection and change within the Muslim world.
Model for Modernization: The author indicates that Iqbal saw the Turkish revolution as a model for other Muslim nations [2]. This suggests that Iqbal believed that Atatürk’s actions offered a concrete path for Muslim societies to modernize and move beyond outdated systems.
Rejection of Outdated Nationalism: Iqbal’s famous “Hindi Anthem” is mentioned in the context of rejecting outdated, conceptual and spiritual forms of nationality in favor of a more patriotic, civic nationalism [3]. This aligns with Atatürk’s revolution which rejected the caliphate in favor of a modern, secular, and democratic state and is presented by the author as a model for other Muslim nations to follow.
Foundation for Intellectual Work: The author notes that Iqbal’s book, The Reconstruction of Religious Thought in Islam, was based on the modern shift in thought that came from Atatürk’s revolution [1]. This suggests that Iqbal saw the revolution as a pivotal moment of change that had far-reaching intellectual and philosophical implications.
Emphasis on National Identity: Iqbal’s view that “Hindi Muslims have Hindi patriotic nationality” [3] aligns with the idea of a modern, secular state, a concept promoted by Ataturk’s revolution. This reinforces the idea that Iqbal saw the revolution as a means for Muslim societies to reframe their national identities in a modern context.
Role Model for Muslim Nations: According to the author, Iqbal wished for other Muslim nations to see the Turkish revolution as a role model [2, 4]. This desire underscores the significant influence that Iqbal believed the Turkish revolution had on the future direction of the Muslim world.
Admiration for the Revolution: The author implies that Iqbal admired the revolution [1], and that Iqbal’s son considered the foundation of Reconstruction of Religious Thought in Islam to be rooted in the kayapult (modern shift in thought) of Ataturk and the Turks [1].
In summary, Iqbal, as portrayed in the sources, saw Atatürk’s revolution as a pivotal event that called for a reevaluation of Muslim societies’ intellectual and national identities. Iqbal believed the revolution offered a model for modernization and progress, advocating for a move away from outdated systems and toward a more secular and democratic future for other Muslim nations [1, 2]. He viewed the revolution as a source of inspiration and intellectual renewal that could guide Muslim societies toward a more progressive future.
Atatürk’s Enduring Revolution
The text assesses Atatürk’s lasting impact as profound and enduring, particularly in the context of his 1924 revolution and its implications for both Turkey and the wider Muslim world. Here’s a breakdown of the text’s assessment:
Enduring Revolution: The text emphasizes that Atatürk’s revolution has stood firm for a century despite attempts by conservative elements to undermine it [1]. This highlights the strength and resilience of the revolution’s foundations. The author notes that the revolution continues to be a source of inspiration for reform, and this enduring legacy is a key aspect of its lasting impact [1, 2].
Model for Other Nations: The text suggests that Atatürk’s revolution was intended to serve as a role model for other Muslim nations seeking to modernize and break free from outdated systems [1-3]. The author notes that Iqbal hoped that other Muslim nations would see the Turkish revolution as a model for reform [2]. This underscores the revolution’s broader impact beyond Turkey’s borders.
Rejection of Conservatism: The text notes that, even a century later, conservative elements in Turkey have tried to dismantle the revolution but have been largely unsuccessful [1]. The author observes that intellectuals who wish to overthrow the revolution are being pushed out of major cultural centers [1]. This suggests that the core values of the revolution, namely liberalism, secularism, and democracy, continue to hold sway and exert a lasting influence [1].
Intellectual and Conscious Heritage: The text highlights that the revolution prompted a reevaluation of intellectual and conscious heritage, not just in Turkey, but also in other Muslim societies [3]. This lasting intellectual impact is attributed to the revolution’s progressive principles.
Continued Relevance: Despite the passing of time, the text indicates that the revolution’s impact is far from diminished, and that it will likely continue to be a guiding force in the future [1]. The text suggests that the revolution will rise again with new lights and ultimately serve as a role model as Iqbal hoped [1, 2]. The author implies that the revolution’s ideals will continue to be relevant and influential [1, 2].
In summary, the text’s assessment of Atatürk’s lasting impact is that his 1924 revolution has been a transformative event with an enduring legacy. The revolution continues to serve as a model for other Muslim nations, while its core values of secularism and democracy remain resilient in Turkey. The text makes it clear that the revolution will continue to be a source of inspiration and influence in the years to come.
The Future of the Turkish Revolution
The author views the Turkish Revolution’s future with optimism and confidence, despite past and present challenges [1]. Here’s a breakdown of the author’s perspective on the revolution’s future, drawing from the sources:
Resilience and Endurance: The author emphasizes that the revolution has endured for a century despite attempts by conservative elements to undermine it [1]. This demonstrates the revolution’s strong foundations and its ability to withstand opposition [1]. This suggests that the revolution’s core principles are deeply ingrained and will likely persist.
Rejection of Conservative Opposition: The author notes that intellectuals who wish to dismantle the revolution are being pushed out of major cultural centers [1]. This indicates that the revolution’s values of liberalism, secularism, and democracy continue to hold significant sway and that those who oppose these values are losing influence [1].
Renewal and Reemergence: The author believes that the revolution will “rise again with new lights,” suggesting that it will experience a resurgence and continue to be a guiding force in the future [1]. This implies that the revolution’s ideals are not static, but rather will evolve and adapt to new contexts while still maintaining its core values.
Model for Muslim Nations: The author believes that the revolution will ultimately serve as a role model for other Muslim nations in their respective territories, as was Iqbal’s wish [1, 2]. This demonstrates the author’s conviction that the revolution’s impact is not limited to Turkey but extends to the wider Muslim world [2].
Iqbal’s Vision: The author states that the revolution’s future aligns with Iqbal’s desire for other Muslim nations to follow Turkey’s example [1, 2]. This connects the revolution’s future with a broader vision of progress and reform in the Muslim world, giving it a sense of purpose that transcends national borders.
Positive Trajectory: The author implies that the revolution’s future trajectory is positive, with the expectation that it will not only endure but also gain renewed strength and influence [1]. The author’s tone is optimistic and projects a sense of confidence in the revolution’s ability to overcome current challenges.
In summary, the author’s view of the Turkish Revolution’s future is highly optimistic. He believes that, despite facing challenges from conservative forces, the revolution will not only endure but also experience a renewal, reemerging with greater strength and influence. The author sees it as a continued source of inspiration and a model for other Muslim nations, thus emphasizing its lasting and widespread impact [1, 2].
These articles from the Al Riyadh newspaper cover a variety of topics, with a significant focus on Saudi Arabia’s celebration of its national Flag Day, highlighting its historical significance and cultural importance. The sources also report on various local and international events, including sports news, cultural initiatives like the Saudi Film Festival and King Abdulaziz Library’s efforts to empower women, and regional developments such as humanitarian aid in Jordan and Tanzania, and the ongoing situations in Palestine and Syria. Furthermore, there are articles discussing economic trends, specifically fluctuations in oil prices, and social initiatives, such as mosque renovations and efforts to combat animal cruelty. Finally, some articles provide local news and features related to Ramadan activities and market developments in Saudi cities.
The Saudi National Flag: A Symbol of Unity and History – Study Guide
I. Core Concepts and Significance:
Historical Roots: Trace the origins of the Saudi flag back to the establishment of the First Saudi State in 1727 and its connection to the Islamic call for unity.
Religious Symbolism: Explain the meaning and significance of the Shahada (“There is no god but Allah; Muhammad is the messenger of Allah”) as the central element of the flag.
The Sword: Describe the addition of the sword and its symbolism of justice and strength. Note the direction of the sword’s blade.
Color Significance: Detail the meaning and importance of the green color of the flag in Islamic tradition and its representation of growth and prosperity.
National Identity: Analyze how the flag serves as a powerful emblem of Saudi national identity, unity, pride, and belonging for its citizens, both within the Kingdom and abroad.
Evolution of the Flag: Outline the key changes and modifications the flag has undergone throughout the history of the Saudi states, including the reign of King Abdulaziz Al Saud.
Respect and Protocol: Explain the regulations and restrictions surrounding the use and display of the Saudi flag, emphasizing the prohibition of lowering or disrespecting it due to its sacred inscription.
International Representation: Describe the role of the flag in representing the Kingdom in international forums, embassies, and diplomatic events, highlighting its significance in asserting sovereignty and projecting influence.
Cultural Influence: Discuss how the flag has become a source of inspiration in Saudi culture, poetry, and artistic expression, embodying national sentiments and values.
Modern Significance: Understand the contemporary relevance of the flag as a symbol of the Kingdom’s enduring values, historical depth, and aspirations for the future, as reflected in Vision 2030.
II. Key Events and Figures:
1727: Establishment of the First Saudi State and the early use of a green flag with the Shahada.
Imam Muhammad bin Saud: His role in establishing the First Saudi State and the initial flag.
1744: Date associated with the consolidation of the First Saudi State and its religious mission.
King Abdulaziz Al Saud: His crucial role in unifying the Kingdom and the modifications made to the flag during his reign, including the addition and positioning of the sword.
1902: King Abdulaziz’s recapture of Riyadh and the raising of the Shahada flag with a new addition.
1926: Unification of Hejaz and the return to a rectangular green flag with the white Shahada.
1932: Official establishment of the Kingdom of Saudi Arabia and the adoption of the current flag design.
1357 AH (1938 AD): Formal regulations issued by King Abdulaziz concerning the raising of the Saudi flag.
1393 AH (1973 AD): Official system of the flag issued, specifying its dimensions and details.
Ali al-Qarni and Rayanah Barnawi: Their 2023 space mission and raising of the Saudi flag in space.
III. Quiz:
What is the central inscription on the Saudi national flag and what does it signify?
Describe the symbolism of the sword on the Saudi flag and when it was formally added.
Why is the Saudi flag always green, and what does this color traditionally represent?
Explain why the Saudi flag is never flown at half-mast, even during periods of national mourning.
How did King Abdulaziz Al Saud contribute to the evolution of the Saudi national flag during his reign?
What is the historical significance of the year 1727 in relation to the Saudi national flag?
In what ways does the Saudi national flag represent the national identity and unity of Saudi Arabia?
Describe the protocol that Saudi embassies follow regarding the display of the national flag in foreign countries.
How has the Saudi national flag served as a source of inspiration in Saudi poetry and culture?
What is the significance of the Saudi flag being raised at international conferences and summits?
IV. Quiz Answer Key:
The central inscription is the Shahada: “There is no god but Allah; Muhammad is the messenger of Allah.” It is the fundamental declaration of Islamic faith and signifies the religious foundation of the Kingdom.
The sword symbolizes justice, strength, and the defense of the Islamic faith and the nation. A single sword below the Shahada was formally added by King Abdulaziz Al Saud to represent a new era of unity and sovereignty.
The Saudi flag is always green because green is a color of great significance in Islam, often associated with Paradise, growth, and prosperity. It also historically represented the banners of early Islamic states.
The Saudi flag is never flown at half-mast because the Shahada inscribed upon it is considered sacred and must always be flown at its full height as a sign of respect for the Islamic creed.
King Abdulaziz Al Saud played a key role in the flag’s evolution by standardizing its design. He initially used a square green flag with the Shahada and a sword, later adopting the rectangular shape and the specific positioning of the sword below the Shahada.
The year 1727 marks the establishment of the First Saudi State. The sources indicate that a green flag bearing the Shahada was used during this early period, signifying the foundational link between the flag and the origins of the Saudi nation.
The Saudi flag embodies national identity by visually representing the Kingdom’s core values: Islam, unity, justice, and historical heritage. It serves as a focal point for national pride and a symbol of belonging for all Saudi citizens.
Saudi embassies around the world raise the Saudi flag prominently to symbolize the Kingdom’s sovereignty, independence, and diplomatic presence. It underscores Saudi Arabia’s standing and influence on the international stage.
The Saudi national flag has inspired numerous expressions of national sentiment in Saudi poetry and culture, serving as a potent symbol of patriotism, loyalty to the leadership, and the nation’s historical journey.
Raising the Saudi flag at international conferences and summits confirms the Kingdom’s presence and influence as a significant political and economic power on the global stage, reflecting its active participation in international affairs and organizations.
V. Essay Format Questions:
Analyze the evolution of the Saudi national flag from the establishment of the First Saudi State to its current design, discussing the key historical events and symbolic changes that shaped its form and meaning.
Evaluate the significance of the religious symbolism embedded in the Saudi national flag, particularly the Shahada, and discuss how this symbolism influences national identity, values, and international relations.
Discuss the ways in which the Saudi national flag serves as a unifying symbol for the diverse population of Saudi Arabia, both within the Kingdom and among Saudis living abroad, considering its historical, religious, and cultural resonance.
Examine the regulations and cultural protocols surrounding the use and display of the Saudi national flag, explaining the rationale behind these rules and their importance in upholding the flag’s sanctity and national significance.
Assess the role of the Saudi national flag in representing the Kingdom on the international stage, particularly in diplomatic relations, international organizations, and global events, and discuss how it projects Saudi Arabia’s image and influence.
VI. Glossary of Key Terms:
Shahada: The Islamic declaration of faith: “There is no god but Allah; Muhammad is the messenger of Allah.” It is the central tenet of Islam and the inscription on the Saudi flag.
Tawhid: The concept of the oneness of God in Islam. The Shahada is a declaration of Tawhid.
Sovereignty: Supreme power or authority; in this context, the flag symbolizes the Kingdom’s independent authority and control over its territory and affairs.
National Identity: A sense of belonging to a nation, sharing common values, culture, history, and often language. The flag is a key visual representation of this identity.
Allegiance (Wala’): Loyalty and devotion to the leadership and the nation, a sentiment deeply connected to the national flag.
Unity (Talahum/Wahda): The state of being united or joined as a whole. The flag symbolizes the unification of the different regions into the Kingdom of Saudi Arabia.
Justice (Adl): Fairness and moral integrity, symbolized by the sword on the flag.
Strength (Quwwa): The capacity to exert force or resist opposition, also symbolized by the sword.
Historical Depth (Al-Umq al-Tarikh): The long and significant history of the Saudi state, reflected in the evolution of the flag.
National Pride (Iftikhar bil-Hawiyya al-Wataniyya): A feeling of satisfaction and esteem associated with one’s national identity, often evoked by the sight of the national flag.
This briefing document summarizes the main themes and important ideas presented in the provided excerpts from the March 11, 2025 issue of the Saudi Arabian newspaper “Al Riyadh.” The analysis focuses on key events, social and cultural discussions, economic updates, international relations, and sports news highlighted in the selected articles.
1. National Identity and “Flag Day”
Theme: The prominent theme across several articles is the significance of the Saudi national flag, particularly in commemoration of “Flag Day.” The flag is presented as a deeply symbolic representation of national unity, the Islamic faith, historical roots, and the Kingdom’s values.
Key Ideas/Facts: Flag Day commemorates the establishment of the first Saudi state in 1139 AH (1727 AD), rooted in the values of unity and Islam.
The flag’s green color, the “Shahada” (declaration of faith: “There is no god but Allah; Muhammad is the messenger of Allah”), and the sword symbolize unity, justice, strength, and the Kingdom’s historical journey.
The current design of the flag was officially adopted in 1393 AH (1973 AD), featuring a green rectangle, the “Shahada” in white, and a drawn sword beneath it pointing towards the flagstaff.
The flag’s history traces back to the banner of the first Saudi state, which was green with the “Shahada.” The sword was later added during the reign of King Abdulaziz to symbolize strength and justice during the unification of the Kingdom.
The flag is more than just a symbol; it embodies national identity, sovereignty, and a rich history. As stated, “The Saudi flag is not just a flag waving in the sky, but a deep-rooted and noble message that carries within it the identity of faith, sovereignty, and ancient history.”
There are strict regulations regarding the use and handling of the flag to preserve its sanctity and respect. “The Ministry of Interior has previously warned about prohibited uses of the Kingdom’s flag, including: raising the Kingdom’s flag when it is faded or in a bad condition, or when it has become too worn from use to remain in service.”
The flag is a source of pride for Saudi citizens, representing their belonging, love, and loyalty to the leadership and the nation. “Every citizen, male and female, cherishes in their hearts the flag bearing the ‘Shahada of Tawhid,’ taking pride in the national identity, and expressing feelings of cohesion, love, and loyalty stemming from the spirit of belonging and allegiance to the leadership and the homeland.”
2. International Relations and Diplomacy
Theme: The excerpts touch upon Saudi Arabia’s active role in international diplomacy, particularly in the Middle East and the ongoing conflict in Ukraine.
Key Ideas/Facts: Meeting with Ukrainian President: Crown Prince Mohammed bin Salman received Ukrainian President Volodymyr Zelenskyy in Jeddah, underscoring the Kingdom’s interest in peace efforts. Zelenskyy acknowledged Saudi Arabia’s “pivotal role” in the Middle East and the world.
Diplomatic Presence: The Saudi flag is raised high at Saudi embassies worldwide, symbolizing the Kingdom’s sovereignty, independence, and diplomatic presence on the international stage. “In all parts of the world, Saudi embassies raise the Saudi flag high to be a witness to the Kingdom’s sovereignty and independence, reflecting its diplomatic presence and confirming its strength and standing on the international arena.”
Role in International Organizations: The Saudi flag is present at international conferences and summits of organizations such as the United Nations (UN), G20, Organization of Islamic Cooperation (OIC), and the Gulf Cooperation Council (GCC), highlighting the Kingdom’s influence as a political and economic power.
Humanitarian Aid: The Saudi flag serves as a symbol of hope and trust in humanitarian aid efforts. Dr. Khalid Al-Subaan of the “Amal” volunteer program noted that the presence of the Saudi flag reassures beneficiaries and enhances the sense of responsibility among Saudi volunteers, projecting a positive national image internationally.
3. Cultural Significance and the Arts
Theme: The excerpts explore the deep cultural significance of the national flag and the role of art, particularly poetry and cinema, in reflecting and promoting national identity.
Key Ideas/Facts: Flag in Poetry: Saudi poetry is rich with verses expressing national belonging, pride, and loyalty, often featuring the flag as a central symbol. “The Saudi flag is considered a symbol of identity and homeland, while poetry is a mirror for expressing feelings, and therefore we find that national poems carry many verses that express feelings of belonging, pride, and many deep meanings…”
Cinema and National Identity: The Saudi film festival, with the theme “Stories Seen and Told,” aims to showcase cinematic creations and highlight Saudi stories, reflecting the Kingdom’s cultural identity. The festival saw significant participation, indicating a growing interest in filmmaking.
Preservation of Heritage: There is a recognition of the importance of preserving and showcasing Saudi Arabia’s rich history and heritage through various initiatives, including the development of historical mosques and the focus on authentic details in Ramadan cultural events.
4. Economic Updates and Investment
Theme: The excerpts provide a glimpse into economic trends, investment strategies, and the oil market.
Key Ideas/Facts: Stock Market Performance: The Saudi stock market experienced a decline, marking its lowest closing since the beginning of December 2024. Analysts advise investors to stay informed, diversify their portfolios, and focus on companies with strong fundamentals.
Oil Market Volatility: Oil prices declined due to concerns about slowing global demand, the impact of US customs duties on China, and increased production from OPEC+. There is continued volatility expected in the oil market.
Investment in Qassim Region: A meeting reviewed the investment strategy in the Qassim region, emphasizing partnerships between the public and private sectors to achieve sustainable development in line with Vision 2030.
“Alec” Company’s Growth: The construction and contracting company “Alec” reported significant annual growth in revenue and workforce, reflecting its strategic expansion in Saudi Arabia and the UAE.
5. Social Initiatives and Community Development
Theme: The excerpts highlight various social initiatives focused on community service, supporting people with disabilities, and preserving Islamic values.
Key Ideas/Facts: Philanthropic Efforts: The Al-Fawzan family is recognized for their extensive charitable work, including supporting social programs, mosque architecture, and establishing centers for autism and comprehensive rehabilitation.
Support for People with Visual Impairments: A successful conclusion of the Ramadan Games for the Visually Impaired was reported, demonstrating the Kingdom’s commitment to inclusivity and sports for all.
Distribution of Quran Copies: Thousands of copies of the Holy Quran were distributed to Umrah pilgrims at King Abdulaziz International Airport in Jeddah, reflecting the Kingdom’s dedication to serving pilgrims.
Development of Historical Mosques: A project led by Crown Prince Mohammed bin Salman aims to develop historical mosques across the Kingdom, preserving their architectural heritage and religious significance in line with Vision 2030’s focus on cultural heritage.
6. Sports News
Theme: The sports section covers local and international football competitions, rallying events, and achievements of Saudi athletes.
Key Ideas/Facts: AFC Champions League: Al-Nassr secured a spot in the quarter-finals of the AFC Champions League, while Al-Taawoun is aiming to advance. Al-Ahli Jeddah is also competing.
European Football: Liverpool gained an advantage in their Champions League tie against Paris Saint-Germain, while Bayer Leverkusen faces a tough challenge against Bayern Munich.
Dakar Rally: Saudi rally champion Yazeed Al-Rajhi proudly raised the national flag at the Dakar Rally 2025.
Achievements of Saudi Athletes: Saudi athletes have achieved significant success in various Asian Games, winning numerous medals and raising the national flag on international podiums. The flag is seen as a symbol and motivator for all Saudi athletes. “The flag is in the core of every athlete, but it is a symbol and a great motivator for all athletes during their participation in international and continental championships, and at all levels and in all sports, for raising the flag on the podium remains the dream of every athlete and a goal that everyone aspires to…”
7. Other Notable Points:
Knesset Bill on the Oslo Accords: A proposed bill in the Israeli Knesset aims to cancel the Oslo Accords, a development with potential implications for the Palestinian territories.
Humanitarian Situation in Gaza: Concerns are raised about the dire humanitarian situation in Gaza following the cutting of electricity supply by Israel.
Increased European Reliance on US Arms: European countries are increasingly relying on US arms imports, driven by the desire to strengthen their defense capabilities.
Critique of Ramadan Drama Series: An opinion piece critiques some Saudi Ramadan drama series for prioritizing visual spectacle over strong narratives and historical accuracy, potentially alienating discerning viewers.
Conclusion:
The selected excerpts from “Al Riyadh” on March 11, 2025, present a snapshot of a dynamic Saudi Arabia actively engaged on multiple fronts. The commemoration of “Flag Day” underscores the deep significance of national identity and unity. The Kingdom continues to play a notable role in regional and international affairs, while also focusing on cultural preservation, economic diversification, and social development in line with its Vision 2030. The sports section highlights the achievements and aspirations of Saudi athletes, further contributing to national pride. Overall, the newspaper conveys a sense of national pride, progress, and engagement with both domestic and global issues.
Saudi National Flag: Symbolism and Significance
Frequently Asked Questions about the Saudi National Flag
What is the significance of the Saudi National Flag, and when did it originate? The Saudi National Flag is a deeply significant symbol of national identity, sovereignty, historical depth, and religious commitment for the Kingdom of Saudi Arabia. Its origins trace back to 1727, coinciding with the establishment of the first Saudi state, and it commemorates the unity and Islamic principles upon which the state was founded.
How has the design of the Saudi National Flag evolved throughout history? The flag’s design has evolved over the centuries alongside the establishment and unification of the Saudi states. The first Saudi flag during the first Saudi state was green with the words “There is no god but Allah; Muhammad is the messenger of Allah” inscribed on it. Later, during the reign of King Abdulaziz Al Saud, a sword was added below the inscription, symbolizing strength and justice. The flag became rectangular and its dimensions were standardized in 1973.
What are the key elements of the current Saudi National Flag and what do they symbolize? The current Saudi National Flag is a green rectangle with a width equal to two-thirds of its length. Across the center is the Islamic creed, the shahada (“There is no god but Allah; Muhammad is the messenger of Allah”) written in white in Thuluth script. Below the shahada is a white, unsheathed sword pointing towards the hoist (flagpole) with its hilt facing downwards. The green color symbolizes growth, vitality, and prosperity. The shahada represents the foundational Islamic belief of the kingdom. The sword embodies justice, strength, and the sacrifices made to unify and defend the nation.
Why is the Saudi National Flag treated with such high respect, and what are some prohibitions regarding its use? The Saudi National Flag holds a sacred status due to its bearing of the shahada, a fundamental tenet of Islam. This religious significance, combined with its representation of national unity and sovereignty, necessitates utmost respect. Prohibitions include lowering the flag to half-mast (as a sign of mourning), allowing it to touch the ground or water, using it in a worn or faded condition, or any use deemed disrespectful to its symbolic value.
Beyond its national symbolism, how does the Saudi Flag function in international contexts such as diplomacy and global organizations? The Saudi National Flag is a powerful tool in Saudi Arabia’s diplomacy and its presence in global organizations. Flown at Saudi embassies worldwide, it signifies the Kingdom’s sovereignty, independence, and diplomatic presence, fostering a sense of belonging for Saudi citizens abroad. Its raising at international conferences and summits, such as the United Nations and the G20, underscores Saudi Arabia’s political and economic influence and its commitment to global issues.
What is the significance of ” يوم العلم ” (National Flag Day) in Saudi Arabia? “يوم العلم” (National Flag Day), celebrated on March 11th, marks the historical significance of the Saudi flag and its enduring value to the nation. It commemorates the day the first Saudi state was founded in 1727, highlighting the flag as a symbol of unity, pride in national identity, and the strong bond between the leadership and the people, rooted in loyalty and belonging.
How is the Saudi National Flag reflected in Saudi culture, arts, and sports? The Saudi National Flag is deeply ingrained in Saudi culture and serves as a source of inspiration across various domains. In poetry, it evokes strong nationalistic sentiments, loyalty, pride, and belonging. In sports, raising the flag at international competitions is the ultimate aspiration of Saudi athletes, representing national achievement and unity. Even in volunteer work and humanitarian aid, the presence of the Saudi flag fosters a sense of responsibility and pride, reinforcing the Kingdom’s values on a global scale.
Can the Saudi National Flag ever be lowered to half-mast, and what is the significance of it being raised in space? Due to the sacred inscription of the shahada, the Saudi National Flag is unique in that it is never lowered to half-mast as a sign of mourning. This unwavering display reflects the enduring principles it represents. The raising of the Saudi flag in outer space in 2023 by Saudi astronauts Ali Al-Qarni and Rayanah Barnawi was a historic moment symbolizing the nation’s ambitions, achievements, and the dedication of its citizens to reach new heights, while still holding their national identity aloft.
Saudi National Flag and Day of the Flag
The sources provided discuss the significance of the Saudi national flag and the designation of March 11th as ‘يوم العلم’ (Day of the Flag) in Saudi Arabia. This day, corresponding to the 27th of Dhul Hijjah, 1355 AH (March 11th, 1937 AD), marks the day when the flag was adopted during the reign of King Abdulaziz Al Saud.
The establishment of ‘يوم العلم’ (Day of the Flag) by a royal decree issued on the 9th of Sha’ban, 1444 AH (March 1st, 2023), emphasizes the profound importance of the national flag. It is seen as a manifestation of the state, its power, unity, national cohesion, and sovereignty. The flag serves as a symbol of the Kingdom’s history, which extends back to its foundation in 1139 AH (1727 AD).
The Saudi national flag is unique and highly revered:
It bears the Shahada (Islamic declaration of faith): ‘لا إله إلا الله محمد رسول الله’ (There is no god but Allah; Muhammad is the messenger of Allah). This central tenet of Islam, symbolizing Tawhid (Islamic monotheism), is fundamental to the Kingdom’s foundation.
Out of deep respect for the Shahada, the Saudi flag is never lowered, even during times of mourning. This distinguishes it from most other national flags.
Its use for commercial or decorative purposes is prohibited to prevent any unintended disrespect towards the sacred inscription. This underscores its esteemed position and sanctity within Saudi society.
The flag embodies enduring national values, stands as a testament to the unification of the nation, and reflects its Islamic identity. Every Saudi citizen holds the flag with the Shahada in their hearts with pride in their national identity.
In recognition of the flag’s significance, the Ministry of Culture issued a guidance manual for its use following the royal decree that designated March 11th as ‘يوم العلم’ (Day of the Flag). This manual details the history of the flag, its applications, protocol, the ‘Saudi Dress of Honor’, and information on other Saudi flags. The establishment of this day and the guidelines for its use highlight the flag’s role as a crucial symbol of national identity and sovereignty.
The Symbolism of the Saudi National Flag
The Saudi national flag holds profound symbolism deeply rooted in the Kingdom’s history, values, and identity. It is far more than just a piece of cloth; it represents the core tenets and aspirations of the Saudi nation.
Key aspects of the flag’s symbolism include:
The Shahada (Islamic declaration of faith): The inscription “لا إله إلا الله محمد رسول الله” (There is no god but Allah; Muhammad is the messenger of Allah) is central to the flag’s meaning. It symbolizes Tawhid (Islamic monotheism), the fundamental principle upon which the Kingdom is founded. The deep respect for this sacred inscription is evident in the fact that the flag is never lowered, even in mourning, a unique characteristic among national flags.
Unity: The flag is a “منارة الهوية ورسالة الوحدة” (beacon of identity and message of unity). It has been a “شاهداً على تاريخ المملكة” (witness to the history of the Kingdom) and carries the “مسيرة توحيدها” (journey of its unification). It remains a “رمزاً لوحدة الوطن” (symbol of the nation’s unity).
National Identity: The Saudi flag “يعد رمزاً أساسياً للهوية الوطنية” (is considered a fundamental symbol of national identity). It reflects the “الهوية السعودية” (Saudi identity) and the meanings of “الانتماء” (belonging) and “الولاء” (loyalty). For Saudi citizens, seeing the flag abroad evokes a sense of pride and connection to their homeland.
Sovereignty and Power: The flag is a “manifestation of the state, its power, … and sovereignty”. It enjoys “احترام واسع” (wide respect) due to its “رمزيتها الدينية والسيادية” (religious and sovereign symbolism). It also signifies “تحقيق القوة، والعزة، والأنفة” (the achievement of power, honor, and pride).
Historical Continuity: The flag’s design has remained consistent over the ages, reflecting the “ثبات مبادئها في المملكة” (steadfastness of its principles in the Kingdom) since its establishment. King Abdulaziz’s adoption of his ancestors’ banner as the national flag further underscores this historical link.
Core Values: The flag embodies enduring national values and the lofty principles upon which the state was founded, including “العدل والتوحيد والقوة” (justice, monotheism, and power), as well as “السلام، والإسلام، والعدل” (peace, Islam, and justice).
Religious Significance: Beyond the Shahada, the flag’s religious symbolism contributes to its high level of respect, making it “أحد أكثر الأعلام الموقرة عالمياً” (one of the most respected flags globally). It represents the “الوحدة الأساسية للعقيدة” (fundamental unity of creed).
Respect and Protocol: The prohibition of using the flag for commercial or decorative purposes highlights its sanctity. The traditions surrounding its use, such as never being lowered and swords being brandished under it in military parades, further emphasize its revered status.
Aspiration and Future: On ‘يوم العلم’ (Day of the Flag), the hopes of the nation’s sons for the continuation of the march towards platforms of honor are renewed under the banner of the flag. It is a “راية التوحيد العزة والعلو والمكانة” (banner of monotheism, honor, highness, and status).
In essence, the Saudi national flag serves as a powerful and multifaceted symbol that encapsulates the Kingdom’s foundational religious beliefs, its journey towards unification, its enduring national identity and values, its sovereignty and strength, and its aspirations for the future.
History of the Saudi Arabian Flag
The history of the Saudi flag is deeply intertwined with the establishment and evolution of the Saudi state, dating back several centuries.
Early Origins (circa 1727 AD): The first Saudi flag emerged with the foundation of the first Saudi state in 1139 AH (1727 AD) during the reign of Imam Muhammad bin Saud. This early flag was green with a white area near the hoist and bore the Shahada. This basic design reportedly carried through the first and second Saudi states.
The Unification Era under King Abdulaziz: During the period of the modern Kingdom’s establishment, King Abdulaziz Al Saud initially carried a square-shaped green flag. This flag featured the phrase “نصر من الله وفتح قريب” (Victory from Allah and an imminent conquest) inscribed on it, along with a sword underneath. Later, this design was modified to include two crossed swords beneath the Shahada (“لا إله إلا الله محمد رسول الله“), which was positioned in the center of the green flag. This design persisted until 1926 AD.
Post-Hijaz Unification (1926 AD): Following the unification of Hijaz, the flag returned to a rectangular shape. It became solid green with the Shahada written in white across its center, without any additional symbols.
The Modern Flag (1938 AD): The flag underwent its final modification in 1357 AH (1938 AD) during the reign of King Abdulaziz. This is the flag that remains in use today. Its dimensions were set with the width equaling two-thirds of its length. The white Shahada remains in the center, and a white, unsheathed sword is placed below it, with its tip pointing towards the fly side and its hilt towards the hoist. This addition of the sword symbolized strength and justice. It’s also noted that King Abdulaziz’s flag was based on the banner of his ancestors.
Throughout these historical developments, the green color has been a constant feature of the Saudi flag. The inclusion of the Shahada from the early stages highlights the foundational Islamic identity of the state. The enduring nature of the flag’s core elements reflects the “ثبات مبادئها في المملكة” (steadfastness of its principles in the Kingdom). The current design, finalized in 1938, stands as a powerful “رمزاً لوحدة الوطن وهويته وتاريخه العريق” (symbol of the nation’s unity, identity, and ancient history).
Saudi National Flag Regulations
The Saudi national flag is subject to strict regulations that underscore its sanctity and significance. These regulations aim to prevent any disrespect or misuse of the Kingdom’s most important symbol.
Key regulations regarding the Saudi flag include:
Prohibition of Lowering (Never Half-Mast): The Saudi flag is never lowered to half-mast, even during periods of mourning. This unique regulation is a mark of deep respect for the Shahada (Islamic declaration of faith) it bears.
Ban on Commercial and Decorative Use: The use of the Saudi flag for commercial, decorative, or advertising purposes is strictly prohibited. This is to avoid any unintended disrespect towards the sacred inscription and to maintain the flag’s dignified status. This includes not printing the flag on merchandise such as shoes or carpets.
Respect in International Settings: The Saudi flag is treated with great care and respect in international events and forums. When displayed alongside other national flags, meticulous attention is paid to ensure no unintended offense occurs.
Legal Protection: The Saudi flag is protected by strict laws within the Kingdom. Additionally, other nations also adhere to protocols to protect the Saudi flag during official events.
Proper Display: The national flag is hoisted at all times on all government buildings and public institutions within the Kingdom and at its diplomatic missions abroad, including during official holidays. Considerations of international courtesy are taken into account regarding its use.
Disposal of Damaged Flags: If a flag becomes faded or is in poor condition, it is not simply discarded. Instead, it must be sent to the official authorities for proper disposal, which involves burning it in a specific procedural manner.
Guidance Manual for Use: The Ministry of Culture has issued a guidance manual for the proper use of the national flag. This manual was created following the royal decree designating March 11th as ‘يوم العلم’ (Day of the Flag) and provides comprehensive information on the flag’s history, applications, and protocol.
Avoiding Disrespectful Contact: It is ensured that the flag does not touch the ground during official events. Similarly, it should not be printed on clothing or products in an inappropriate manner.
These regulations collectively emphasize the profound respect and reverence accorded to the Saudi national flag as a symbol of the nation’s core identity, unity, and faith. The detailed guidelines and legal protections underscore its unique and esteemed position.
The Profound Significance of the Saudi National Flag
The علم (flag), specifically the Saudi national flag, holds immense significance for several profound reasons, as discussed in the sources and our conversation history.
Firstly, the flag’s most prominent feature, the Shahada (“لا إله إلا الله محمد رسول الله”), imbues it with deep religious significance. This declaration of faith is the cornerstone of Islam and symbolizes Tawhid (Islamic monotheism), the fundamental principle upon which the Kingdom of Saudi Arabia was founded. The profound respect for this sacred inscription is underscored by the unique regulation that the Saudi flag is never lowered to half-mast, even in times of mourning.
Secondly, the flag serves as a powerful symbol of national unity. It is referred to as a “منارة الهوية ورسالة الوحدة” (beacon of identity and message of unity) and a “رمزاً لوحدة الوطن” (symbol of the nation’s unity). The flag’s history is intertwined with the “مسيرة توحيدها” (journey of its unification), and it stands as a constant reminder of the Kingdom’s cohesion.
Thirdly, the علم is a fundamental symbol of Saudi national identity. It “يعد رمزاً أساسياً للهوية الوطنية” (is considered a fundamental symbol of national identity). For Saudi citizens, seeing the flag, especially abroad, evokes a strong sense of pride (“فخر واعتزاز”) and belonging (“انتماء”) to their homeland.
Furthermore, the flag represents the sovereignty and power of the Saudi state. It is seen as a “manifestation of the state, its power, … and sovereignty”. Its religious and sovereign symbolism grants it “احترام واسع” (wide respect), both domestically and internationally, making it “أحد أكثر الأعلام الموقرة عالمياً” (one of the most respected flags globally).
The history of the Saudi flag reflects the Kingdom’s evolution and the steadfastness of its core principles. From its early origins to the final design adopted during the reign of King Abdulaziz in 1938 AD, the consistent presence of the color green and the Shahada highlights the enduring Islamic identity and historical continuity of the nation. King Abdulaziz’s choice to base the flag on the banner of his ancestors further emphasizes this historical connection.
The strict regulations governing the use of the flag further underscore its significance. The prohibition of commercial and decorative use, the protocols for its proper display, and the specific procedures for disposing of damaged flags all demonstrate the profound respect and sanctity accorded to it. The issuance of a guidance manual for its use following the designation of March 11th as ‘يوم العلم’ (Day of the Flag) highlights its crucial role as a symbol of national identity and sovereignty. ‘يوم العلم’ itself serves as a dedicated occasion to celebrate the flag’s value and its representation of the Kingdom’s long history since its foundation in 1727 AD.
In essence, the Saudi national flag is far more than a mere emblem. It is a deeply revered symbol that encapsulates the Kingdom’s foundational religious beliefs, its journey toward unity, its enduring national identity and values, its sovereignty and strength, its rich history, and its aspirations for the future. It is a powerful representation of what it means to be Saudi.
The text is a comprehensive C# programming course designed for beginners. It initiates with Visual Studio installation, progresses to basic “Hello World” applications, and incrementally introduces concepts like variables, data types, operators, and control flow statements. The course then transitions to object-oriented programming, covering classes, objects, methods, properties, and their lifecycle management. Furthermore, it explores the .NET Framework Class Library, assemblies, namespaces, and external libraries, providing guidance on accessing and utilizing them in projects. The course also covers data structures like arrays, lists, and dictionaries, as well as LINQ for data manipulation and querying. Finally, it concludes with an overview of important software development concepts like debugging and design patterns.
C# Programming Fundamentals Study Guide
Quiz
Answer each question in 2-3 sentences.
What is the purpose of the Console.ReadLine() method in C#?
Explain the difference between a variable declaration and variable initialization.
What is the difference between the Console.Write() and Console.WriteLine() methods?
What is an integer data type, and what range of values can it typically hold in C#?
Describe the purpose of an if statement in C# and how it controls the flow of execution.
What is the difference between a single equal sign (=) and a double equal sign (==) in C#?
What is a “for loop” used for in programming and explain the three parts of a for loop.
What is an array in C#?
What is a class in C#?
What is a method in C#?
Quiz Answer Key
The Console.ReadLine() method is used to read a line of text entered by the user from the console window. It pauses the program’s execution until the user enters text and presses the Enter key, then returns the entered text as a string.
Variable declaration is the process of naming a variable and defining its data type, while variable initialization is the process of assigning an initial value to a previously declared variable.
Both methods output text to the console, but Console.Write() leaves the cursor at the end of the output, while Console.WriteLine() moves the cursor to the beginning of the next line after the output.
An integer is a whole number (without any decimal or fractional part). In C#, the int data type can typically hold values between approximately -2.147 billion and +2.147 billion.
An if statement is a conditional statement that executes a block of code only if a specified condition is true. If the condition evaluates to false, the code block is skipped.
A single equal sign (=) is an assignment operator used to assign a value to a variable. A double equal sign (==) is a comparison operator used to check if two values are equal, returning a Boolean value (true or false).
A for loop repeatedly executes a code block a specified number of times. It has three parts: (1) initialization, where the loop variable is declared and initialized; (2) the condition that must remain true for the loop to keep running; and (3) the increment (or decrement) applied to the loop variable after each iteration.
An array in C# is a data structure that stores a fixed-size, sequential collection of elements of the same data type. Each element in the array can be accessed by its index.
A class is a blueprint for creating objects that define data (fields) and actions (methods). Classes define the properties and behaviors of the objects that will be created from them.
A method is a block of code that performs a specific task and can be called by name from other parts of the program. Methods can accept input parameters and return a value.
Essay Questions
Explain the importance of understanding data types in C# and how choosing the correct data type can impact the performance and accuracy of a program. Provide examples of common data types and scenarios where each would be most appropriate.
Discuss the role of decision statements (if, else if, else) in programming logic. Provide a detailed example of a scenario where multiple decision statements are used to handle different conditions, and explain how these statements control the flow of execution.
Describe the concept of variable scope in C# and explain how it affects the accessibility and lifetime of variables. Provide examples of local, class-level, and global variables, and discuss the implications of each scope.
Explain the purpose and benefits of using loops (for, while, do-while) in programming. Provide a detailed example of a scenario where a loop is used to iterate through a data structure, perform calculations, and output results, and discuss the different types of loops that could be used in that scenario.
Discuss the key principles of Object-Oriented Programming (OOP) such as encapsulation. Explain how using classes and objects can improve code organization and maintainability.
Glossary of Key Terms
Class: A blueprint or template for creating objects, defining their properties (fields) and behaviors (methods).
Console: A system output window where a program can display text and receive input from the user.
Data Type: Specifies the type of data a variable can hold (e.g., integer, string, boolean).
Declaration: The process of naming a variable and defining its data type.
Expression: A combination of operands and operators that evaluates to a single value.
Field: A variable that is a member of a class or struct.
For Loop: A control flow statement for specifying iteration, which allows code to be executed repeatedly.
If Statement: A conditional statement that executes a block of code if a specified condition is true.
Initialization: Assigning an initial value to a variable when it is declared.
Integer: A whole number without any fractional or decimal parts.
IntelliSense: A code completion feature in Visual Studio that suggests code elements as you type.
LINQ (Language Integrated Query): A feature in C# that provides a unified way to query data from various sources.
Method: A block of code that performs a specific task and can be called by name.
Object: An instance of a class, created from a blueprint or template.
Operand: A value or variable on which an operator acts.
Operator: A symbol that performs an operation on one or more operands.
Property: A member of a class that provides a flexible mechanism to read, write, or compute the value of a private field.
Scope: The region of a program where a variable is accessible.
Semicolon: A character (;) used at the end of a C# statement to indicate the end of the statement.
Statement: A complete instruction in C#, often ending with a semicolon.
String: A sequence of characters.
Variable: A named storage location in memory that can hold a value of a specific data type.
C# Programming Fundamentals with Visual Studio: A Beginner’s Guide
Here’s a detailed briefing document summarizing the key themes and ideas from the provided source excerpts.
Briefing Document: C# Programming Fundamentals with Visual Studio
Overview:
This document summarizes a series of instructional excerpts focused on teaching fundamental C# programming concepts using Visual Studio. The excerpts cover a range of topics, from setting up Visual Studio and creating basic “Hello World” applications, to declaring variables, using decision statements (if/else), understanding operators, working with iteration (for loops), arrays, LINQ, enums and switch statements, methods and debugging techniques. The material is designed for beginners with little to no prior programming experience. The instructor, Bob Tabor, emphasizes hands-on learning and provides practical examples.
Key Themes and Ideas:
Basic Workflow and Environment Setup:
Custom Installation of Visual Studio: The course begins by guiding users through a custom installation of Visual Studio. The narrator explains the selection and installation process, including the option to review license terms for each software component.
Creating a New Project: The initial lessons focus on establishing a basic workflow in Visual Studio. This includes creating new projects using templates (e.g., Console Application). “To begin, we’re going to create a new project. There are a number of different ways to do this, but I’m going to keep it simple and go to ‘File’, ‘New’, ‘Project’”.
“Hello World” Application: The course begins with building a “Hello World” application. The purpose is to familiarize the student with the basic steps of coding such as creating a new project, writing code, testing the application, saving the project, and debugging any errors.
Variables and Data Types:
Variables as Buckets: Variables are explained as “buckets” in computer memory that hold data. “A variable is simply under the hood, a bucket. I guess you could call it in the computer’s memory and you put things in buckets and you dump things out of buckets”.
Declaring Variables: The importance of declaring variables and specifying the correct data type is emphasized. “We have to declare our variables, we have to create those buckets and then give them some label that we can refer to them with from that point on.”
Data Types Covered: int (integers), string (text), and others. The course highlights the importance of choosing the right data type for the data being stored.
Variable Initialization: The course explains how to give a variable a value at the point of declaration. “What I’m doing here is not only declaring the variable, but then I’m initializing its value to whatever we retrieve when we call Read Line. This is called initialization, and initialization is important because you want to give your variables a value as quickly as possible”.
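To make this concrete, here is a small illustrative sketch, not taken from the course itself (the names and values are invented), assuming the statements sit inside the Main method of a console application:
// Declare a variable: create the "bucket", give it a data type and a label.
int age;

// Initialize it as early as possible.
age = 25;

// Declaration and initialization can also happen in a single statement.
string greeting = "Hello";

// Console.ReadLine() returns a string, so the result is stored in a string variable.
Console.WriteLine("Please enter your name:");
string userName = Console.ReadLine();
Console.WriteLine(greeting + ", " + userName + "! Age on file: " + age);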
Decision Statements (if/else):
Conditional Logic: The course covers the use of if statements to execute code blocks based on conditions. “The if statement is called the decision statement because we will decide whether to execute any of the code inside of this inner code block based on this evaluation that we’re going to do after the if keyword”.
Comparison Operators: The use of the double equal sign (==) to evaluate whether a condition is true or false is covered, along with how to distinguish it from the assignment operator, which uses a single equal sign.
else if and else: The course includes the use of else if for additional conditions and else for “catch-all” scenarios.
Nested Decisions: The course also discusses how to nest ‘if’ statements within each other.
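A minimal sketch of the if / else if / else pattern described here, assuming it runs inside Main and that the user has been asked to type a menu choice (the wording is invented for illustration):
Console.WriteLine("Enter 1 to say hello, or 2 to say goodbye:");
string userValue = Console.ReadLine();

// == compares two values; a single = would assign instead of compare.
if (userValue == "1")
{
    Console.WriteLine("Hello!");
}
else if (userValue == "2")
{
    Console.WriteLine("Goodbye!");
}
else
{
    // The "catch-all" branch runs when none of the conditions above were true.
    Console.WriteLine("Sorry, I don't understand that choice.");
}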
Operators, Expressions, and Statements:
Building Blocks of Code: The course defines statements as complete thoughts, expressions as parts of statements, and operators and operands as components of expressions. “Statements are what you call complete thoughts in C#. Typically, one line of code. A statement is made up of one or more expressions and expressions are made up of one or more operators and operands”.
Types of Operators: Mathematical operators (+, -, *, /), assignment operators (=), equality operators (==), comparison operators (>, <, >=, <=), logical operators (&&, ||), and the inline conditional operator (?:) are described.
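As a brief illustration of statements, expressions, operators, and operands (the values are invented; assumed to be inside Main):
int x = 7 + 3;                        // one statement; = and + are operators, x, 7 and 3 are operands
bool isTen = (x == 10);               // equality operator: evaluates to true or false
bool inRange = (x > 5) && (x < 20);   // comparison operators combined with logical AND

// Inline conditional operator: condition ? valueIfTrue : valueIfFalse
string message = inRange ? "x is between 5 and 20" : "x is out of range";
Console.WriteLine(message);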
Iteration (for Loops):
Looping Constructs: The course introduces for loops for iterating through code blocks a specified number of times.
Increment Operator: The i++ increment operator is explained.
break Statement: The break statement is introduced as a way to exit a loop prematurely.
Code Snippets: Code snippets are pre-built pieces of code that help users construct more complex code without needing to remember the syntax perfectly. “To do this it’s real easy. If you can remember I need a for iteration statement just type in the word for. You’ll see that it pops up in the IntelliSense”.
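A small sketch of a for loop with a break, assuming it sits inside Main (the limit of 10 is arbitrary):
// Initialization; condition; increment
for (int i = 0; i < 10; i++)
{
    Console.WriteLine("Current value of i: " + i);

    // break exits the loop early, here as soon as i reaches 5.
    if (i == 5)
    {
        break;
    }
}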
Debugging Techniques:
Breakpoints: The use of breakpoints to pause code execution and examine variables is explained. “To make this work, what I’m going to do is actually set a breakpoint here on this line of code.”
Stepping Through Code: The process of stepping through code line by line to understand the flow of execution is also covered.
Variable Monitoring: The ability to monitor variable values during debugging is highlighted.
Arrays:
Storing Collections of Data: The course addresses how to create arrays to store multiple related values.
Array Declaration and Initialization: The method of declaring the array and initializing the array elements are both discussed.
Array Operations: The discussion covers how to retrieve values from an array using indexing, as well as how to utilize a for loop to print out the values in an array.
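For illustration, a short array sketch combining declaration, initialization, indexing, and a for loop (the names are invented; assumed to be inside Main):
// Declare and initialize an array of four strings in one statement.
string[] names = { "Alice", "Bob", "Carol", "Dave" };

// Retrieve a single value by its zero-based index.
Console.WriteLine(names[0]);   // prints "Alice"

// Use a for loop and the Length property to print every element.
for (int i = 0; i < names.Length; i++)
{
    Console.WriteLine(names[i]);
}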
LINQ (Language Integrated Query):
Querying Collections: The course discusses using LINQ to query and manipulate collections of data (like arrays or lists).
Extension Methods: The method syntax (built on extension methods) and the query syntax available in LINQ are both discussed.
var Keyword: The use of the var keyword to allow the compiler to infer the data type of a variable is explained, especially in the context of LINQ queries. “The var keyword is essential to help us to be able to create these very complex queries, and not have to worry about what the data type of it is that’s returned.”
Anonymous types: The course discusses anonymous types and how those get used as part of a LINQ query.
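A minimal LINQ sketch showing both syntaxes, the var keyword, and an anonymous type. This is illustrative only (the data is invented) and assumes using System; and using System.Linq; at the top of the file:
int[] scores = { 42, 97, 65, 88, 71 };

// Method syntax with a lambda: filter, then sort descending.
var highScores = scores.Where(s => s >= 70).OrderByDescending(s => s);

// Query syntax producing an anonymous type; var lets the compiler infer
// the result type, which has no name we could write out ourselves.
var labelled = from s in scores
               where s >= 70
               select new { Score = s, Grade = s >= 90 ? "A" : "B" };

foreach (var item in labelled)
{
    Console.WriteLine(item.Score + " -> " + item.Grade);
}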
Enums and Switch Statements
Enumerations: Enumerations are a data type that limits and constrains all possible values to only those that are valid and have meaning within the system.
Switch Statement: The course introduces the switch statement as a decision statement that is easier to read than a long if/else chain when there are many potential cases.
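A brief sketch of an enum paired with a switch statement (the Weekday type and messages are invented for illustration):
// An enum constrains a value to a fixed set of named, meaningful options.
// It is declared at class or namespace level, outside of any method.
enum Weekday { Monday, Tuesday, Wednesday, Thursday, Friday }

// Inside a method such as Main, a switch reads more clearly than a long if / else if chain:
Weekday today = Weekday.Friday;

switch (today)
{
    case Weekday.Monday:
        Console.WriteLine("Start of the work week.");
        break;
    case Weekday.Friday:
        Console.WriteLine("Almost the weekend.");
        break;
    default:
        Console.WriteLine("A regular midweek day.");
        break;
}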
Methods:
Defining Reusable Code Blocks: Methods are introduced as reusable blocks of code. “What we’re going to do now is create and talk about methods. Methods are essentially a way to take a series of code statements, give them a name, and then re-execute them over and over again throughout our applications.”
Method Parameters and Return Types: The course explains the use of input parameters and return types for methods.
Overloading Methods: Overloading, the practice of creating two methods with the same name but different method signatures, is also discussed.
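A small sketch of a method with parameters, a return value, and an overload (the Add methods are invented examples, defined at class level alongside Main):
// A method with two input parameters and an int return value.
static int Add(int a, int b)
{
    return a + b;
}

// An overload: the same name with a different signature (parameter types).
static double Add(double a, double b)
{
    return a + b;
}

// Called from inside Main (or any other method of the same class):
int sum = Add(2, 3);            // uses the int version
double total = Add(2.5, 0.75);  // uses the double version
Console.WriteLine(sum + " and " + total);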
Understanding Scope
Scope of Variables: This section explains how the location where a variable is declared impacts its accessibility within the code. “Whenever you declare variable inside of a block of code, that variable is only alive for the life of that code block and any of the interior code blocks or code blocks inside of that code block.”
Accessibility Modifiers: The difference between public and private is touched upon.
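The scoping rules can be sketched in one small program (the field and variable names are invented for illustration):
using System;

class Program
{
    // A class-level field; private limits access to this class,
    // while public would make it visible to other classes as well.
    private static int callCount = 0;

    static void Main()
    {
        int localValue = 10;   // local variable: only alive inside Main

        if (localValue > 5)
        {
            int innerValue = 99;             // only alive inside this inner code block
            Console.WriteLine(innerValue);
        }

        // Console.WriteLine(innerValue);    // would not compile: innerValue is out of scope here

        callCount = callCount + 1;           // the class-level field is visible throughout the class
        Console.WriteLine(callCount);
    }
}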
Assemblies and Namespaces
.NET Assemblies: A discussion about how the .NET framework splits the code into multiple files. “These code files are called .NET assemblies. In fact, even the applications that we build, they’re ultimately compiling into .NET assemblies.”
Namespaces: Used to tell one class apart from another. “The creators needed a way to be able to tell one class from a different class and so they introduced the notion of name spaces and name spaces are like last names for your classes”.
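As an illustrative sketch (the HelloWorld names follow the course's example project), a single file shows how using directives, namespaces, and fully qualified names fit together; the whole project compiles into one .NET assembly:
// A using directive lets you refer to classes by their short names.
using System;

// A namespace acts like a "last name" that distinguishes your classes
// from classes with the same short name elsewhere.
namespace HelloWorld
{
    class Program
    {
        static void Main()
        {
            // Short name, thanks to "using System;" above...
            Console.WriteLine("Hello");

            // ...which is equivalent to the fully qualified name:
            System.Console.WriteLine("Hello again");
        }
    }
}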
Working with Strings
Formatting Strings: How to use formatting codes to format values within strings (e.g., as a percentage) is discussed.
Manipulating Strings: Several methods are described, such as Substring, ToUpper, Replace, and Trim.
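A few of these string methods in a short, invented example (assumed to be inside Main; the exact output of the percentage format depends on culture settings):
string title = "  C# Fundamentals  ";

Console.WriteLine(title.Trim());                    // "C# Fundamentals" (surrounding spaces removed)
Console.WriteLine(title.ToUpper());                 // "  C# FUNDAMENTALS  "
Console.WriteLine(title.Replace("C#", "CSharp"));   // swaps one piece of text for another
Console.WriteLine(title.Trim().Substring(0, 2));    // "C#" (the first two characters)

// A format code renders a number as a percentage with one decimal place.
double progress = 0.257;
Console.WriteLine(string.Format("Completed: {0:P1}", progress));   // e.g. "Completed: 25.7%"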
Creating Classes
Properties: A class contains the data and functionality related to one “thing” in your code, and a property is an individual data point exposed by the class.
Objects: After defining the class, you can create instances of that class.
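A minimal sketch of a class with properties, a method, and an object created from it (the Customer class is invented for illustration; the last four lines would sit inside Main):
// A class groups the data and behaviour for one "thing" in your code.
class Customer
{
    // Auto-implemented properties: the data points exposed by the class.
    public string Name { get; set; }
    public int Age { get; set; }

    // A method: behaviour that belongs to the class.
    public string Describe()
    {
        return Name + " (" + Age + ")";
    }
}

// Creating and using an instance (an object) of the class:
Customer first = new Customer();
first.Name = "Lina";
first.Age = 31;
Console.WriteLine(first.Describe());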
Events
Event Handlers: Used when you want more than one event handler to execute whenever an event is raised.
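A small sketch of an event with two subscribed handlers, both of which run when the event is raised (the Downloader class and messages are invented for illustration):
using System;

class Downloader
{
    // An event based on the built-in EventHandler delegate type.
    public event EventHandler Finished;

    public void Run()
    {
        // ...do some work, then raise the event for any subscribers.
        Finished?.Invoke(this, EventArgs.Empty);
    }
}

class Program
{
    static void Main()
    {
        Downloader job = new Downloader();

        // Two handlers subscribed to the same event; both execute when it is raised.
        job.Finished += (sender, e) => Console.WriteLine("Handler 1: logging completion.");
        job.Finished += (sender, e) => Console.WriteLine("Handler 2: updating the display.");

        job.Run();
    }
}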
Instructor’s Style:
Beginner-Friendly: The instructor aims to simplify complex topics and use analogies to make them more understandable.
Emphasis on Practice: The course encourages viewers to type code along with the instructor to reinforce learning.
Problem-Solving Focus: The instructor emphasizes the importance of developing problem-solving skills by identifying differences in code and debugging errors.
IntelliSense Reliance: The course also highlights how software developers rely on IntelliSense (Visual Studio’s code completion feature) to help them write code quicker.
Target Audience:
The material is designed for individuals who are new to programming and want to learn C# using Visual Studio. No prior programming experience is assumed.
Conclusion:
These excerpts provide a structured introduction to C# programming, covering essential concepts and practical skills for beginners. The instructor’s clear explanations and hands-on approach make the material accessible and engaging for novice programmers.
Visual Studio and C# Development: FAQ
Visual Studio and C# FAQ
How do I perform a custom installation of Visual Studio 2015?
After clicking the “Next” button in the installer, you will see a screen displaying all the different items you have selected. By clicking “Install”, you agree to the license terms for all selected software components. You can click on each item to view its specific license terms. Once you are satisfied, clicking the “Install” button begins the installation of all chosen components.
What is the basic workflow for creating a simple C# application (like “Hello World”) in Visual Studio?
The basic workflow involves creating a new project by going to File -> New -> Project and selecting “Templates” then “C#”. Choose “Console Application,” rename the project (e.g., “HelloWorld” using CamelCase). Type your C# code within the innermost set of curly braces in the program.cs file. Test your application by running it, and save your project frequently.
What are variables in C# and how do I declare them?
Variables are like “buckets” in the computer’s memory that hold data. To declare a variable, you must specify its data type (e.g., int for integers, string for text) and give it a name (e.g., int x;, string myName;). This tells the compiler to allocate the appropriate amount of memory for that type of data.
What is the difference between the assignment operator (=) and the equality operator (==) in C#?
The single equal sign (=) is the assignment operator. It assigns the value on the right-hand side to the variable on the left-hand side (e.g., x = 7;). The double equal sign (==) is the equality operator. It compares the values on both sides and returns a boolean value (true if they are equal, false otherwise) (e.g. if (userValue == "1")).
What is an “if” statement and how does it work?
An if statement is a decision statement that allows you to execute a block of code only if a certain condition is true. It can be followed by else if statements to check additional conditions, and an else statement to execute a block of code if none of the conditions are true.
What are operators and operands in C#?
Operators are symbols that perform actions on operands. Operands are the data that operators act upon. For example, in x = 7 + 3;, + is the addition operator, = is the assignment operator, and x, 7, and 3 are operands. Operators can perform arithmetic, comparisons, logical operations, and more.
What is a “for” loop and how do I use it?
A for loop is an iteration statement that allows you to repeatedly execute a block of code a specific number of times. Its syntax typically includes an initialization (declaring a variable), a condition (determining when to stop looping), and an increment/decrement (modifying the variable after each iteration). (e.g. for (int i = 0; i < 10; i++) { Console.WriteLine(i); })
What are classes and objects in C# and why are they important?
A class is a blueprint for creating objects. It defines the properties (data) and methods (actions) that an object of that class will have. An object is an instance of a class. Classes and objects are fundamental to object-oriented programming, allowing you to organize code into reusable and manageable units. They help in modelling real-world entities and their interactions within your program.
Customizing Your Visual Studio Installation
A custom installation of Visual Studio allows you to select the specific features, languages, and tools that you want to install, rather than installing everything by default. The source recommends selecting the “Custom Option” during installation to ensure you obtain the necessary packages and libraries for your desired applications.
Key aspects of a custom Visual Studio installation:
Programming Languages: You can choose to install additional programming languages such as Visual C++, Visual F#, and Python Tools for Visual Studio. By default, only C# and Visual Basic templates are installed.
Windows and Web Development: Options include ClickOnce Publishing Tools, SQL Server Data Tools, PowerShell Tools for Visual Studio, and Silverlight Development.
Universal Windows App Development: To develop universal windows applications, you must ensure that you have the tools, emulators, and SDK. While you can install the Windows 10 SDK later, it is easier to install these during the initial Visual Studio installation.
Cross-Platform Mobile Development: If you want to develop applications for Windows Phone, iOS, and Android using C#, you can select the cross-platform mobile development tools for the Xamarin platform. This option includes all the emulators as well.
Additional Tools: You can install Git for Windows and the GitHub extension for Visual Studio to integrate with GitHub source control projects.
Keep in mind that selecting additional options will increase the installation size of Visual Studio. Before installing, you can review each item to ensure you have all the components, tools, and SDKs necessary for your development platforms of choice.
C# Hello World Application: A Beginner’s Guide
The “Hello World” application is a simple program designed to demonstrate the basic workflow of writing code in C#. It involves printing the words “Hello World” to a console window. The purpose of this exercise is to familiarize you with the steps involved in creating a new project, writing code, testing the application, handling errors, and saving the project.
Key aspects of creating a “Hello World” application in C#:
Creating a New Project: To begin, navigate to “File,” then “New,” and then “Project” in Visual Studio to open the new project dialog. Select “Templates,” then “C#,” and choose the “Console Application” option. Rename the project to “HelloWorld” (using the naming convention of capitalizing the first letter of each word without spaces) and click “Okay”.
Writing the Code: Locate the innermost set of curly braces {} within the program.cs file. Inside these braces (approximately lines 13 and 14), type the following code:
Console.WriteLine("Hello World");
Console.ReadLine();
Console.WriteLine is code from the .NET Framework class library that displays text in the console window. Console.ReadLine() tells the application to wait for user input before continuing execution. Commenting out the Console.ReadLine() will cause the application to execute the WriteLine command and then exit immediately.
Running the Application: After writing the code, run the application to ensure it prints “Hello World” to the console window. If there are any errors, Visual Studio will typically indicate them with red squiggly lines, providing clues to identify and fix the issues.
Precision in Syntax: Writing C# code requires precision. Visual Studio provides assistance by highlighting potential issues and offering suggestions. If you encounter problems, compare your code character by character with a known correct version to identify any discrepancies.
Understanding the Code: The code needs to be written in the correct place, specifically within the opening and closing curly braces of the innermost set as defined by the boilerplate code. These curly braces define code blocks, which have names and purposes. The first code block is named “Main”, also known as a method. The main method lives inside another set of curly braces with the name “program”, which is a class. A class is a container for all the methods of the application. There is another set of curly braces with the name “HelloWorld”, which is a namespace. A namespace is another way of organizing code.
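Putting those pieces together, the boilerplate looks roughly like the following sketch (names follow the lesson's HelloWorld project; comments mark each code block):
using System;

// The namespace: one way of organizing code.
namespace HelloWorld
{
    // The class: a container for the application's methods.
    class Program
    {
        // The Main method: the code block where your statements go.
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World");   // prints to the console window
            Console.ReadLine();                 // waits for the user before exiting
        }
    }
}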
By following these steps, you can create a simple “Hello World” application and familiarize yourself with the basic C# development workflow.
Visual Studio: Creating C# Console Applications
When creating a new project in Visual Studio, there are several key steps and considerations. The following points provide an overview of the process:
Initiating a New Project: Start by navigating to File > New > Project in Visual Studio to open the new project dialog.
Selecting a Template: In the new project dialog, select Templates > C#, and then choose the Console Application option. The number of items displayed may vary based on the Visual Studio version and edition.
Naming the Project: Rename the project to a desired name, such as “HelloWorld”. A common naming convention is to capitalize the first letter of each word without spaces (e.g., HelloWorld).
Understanding the Project Structure: Visual Studio creates a starting point for a Console Window Application, including a file named program.cs with boilerplate code. The code is written inside the innermost set of curly braces {} within the program.cs file. These curly braces define code blocks, such as methods, classes and namespaces.
Project Files and Location: By default, Visual Studio stores projects in your Documents folder, under a folder corresponding to your Visual Studio version (e.g., Visual Studio 2015), and then in a Projects folder. This location can be customized.
Projects and Solutions: Files and settings are organized into projects. A project is compiled into a single .NET assembly.
One or more projects are organized into solutions. Solutions can contain multiple related projects.
Opening Existing Projects: Existing projects can be opened from any location on your computer via File > Open > Project/Solution.
Project Files on Disk: The projects and solutions are stored as files on your hard drive:
The .sln file (solution file) contains information about all projects under the solution.
The .csproj file (C# project file) contains references to project files, settings, and metadata.
The bin folder stores the binary (compiled) version of the application.
References: Creating a new project using a project template automatically creates references to files in the .NET Framework Class Library. These references can be viewed under the references node of the project in the Solution Explorer.
Troubleshooting and Resolving C# Errors in Visual Studio
When learning C#, it’s common to encounter errors, and understanding how to identify and fix them is crucial. Visual Studio provides several tools and visual cues to help in this process.
Common types of C# errors and how to address them:
Build Errors: These occur when Visual Studio is unable to compile your code. A dialog box may appear, stating, “There were build errors. Would you like to continue and run the last successful build?” Always select “no” to review the errors.
Error List: After a build error, a list of errors is displayed in the error list. Double-clicking on an error will typically put the mouse cursor on the line where the problem exists.
Red Squiggly Lines: Visual Studio uses red squiggly lines to highlight areas of code that likely contain errors. These lines serve as visual cues, indicating where you should focus your attention to identify and correct issues. Blue squiggly lines are similar, also indicating areas that may need attention.
Incorrect Code Block: C# commands must be placed within the correct code blocks, typically defined by curly braces {}. An error may occur if code is not placed between the innermost opening and closing curly braces.
Missing Semicolon: Just like a sentence needs a period, each C# statement needs a semicolon ; to mark the end of a complete instruction. The error message “Semicolon expected” indicates that a semicolon is missing at the end of a line of code.
Syntax Errors: These occur when the code doesn’t follow the rules of the C# language. The error message might say, “Syntax error, something expected”.
Case Sensitivity: C# is case-sensitive, meaning that Console is different from console. Ensure that you match the capitalization exactly.
Incorrect Spelling: Ensure that all words are spelled correctly, such as WriteLine instead of Writeline.
“The name ‘X’ does not exist in the current context”: This error indicates that a variable or name has not been declared or is not accessible in the current scope. This can happen if a variable is used before it is declared or if it is declared within a scope that is not accessible from the current location.
Undeclared Variables: If you comment out the declaration of a variable (e.g., int x;), the compiler will not recognize subsequent uses of that variable. Ensure that all variables are properly declared before use.
Missing References: If you are utilizing a class that is not recognized, it may be because you are missing a reference to the assembly in your project, or you are missing a using statement. Press Ctrl+. (Control period) to have Visual Studio suggest the appropriate using statement.
Compare With Correct Code: If you are struggling to identify the issue, compare your code, character by character, with a known correct version. The source code from the lessons can be downloaded and opened in a separate copy of Visual Studio to compare.
Utilize Visual Studio: Take advantage of the tools in Visual Studio to help identify and resolve errors.
Scope: Variables declared inside a code block (defined by curly braces) are only accessible within that scope. If you need to use a variable outside of the code block, it must be declared outside of it.
Defensive Coding: Code defensively to anticipate and handle potential issues, especially those related to user input, file access, and network connections. Use try-catch blocks to manage exceptions and prevent application crashes.
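As a rough illustration of the scope and defensive-coding points above, the following sketch could sit inside Main; the variable name and prompt text are just placeholders for illustration:

    int number = 0;                              // declared in the outer scope so it is still usable after the try block

    Console.Write("Enter a whole number: ");
    string input = Console.ReadLine();

    try
    {
        number = int.Parse(input);               // throws a FormatException if the input is not a valid integer
    }
    catch (FormatException)
    {
        Console.WriteLine("That was not a whole number; using 0 instead.");
    }

    Console.WriteLine("You entered: " + number);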
By paying close attention to error messages, utilizing the tools available in Visual Studio, and understanding the syntax rules of C#, you can effectively troubleshoot and resolve errors in your code.
C# String Manipulation Techniques and Best Practices
String manipulation involves modifying and formatting strings in C#. There are various techniques available for inserting special characters, formatting numbers and dates, changing aspects of strings, and searching or replacing items within strings.
Key techniques for string manipulation in C#:
Inserting Special Characters The backslash character \ is used to escape or insert special characters into literal strings. For example, to include double quotes within a string, use \” before each double quote. To insert a new line, use \n.
To use a backslash character itself, either use another backslash character to escape it \\, or add the @ symbol in front of the literal string. This tells C# to use backslash characters as true backslash characters and not as escape sequences.
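A small sketch of the escape sequences and the verbatim @ prefix described above (the example strings are arbitrary):

    string quoted = "She said, \"Hello\" to the class.";   // \" inserts double quotes
    string twoLines = "First line\nSecond line";            // \n inserts a new line
    string path1 = "C:\\Users\\Public\\Documents";          // \\ inserts a single backslash
    string path2 = @"C:\Users\Public\Documents";            // @ treats backslashes literally

    Console.WriteLine(quoted);
    Console.WriteLine(twoLines);
    Console.WriteLine(path1);
    Console.WriteLine(path2);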
String Formatting with string.Format This method is used to insert values into a string template using replacement codes. The number inside the replacement code {} corresponds to the argument passed into the string.Format method.
Replacement codes can be reused multiple times or used in a different order.
Special formatting can be applied within the replacement code. For example, {0:C} formats the value as currency, and {0:N} adds commas and decimal points for large numbers. Using {0:P} will display a value as a percentage.
Custom formats can be created using pound symbols # to represent each digit.
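A brief sketch of string.Format with the replacement codes mentioned above; the sample values are arbitrary, and the exact output depends on your culture settings:

    decimal price = 1234.5678m;

    Console.WriteLine(string.Format("Price: {0} and again {0}", price));   // a replacement code can be reused
    Console.WriteLine(string.Format("Currency: {0:C}", price));            // e.g. $1,234.57
    Console.WriteLine(string.Format("Number: {0:N}", price));              // e.g. 1,234.57
    Console.WriteLine(string.Format("Percent: {0:P}", 0.125));             // e.g. 12.50%
    Console.WriteLine(string.Format("Custom: {0:#,#.##}", price));         // custom format built from # symbols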
String Manipulation Methods Strings have built-in helper methods for manipulation.
Substring(): Extracts a portion of the string, starting at a specified position. You can specify the starting position and the number of characters to extract.
ToUpper(): Converts the entire string to uppercase.
Replace(): Replaces all occurrences of a specified character with another character. For example, spaces can be replaced with double dashes.
Remove(): Removes a specified number of characters from the string.
Trim(): Removes leading and trailing spaces from a string. There are also methods to trim only the beginning or ending spaces.
String Length The Length property can be used to determine the number of characters in a string.
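As a quick sketch of these helper methods and the Length property (the sample text is arbitrary):

    string name = "  Bob Tabor  ";

    string trimmed = name.Trim();                  // "Bob Tabor" - leading and trailing spaces removed
    Console.WriteLine(trimmed.Length);             // 9 - the number of characters in the string
    Console.WriteLine(trimmed.ToUpper());          // "BOB TABOR"
    Console.WriteLine(trimmed.Substring(0, 3));    // "Bob" - start at position 0, take 3 characters
    Console.WriteLine(trimmed.Replace(" ", "--")); // "Bob--Tabor" - spaces replaced with double dashes
    Console.WriteLine(trimmed.Remove(3));          // "Bob" - everything from position 3 onward removed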
Efficient String Building with StringBuilder When performing a lot of string concatenation or manipulation, using the StringBuilder class is more efficient than repeatedly concatenating strings directly. Strings are immutable, so each concatenation creates a new string object in memory. StringBuilder avoids this by providing an Append method to modify the string in place.
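A minimal StringBuilder sketch, assuming the statements sit inside Main:

    // Requires a using System.Text; directive at the top of the file.
    StringBuilder builder = new StringBuilder();

    for (int i = 1; i <= 5; i++)
    {
        builder.Append("Line ");
        builder.AppendLine(i.ToString());          // appends the value followed by a new line
    }

    string result = builder.ToString();            // convert back to a regular string when finished
    Console.WriteLine(result);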
By using these techniques, you can effectively manipulate strings in C# for formatting, data processing, and display purposes.
C# Fundamentals for Beginners
The Original Text
>> I am Bob Tabor with Developer University, and I welcome you to this course covering the fundamentals of the C-sharp Programming Language and Programming topics in general. Designed specifically for absolute beginners to programming. Now, if you’re already an experienced Software Developer coming from another Software development platform programming language, then frankly, this series of lessons will move much too slowly for you. You might be better served to find another resource to use as a starting point. One with you the experienced beginner to C-sharp in mind and Microsoft Virtual Academy has many great courses designed for people and all skill levels, so I recommend that you start your search there. However, if you are completely new to programming and you’re new to the C-sharp programming language and you’re new to building applications on the Windows platform, then this perhaps is the best place for you to start. Not only will you and I work together to learn the syntax of C-sharp, but I’m going to take the time to walk through everything that we do together. I’ll explain what we’re doing, but more importantly, I want to explain why we’re doing it, the thought process behind it. I’m going to try to anticipate the questions you might have anticipate the problems that you might run into as you’re typing your very first lines of code into the code window or as you’re working through some of the exercises that we’ll work through together. I’ve literally taught hundreds of thousands people, maybe even millions of people C-sharp over the past 14 years. That’s no exaggeration. This includes children as old as eight years old and as young as eight years old from virtually every corner of the world. They’ve all learned from a version of this course, and I know you can learn too. In fact, this is the sixth generation of this very course, dating all the way back to 2005. And over the years, I’ve incorporated feedback from thousands of students feedback and suggestions on how to improve the course. I’ve incorporated those in an effort to make sure that this is the very best effort that I can put forward to help you get your feet well with C-sharp. Now I’ll only make one real assumption as we begin this course, and that’s that you already have some version in addition to Visual Studio already installed on your local computer and you’re ready to write your very first lines of code. Now, if you don’t already have Visual Studio installed, then please by all means, visit Visual Studio.com, where you’ll learn about the many free and commercial editions of Visual Studio that are available. What the differences are. Now, personally, I used Visual Studio 2015 Community Edition, one of the free versions of Visual Studio that are available on Visual Studio.com, and I want to emphasize that you can use any additions and version of Visual Studio with these lessons. Now, there might be tiny User Interface differences between what you see on my screen and what you see on your screen as you work through the videos. However, I’m not going to be focusing on any specific features of Visual Studio, so hopefully that won’t prevent you from following along, no matter what. There will be other courses on Microsoft Virtual Academy that will demonstrate the power of Visual Studio, all the features that Visual Studio has to offer and explain the differences between editions and versions of Visual Studio. But I won’t be focusing on that in this course. 
I’m going to focus specifically on the basics of the C-sharp Programming Language itself. And what I will demonstrate will be true no matter which version or edition of Visual Studio that you choose to use, and that’s great news, because as long as C-sharp exists, these lessons should still be valid and useful to you, no matter what. So to get the most out of this course or any course that you find online, you really should become an active learner and that takes several different forms. First of all, you should attempt to follow along closely and do what I call getting your hands dirty in the code. Actually writing the code that I’m writing on screen, you’re writing it along with me. All right, there’s no better way to learn how to code than actually write code yourself. It’s like suggesting that somebody learn how to play the guitar without ever touching a guitar. You’d think, Well, that’s virtually impossible. Typing in the code yourself will give you insights that merely watching videos won’t, so pause the video, rewind the video, re-watch portions of the videos as you need to. I’m going to make the code available for download, and you’re welcome to it, and you can use that to compare the code that you write versus the code that I’ve written in the videos. But you really should be typing in everything on your own, in your own local copy of Visual Studio running on your desktop. Also, don’t rush through this course. If something doesn’t make sense again, pause the video. Rewind the video. Re-watch those portions that don’t make complete sense at first. Sometimes a second or third viewing, focusing more specifically on what’s going on around the screen and on the words that I’m saying can help. Being an active learner also means that you’re taking control of the process of learning so if I say something or do something that doesn’t completely make sense, by all means find a second or a third resource that can help you. Maybe it’s an article out on msdn.Microsoft.com or other videos on Channel nine or Microsoft Virtual Academy, but make sure you search out those resources that resonate well for you. If you’re interested in even more comprehensive version of a C-sharp training course that covers a lot more ground in more depth, complete with dozens of coding challenges and over 30 hours of video instruction, then please visit my own website. Devu.com Developer University You’ll also find many other training courses that I’ve created, designed specifically to help you become a professional C- sharp developer someday. Furthermore, over time, as we go through this course and as I begin to fill questions about it, I might add some study resources and additional free content related to the topics in the course that you’re currently watching right now. That’s another reason to be sure to visit me at devu.com, now like I said earlier, if you’re new to programming, I’m really excited for you. Learning to write applications is really one of my life’s passions. It’s extremely gratifying to breathe life into your imagination and watch your creations come to life and watch other people actually then use your applications. You’re embarking on a really exciting journey that’s immersive. It’s personally and professionally rewarding and best of all, I know you can do this again. I’ve seen so many people start off where you’re at right now, and they might even be working professionally, writing code for a living or building real applications that are being sold in App Stores like the Windows Store. 
If you’ve ever gotten stuck in the past when trying to learn how to program, I promise you that if you put in the time and you put in the effort and you work along with me as we work together, we’re going to build the knowledge of C-sharp that you need to be well equipped to move on to more advanced tutorials where you can learn how to build your own Web Applications, Windows Applications, Windows Store Applications, Cloud Services, Video Games and even applications that will run on iOS and Android using C-sharp. Now, assuming again that you have some version in addition to a Visual Studio already installed and you’re ready to go, then we’re going to begin writing C-sharp in the very next lesson. I hope you’re excited because I really am. This is so much fun. Let’s go ahead and get started. We’ll see you there. Thank you. >> Let’s take a look at how to install Visual Studio using the custom option for this example, we’ll use the Community Edition of Visual Studio 2015 in order to get it. Simply visit visualstudio.com and click on the ”Download Community 2015″ button. Once we’ve clicked on the download, it’ll download to our computer, and it’s a Web installer, so we click on the “Run” button and it will initiate the installation routine for Visual Studio Community 2015. Once we have the option screen available, it’s time to start looking at customizing the Installation of Visual Studio. For the most part, the default allows you to create web and desktop type applications. But if you want to create different styles of applications or include more languages then the ”Custom Option” is what you should be choosing. I always recommend selecting the ”Custom Option” for the Installation of Visual Studio 2017, to ensure that you’re getting the packages and libraries that you need to create the applications you may wish to use. By selecting ”Custom” and clicking on the ”Next” button, we are now brought to the screen and we can select the different features. The first option is programming languages, and if we click the ”Arrow” to expand it, we can see that we have Visual C++, Visual Fsharp and the Python Tools for Visual Studio that are additional programming languages that will get Installed if you select this option. Remember, by default, Visual Studio Community Edition will only Install C-sharp in Visual Basic Templates. Also notice under Visual C++, we have options for the common tools, the Microsoft Foundation Classes and then windows XP support for C++. For my purposes, I’d like to have all of my programming languages available to me, because I create projects using the different languages all the time. I’m going to select ”Checkbox” next to programming languages to install all of those programming types. Also, under windows and web development, we can choose various options here for things such as the ClickOnce Publishing Tools, SQL Server Data Tools, PowerShell Tools or Visual Studio, Silverlight Development, etc. Here’s a very important component, if you want to develop universal windows applications, we need to ensure that we have the tools, the emulators and the SDK. Now you can choose the “Default” Install a Visual Studio, and then come back and Install the Windows 10 SDK at a later time, and that will include the tools, the SDK and the Emulators for you. But it’s so much easier to install these during the Installation of Visual Studio. Please note that it will increase the Install size of the application, so the toolset will be much larger. 
Again, depending on what it is that you want to do, you may want to select ”Universal Windows App Development Tool Kit” PowerShell Tool for Visual Studio, rather if you want to be using PowerShell tools within your applications. If you need backward compatibility for Windows 8.1, and Windows Phone 8.0 And 8.1, you can select this option. Also, there are some common tools or Cross Platform Mobile Development Tools. These are important if you want to develop applications using the Xamarin platform. Xamarin is a cross platform tool that allows you to create applications for windows phone, for iOS devices and for android devices by using the C-sharp language in Visual Studio. All of these tools are available for the cross platform mobile development using Xamarin platform. It includes all of the emulators as well. Again remember, it will increase the size of the Install base for Visual Studio. You might also notice that because I selected the cross platform mobile development tools, we now have a little box inside the Windows 8.1 and Windows phone tools. If we expand that, we’ll see that it has included tools and Windows SDK, and the reason it does that, is because there’s a potential that you may want to target Windows phone 8.0 or 8.1 applications. The tools and SDKs will also get Installed. At the same time, the common tools checkbox includes a little square box, indicating that we have also added another component here, and that is the Git for windows. We can install Git, which is your source control, GitHub extension for Visual Studio, so that you can integrate with GitHub source control projects, and then an extensibility tools update three for Visual Studio as well. You’ll notice that by selecting all of these options, set up can require up to 48 gigabytes across all of the drives that you will Install it on. Again, review each of the items that you have selected to ensure you have all the necessary components tools SDK for your development tools of choice or platforms of choice, and then select the ”Next” button. Once you do, you basically see equipment or selected features screen that will tell you, all of the different items that you have selected, and by clicking ”Install” you agree to the license terms of all the software components. If you’re not sure what those are, each one of the items that has license terms allows you to click on it to view those. Once you’re satisfied with it, click the ”Install” button in Visual Studio starts installing all of the components that you have selected. This is a quick overview, of how to perform a custom installation of Visual Studio 2015. >> Hi, I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, I want to build a super simple C-sharp Application. I want you to follow along. It’s a Hello World Application, meaning that we’re merely going to print out the words Hello World to a Console Window, and the point of this exercise is just to show you the basic workflow. I’m not going to attempt to even explain why we’re doing what we’re doing. The focus will be on, what I’m going to do next and how I’m doing it? In other words, I want you to focus on the basic workflow that’ll be the same for all the applications we will build in this course, and pretty much every Application you’ll ever build using C-sharp. Things like how to create a new project. Where do you type in your C-sharp code? 
How do you test your application to make sure that it’s running correctly, and what do you do whenever you have an error in your code? How do you save your project? Things of that nature. For now, just try to follow along. Don’t worry if something doesn’t make a lot of sense at this point, that’s really what the rest of this course is for. In the next few lessons. After this one, we’re going to dissect this tiny little application that we built, and I’m going to explain at that point why we did what we did and then what does the code mean and why it’s doing what it’s doing. Just a quick reminder, like I said in the previous video, the introduction of this course, I’m going to assume that you have some version and some additional Visual Studio already Installed, even if your Visual Studio looks a little bit different than mine does on camera here. Don’t be overly concerned about that. The basics are the same, no matter what, I promise. Let’s go and get started here. To begin, we’re going to create a new project. There are a number of different ways to do this, but I’m going to keep it simple and go to ”File” ”New” ”Project” and selecting that menu option will open up the new project dialog. Now, chances are, the number of items that you see here in the center part will be dramatically different than the items that I see based on which version, in addition, a Visual Studio that you have installed. However, you should be able to select ”Templates” and then select ”C-sharp”, and one of the options should be a Console Application. I want you to select that, and then we’re going to rename this project to HelloWorld. Now notice I use a little naming convention, where I use a capital H in Hello, and a capital W in world, and I don’t use a space in between the two words. Now that’s just the naming convention that I came up with to help me identify projects a little bit easier. Something I recommend that you follow. Shouldn’t have to make any other changes in this dialog. I’m going to go ahead and click the ”Okay” button, and Visual Studio will go off and now create the starting point of a Console Window Application for us, and so now you should see in this main area, in this text area, a file opened called program dot.cs and there’s some code here that is already generated for us boilerplate code. We’re going to ignore most of that, except we’re going to find this innermost set of Curly Braces. One of the first things you’re going to need to do when you’re learning how to develop Software is tell the difference between a parentheses, curly braces, square brackets, angle brackets and I don’t know that I left out any. But here we want the curly braces look like little mustaches turned on their side. These are important, and I want to go inside of those two that opening and closing curly brace and make some room for ourselves. This is where we’re going to type our code. It’s approximately Line 13 and 14, at least in my copy of Visual Studio. Then I’m going to type in the following, type in Console and you may notice now this little window pops up below what I’m typing. You can safely ignore that for now. Eventually, this becomes our best friend. But for now, it might be distracting and give it away to try to ignore it and type in everything by hand to the best of your ability. Console, and then I want to use the period on the keyboard, I’m going to call it the dot. Console dot, and then capital W Write capital L Line. 
Next, I’m going to use an opening and closing parentheses, so that’s not a curly brace. These are just the characters you’d use for a smiley face, and an emoticon. Then inside of there and use the arrow keys on my keyboard navigate around here. I’m going to go inside of the opening and closing parentheses, and I’m going to use two double quotes, so it should look like that. Make sure you don’t use single quotation marks like that. That’s not what we want. We want double quotation marks like that, and inside of there, we’re going to type in the words Hello and World. Make sure that you have an open parenthesis, a double quote, the words Hello World, then another double quote then another parenthesis, a closing parenthesis, and then at the very end of this line, I’m going to use a semicolon and it looks like that. It’s not a colon and it’s not a comma. It looks like that. Then I’m going to use the Enter key on the keyboard to go to the next line, I’m going to type in Console.ReadLine opening and closing parentheses. Now you may have noticed that as you type in the opening parentheses that Visual Studio will automatically type in a closing one for you. You don’t let that throw, you can continue just to type through that, but make sure that you have exactly what I’ve typed into my code window here for these two lines of code. Make sure that the capitalization is correct. Make sure that you’re using a period not a comma for the little mark that comes after the word Console. Make sure you’re using parentheses and not some other type of bracket, or brace, and then make sure that both lines of code end with a semicolon. The next thing that I want to do is save my project, and there are a number of different ways to do this in Visual Studio. Again, I keep it simple and go to File, Save all. Then the next thing I want to do is now see my application actually running. To do that, I can either find this little green triangle that has the word Start next to it or if I don’t see that by default in my little toolbar here at the top, I can go to Debug and select “Start Debugging.” Either way should work. I’m going to go ahead and click that, and you’ll notice that some windows pop up and Visual Studio changes its appearance a little bit. Now, off to the side of my screen, the console window popped up and we see the words, Hello World with a blinking cursor below it. I’m just going to hit the Enter key on my keyboard and then the console window disappears and I’m back and the Visual Studio resets itself and we’re successful. However, maybe your experience wasn’t successful. Maybe you saw an error message, so what I want to do is take a moment and look at some common errors that people that are new to C-Sharp might run into and how to remedy them and this is a good opportunity to learn some of the syntax rules of C-Sharp as we make mistakes. I’m going to pause the video, make a mistake, and then we’ll talk about it, pause it, and so on. When you attempted to start the application, you may have seen a little dialogue pop up from Visual Studio that says, “There were build errors. Would you like to continue and run the last successful build?” Always select no for that. What you’ll see next is a list of errors. Now, in some cases, the error messages will be obvious to you, and they’ll make a lot of sense. Sometimes they won’t. Like the verbiage might be something we’re just not familiar with yet. Invalid token and class structure interface, what does that mean? 
Typically what you can just do is double click on these and that’ll put your mouse cursor on the line where the problem is. Notice that Visual Studio also gives you another visual way to tell that there’s a problem with your code. Gives you these little red squiggly lines. Sometimes you see a blue squiggly line. They’re a little bit different but essentially this is an area of the code that probably deserves your attention, something you need to fix. Now, in this particular case, the problem is that we didn’t type our code in between the innermost opening and closing curly braces and so this is an issue with regards to defining a code block in C-Sharp or a block of code. Different C-Sharp commands belong in different kinds of code blocks, and I’m going to spend a lot of time in this course talking about the different types of code blocks and what belongs in each type of code block. But to remedy this issue, what you need to do is use your mouse and just drag and highlight these two lines, or you can use the Shift key on your keyboard in the arrow keys to highlight that area hit Control X, then move up in between the opening and closing curly brace and paste Control V that code in there, and then it should run correctly at that point. That teaches us the first thing about C-Sharp. It matters where we type our code. Or when you try to run the application you may have seen the same build error dialogue, except you see the message, “Semicolon expected.” Hopefully, this is an obvious remedy for you. If you double click on that error in the error list, it should take you to the end of the line of code where you forgot to add a semicolon. That’s the second thing about C-Sharp that we’re going to learn. Is that just like a properly formed English sentence has to end with a period or a question mark or an exclamation mark, a properly formed instruction in C-Sharp has to end with a semicolon, or maybe the error that you saw was something like a syntax error, something expected, the name Hello doesn’t exist in the current context. The name World doesn’t exist in the current context. If you were to double click these, you’ll get to the vicinity of the problem and you’ll also see that there’s red squiggly lines beneath the words Hello and World in between our parentheses. Now, remember, we needed to use double quotation marks around that string of characters Hello World and so alphanumeric characters that we want to literally write to screen, or present in some way, we need to surround them with characters that indicate that we want to use this string of literal characters. To do that, we use double-quotes. Or perhaps you see the error, something like, “The name console does not exist in the current context.” You look at the word and you say, well, looks spelled correctly. Remember that I told you you had to type exactly what I was typing and so C-Sharp is case sensitive, meaning that a lower case C and an upper case C mean that you’re typing two completely different things into C-Sharp and that is tricky because many of us are not used to that degree of precision whenever we’re communicating. But when communicating with a computer, you have to be precise. In this case, all we needed to do was change the capitalization of the word console and we’re back in business. Perhaps you see something like console does not contain a definition for either write line or read line, and again, you’re looking at it and you’re thinking it’s spelled correctly. Well, what could the problem be? 
Here again, capitalization is important. A lowercase r in readLine is different than a capital R in ReadLine, and a lowercase l in writeLine is different than a capital L in WriteLine. Again, things have to be spelled correctly and have the correct capitalization in order to be processed correctly by the C-sharp compiler. We’ll talk about compilation in the next lesson. Now, if you’re not good at spelling and you’re not good at typing in capitalization and you’re just not as precise in the way that you would type a letter or an e-mail message or even a text message, fortunately Visual Studio can help you out. There are tools that will help you not only write your code more quickly, but also more accurately. If you utilize those tools, the chances that you will miss some of these really simple syntax things like capitalization will almost be completely eliminated. We’ll talk about some of those tools in an upcoming lesson. All right, but assuming that you got all of this to work correctly, you’re really well on your way to building applications; you’ve already crossed over one of the big first steps. As you undoubtedly learned in this lesson, writing C-sharp code is an exercise in precision. Again, fortunately, the Visual Studio IDE will help you out a lot when it comes to that. It will give you clues, and maybe some of the phrases and the words that it uses to explain the issue might not be familiar to you yet; with experience, they will be. But generally, it’ll point you in the right direction, and with the red squiggly lines and the message, you can typically figure out what the issue is. Now, throughout this course, if you run into a wall and you simply can’t figure out what the problem is, do this. Compare character by character, and take your time until you develop a vision for the problems, where your eye will jump to the problem in the code. Compare what you wrote versus what I wrote. I’ll supply the source code to you. Open it up in a second copy of Visual Studio and then just look character by character: what did I do different than what Bob did? That will usually help you figure things out if you can’t do it on your own. In the following lessons, we’re going to focus on two things. First of all, we’re going to talk about why we did what we did and what was going on behind the scenes that turned our code into a working application, albeit a small application. What happens whenever we create a new project? What happens whenever we choose to save our project? What happens whenever we choose to start or run our applications? Then secondly, we’re going to talk about the syntax of the C-sharp code that we wrote, and we’ll learn more syntax rules and more keywords as we go along. If precision is so very important in C-sharp, then you’re going to need to have some explanation as to what all those little words and symbols actually mean and some rules to guide you as you’re writing your own code. It’s really easy once you get a few of the basics under your belt. Being completely honest, many people learn how to write code in C-sharp. It’s a fairly easy language to learn, and you can do this. You’ve just got to put in a little bit of time and a little bit of effort to figure it out. We’ll begin that process in the very next lesson. We’ll see you there, thanks. Hi, I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. Now in this lesson, we want to start the process of dissecting that little application that we wrote in the previous lesson. Now previously, I wanted you to focus on the workflow.
What we did and how we did it. But now what I want to do is focus on why we did what we did. It’s really crucial at this point that we cement some really important ideas in your mind because they’re going to provide the basis, the foundation for everything that comes next. What I want to do is start on the inside and work our way out. I’ll start by talking about the nature of writing code. When you learn how to write applications with C-sharp, really any programming language, learning the syntax of C-sharp, or in other words learning the nouns and the verbs and the punctuation of the programming language, is really just half the battle. The other half of the battle is learning about related, pre-built functionality that’s available to the code that you write. Now in our case, Microsoft has created something called the .NET Framework, which sounds spooky and mysterious, but it’s really not that bad. It’s actually pretty large, but we’re only going to focus on two specific portions of it for our purposes. The first part that I want to focus on is something called the class library, which is simply just the library of code that Microsoft wrote to take care of difficult tasks so that, as software developers, we don’t have to worry about them. There’s library code to help with many common things that many applications will need, things like working with math, or working with strings and text, and working with dates, manipulating dates and times, maybe displaying things to the computer screen or transmitting information across a network. A lot of that foundational stuff that would be difficult for us to write and is utilized by many different applications. That’s really the first part: taking advantage of and understanding the class library of the .NET Framework. The second part of the .NET Framework is called the runtime. It’s also known as the common language runtime; you’ll see it called the CLR as well. Really, it’s just this protective bubble that wraps around your application. Your application lives inside of it. It runs inside of that protective bubble and it essentially takes care of a lot of the low level details so that you, the software developer, can focus on what your application is supposed to do, not worry so much about how it’s actually accomplishing it under the hood. You don’t have to worry about the computer’s operating system, interacting with it, and interacting with memory, and interacting with the hardware, the computer itself. Many of those things are abstracted away from you. You don’t have to worry about them. Furthermore, the CLR, that runtime, also provides a layer of protection for the end user so that you, the malicious evil software developer, can’t do something really bad to somebody’s computer without them at least giving you permission to do it in the first place. So without their knowledge and their approval, you’re not going to be able to wipe out their entire hard drive, for example. For right now, it’s the .NET Framework class library that I really want to focus on because it’s what we used, whether you realize it or not, whenever we were writing our first application. For example, in line 13 or 14 where we did our work, you see Console.WriteLine and then we used open parentheses, close parentheses, and so on. We were using code in the framework class library that knows how to display text into a console window. All we’ve got to do is say hey, use this text, stick it in a window. We don’t really care how it does its job, we just care that it did it.
The next line of code, this console.ReadLine. It was also really important. We’re telling the application to wait for input from the end user before continuing its execution. Again, here we’re calling code in the.NET Framework class library that knows how to accept user input. You recall that I use the “Enter” key in the keyboard, and then the application continued on. It exited and we were back in the Visual Studio. So in both of those lines of code, we were utilizing methods that were created by somebody in Microsoft to handle that interaction with displaying and retrieving data from the end user. What were to happen if we were to comment out that line of code? Here to comment out of line of code, I use two forward slashes on my computer, it’s over the question mark. Commenting our code simply means that I want those instructions to be ignored. Now, I could have just deleted that line of code completely, but I might want it later. Maybe I don’t want to remove it completely, I just don’t want to ignore it for now. I also might use code comments to write myself some notes to remind myself of something about the application in the future. We’ll talk about code comments a little bit later. But if we were to run the application now, watch what happens. It ran and it’s already done. What happened? Well, you may have seen a flicker on screen for a fraction of a second. The reason was because, hey, it executed this one line of code and it said looks like I’m done here, and it exits out of the application. By adding the read line, we’re now stopping execution, waiting for the end user to do something before exiting out. Hopefully that makes sense. All right, so next let’s talk about the position of the code that we wrote. I made sure to emphasize that you have to write the code in the correct place, and the correct place was in between the opening and closing curly braces of that innermost set of curly braces as defined by the level of indentation that we saw in the boilerplate code. If you don’t add the code there, we saw what the ramification of that was. The application, you’ll try to run it. It’ll give you a runtime error. The correct place for that code was where we have it right now, in between that opening and closing curly brace that you see on screen. Now as you can see, there are several sets of curly braces, and so it’s important that we talk about what these do. I need to oversimplify things here. We will come back and fill in some of the details later. But essentially, you have an opening and closing set of curly braces and those define the code block. Code blocks typically have names and they have purposes. In this particular case, we have a first code block and this code block has the name Main. This particular code block is known as a method, and this particular method by convention is the very first method that’s called whenever your application is executed. I don’t want you to worry about these other words static and void and even the string and the args for right now, we’ll talk about those later on. But this entire code block here, as well as the line above it, they define something called a method. A method is simply a block of code that has a name. Now, later on, you’re going to come to realize that a method is so much more than that. But I want to use that as a working definition as we’re getting started here. The method has a name, and when you have a name, you can call a name and say, I want you to execute. We’ll talk about methods again a little bit more a little while. 
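A small sketch of the commenting idea from this part of the transcript; with the ReadLine call commented out, the window closes as soon as WriteLine finishes:

    static void Main(string[] args)
    {
        // Two forward slashes make the compiler ignore the rest of the line.
        Console.WriteLine("Hello World");
        //Console.ReadLine();   // commented out, so the console window flashes and closes immediately
    }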
This main method lives inside of another set of curly braces, and that set of curly braces also have a name. The name is program, it’s a class called program. You can think of a class as simply a container for all of the methods of your application. You can keep the methods that are related to each other in separate classes. Now what do I mean by related to each other? Well, that’s really for you the developer to decide as you get deeper into programming, you’re going to come to understand the thought process behind organizing your code. But that’s a little ways off for now. Just trust me on that. Now I said that a class was merely a way to organize your methods. It is so much more than that. Again, I’m way over simplifying this as we’re getting started here. But the main takeaway for right now is that code is organized in curly brace containers and you have some blocks of code that reside inside of other blocks of code. To emphasize that again, here we have another set of curly braces, and this set of curly braces has a name as well. In fact, it’s a namespace called HelloWorld, which happens to be the name of the application that we gave it. Again, key things extremely simple here a namespace is just another way of organizing code again. At some point it becomes so much more than that, but let’s keep it simple for now. Let’s take a look at these lines of code and illustrate these ideas about classes that contain methods. Here what we’re doing whenever we’re calling Console.WriteLine is we’re actually making a call into the dotNet Framework class library. Remember, it’s that library of code supplied by Microsoft. We’re saying in that entire library there’s a book and there’s a chapter inside of that book that I want to reference. In this case, we’re saying, that book is the console book, the class. I want you to look at the chapter named WriteLine that has the definition for this method. Hopefully, that analogy works for you, but we’re looking inside of a library to find a class, and we’re going to call a particular method inside of that class. By using its name, we can execute all the code that was written inside of that method. Same with the method that we’re calling below it as well. Notice that there is a period that we use between the name of the class and the method name, and we use that. It’s called a member accessor it allows us to access a member of the class or in other words, now that we know what the book is, we can find out what chapter we want to reference. Hopefully, that analogy works for you. Now, notice also that both whenever we call the WriteLine method and the ReadLine method, that they both have parentheses following them. Now, in the case of the WriteLine method, we’re actually sticking something in between the opening and closing parentheses, whereas in the ReadLine method, we’re not. But essentially, those parentheses are saying, not only do we want to reference that particular class method name, but the parentheses mean I want you to actually invoke it, execute it, do it now, so that’s the purpose for those parentheses. Now we can say, do it now and pass in information. Do it now with this stuff, with this argument. We’re passing in an argument to the WriteLine method and saying, we want you to do it right to screen and here’s what we want you to write. It’s an input parameter to the method named WriteLine. 
Now don’t worry, we’re going to come back to the notion of methods in the future, as well as passing values into a method like we did when we passed the literal string Hello World into our method here. Just know that whenever you see parentheses after a given word in your code, you should be thinking that code is being called right now as we step through the execution of the code. Next up, let’s talk about the semicolon. We’ve already explained it in the previous video, but just to emphasize it: notice that almost everything, even these statements at the very top, have semicolons, with the exception of when we’re defining a namespace, a class, or a method. We said at the time that the semicolons are actually similar to the period or exclamation mark or question mark at the end of an English sentence. It completes a thought in C-Sharp. Now, some programming languages like Visual Basic, for example, don’t really have this idea. They only allow one complete thought per line of code. However, with C-Sharp, you could do what I’m about to do; watch. Now I have both of those lines of code on a single line. If we run the application, it will work just as it did before. The way that you separate or indicate that you have two different complete thoughts is through the use of a semicolon. Furthermore, we could put lines of code on separate lines like this. Now, it wouldn’t make sense in this case because the line of code is so short; it actually makes it difficult to read. But sometimes when you have a very long line of code, you’ll see me split that line of code into multiple lines, and still the application will execute. Now, in other programming languages, you wouldn’t have that behavior. Because really, whitespace and line feeds and things of that nature don’t matter to C-Sharp. The only thing that really matters for indicating a complete thought is the semicolon at the end of the line. Let me go ahead and get rid of all that. The other thing that I want to mention here that you may have noticed is the level of indentation that you get automatically from Visual Studio. Now that’s completely optional, and Visual Studio nudges you in the right direction. But essentially, even if you were to come out here and use the tab key several times and write the word Console.WriteLine, something like that, notice that Visual Studio re-indented it for us. Why do you suppose it did that? Well, many people believe that indentation helps the readability of the code so that you can see which code container the code resides in, inside of which curly braces in your application. Along those same lines, for readability’s sake, notice that there are many different colors used for the text inside of this text editor window. You have these royal blue colors, and these are my default colors. Yours might look a little bit different, but by default, I think you have some royal blue, some black. You have an aqua color here. This is a dark red. You have some light gray and light blue. All of those are used to help you identify the parts of speech, I guess you could say, inside of the code that you write. We’ll talk more about that as we talk more about the syntax of C-Sharp in an upcoming lesson. Now that we’ve talked about the code that we wrote and its position and formatting and whitespace and tabs and all that stuff, what I want to do is stop right now for this video, and in the next one, I want to talk about the files themselves.
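Before moving on to files and solutions, here is a quick sketch of the whitespace point above, using the same two Hello World statements; both versions compile and behave the same because only the semicolons mark where each statement ends:

    // Two complete statements on a single line - the semicolons separate them:
    Console.WriteLine("Hello World"); Console.ReadLine();

    // One statement split across several lines - the line breaks do not matter:
    Console.WriteLine(
        "Hello World"
    );
    Console.ReadLine();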
The file that we typed our code into, how that relates to projects and even solutions. What happened when we saved our project? What happened when we actually ran our project? We’ll do that in the very next lesson. We’ll see you there. Thanks. Hi, I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. Next, we’re going to talk about how code files are organized into projects and solutions, and then where you can actually find these projects and solutions on your hard drive. Whenever we created a new console project, the Program.cs file was opened for us automatically in the main area in Visual Studio. That’s one of the things that project templates do for us whenever we choose File, New Project and we see the new project dialog and we choose a project template. They provide a great starting point for the type of application that we want to build. It includes files with boilerplate code, important settings and other resources that we might need whenever we’re building that type of application. As you can see, we’re working inside of this file in the main area and there’s a series of tabs. Again, this isn’t intended to be an overview of Visual Studio, but it’s important to note that the names of the code files that you’re working on are contained inside of those tabs. If you take a look over to the right hand side, here is the Solution Explorer window. It has a tree-like structure of all the items that are contained inside of our projects. Now again, as I said at the outset of this course, this isn’t intended to provide a tour of Visual Studio per se. There are other resources on Microsoft Virtual Academy that can really help orient you to using Visual Studio and the various windows and functionality that it contains, but the Solution Explorer is probably the most important part of Visual Studio next to this main area, where you’ll usually see the text editor and other designer windows. Simply put, the Solution Explorer is our main navigational device to the other files and settings that comprise our program. You can see here that there is a Program.cs file. Now, if I were to close the Program.cs tab in the main area, I can always get back to it and open it up again by double clicking it inside the Solution Explorer. You can see it’s open once again. The files and important settings are organized into a concept called projects, so you can see here this word HelloWorld is actually a project. You can see there’s a little C-sharp icon next to it, letting us know that this is a C-sharp project specifically. Projects get compiled into a single .NET assembly, which we’ll talk about later. Furthermore, one or more projects are organized into solutions, and you can see in the Solution Explorer that we have one solution here at the very top, also named HelloWorld, that contains one project. Now, in many cases, as you’re getting started, you’re only going to have one project inside of one solution. But as you come to build more complex applications over time, it’s highly likely that you’re going to need to manage multiple projects that are somehow related. Now again, the reason might not be obvious at this point, but as you continue to learn C-sharp and how to build more complex applications for large companies or for yourself, this becomes a crucial code management strategy.
But just for now, accept the fact that there’s this extra layer of a solution and one solution can contain one or more projects and the projects will contain then all of the code files in the settings and the like that will be used to create an actual executable program. Trust me, these concepts will become more important after we get past the basics. Now, the big question at this point should be, where are all these projects and solutions and files actually stored on your hard drive? I mean, can we see them? We can see them in the Solution Explorer. Where are they actually on your hard drive? Well, when we created this “Hello World” project, we merely provided the name of the project, you’ll recall. Then I said, go ahead and accept the other defaults. By default, Visual Studio will put your projects into your documents folder. If you take a look here and we navigate into the documents folder, it will put your projects into whatever version of Visual Studio you’re currently using, so you can say I have side by side Visual Studio 2013 and 2015. We’re using 2015 for this series, but it could be a future version of Visual Studio. You’ll look in that particular folder for your version of Visual Studio. As we drill in, there will be a projects folder and as you drill and you can see that by default, when we created a new project, it put it here in our document slash visual studio, whatever version slash projects folder. As I add more projects, this obviously will be filled up with folder names for those projects. It’s important to note that whenever you create a new project, you don’t have to create it and put it right here. You can put it anywhere. To keep things organized, you’re typically going to keep them in the same place. Now, furthermore, you can actually open up a project that saved anywhere on your computer as well. For example, in this course, I’ll supply the projects. After I record the video, I’ll zip them up and you’ll be able to download them and then open them up on your own hard drive and then walk through them and to better understand them. Just to demonstrate how you do this, I have this project zipped up into a file called example.zip. What I’m going to do is actually right click this and select Extract all, and then in the extract compressed zip folders. I’m just going to put this on my route. C;/Example, and then click “Extract”. Now you can see that on my local hard drive I have an example folder. Inside of that folder there is another folder with a file called Example.sln, I’ll talk about that in just a moment. But I can either double click this.sln file to open up the the solution inside a visual studio, like so. Or I could go to “File”, “Open”, “Project/Solution”, and then navigate to that directory using the open project dialog if it’ll let me. Unfortunately, it’s a little bit too large for the recording area, but then I would just simply select the solution that I wanted to open from this dialogue and then click the “Open” button. Let’s go ahead and close that, and let’s shut down this copy of Visual Studio. I want to get back to where we were just a moment ago in our Documents, Visual Studio, in my case Visual Studio 2015 Project folder. Here are a list of all the solutions in our Project folder. Just want to walk our way through this. This first folder here will contain our solution files. There’s this.sln file, which is a solution file that contains information about all the projects, that are under this umbrella solution. 
We could actually open this up and look at it inside of notepad, and it’s simply just a configuration file. There’s nothing all that special about it. You certainly don’t want to make any changes to it, but it’s going to have information about all of the locations for the various projects that are associated with this solution, any global settings and some of these things won’t really be useful to us until we get deeper into our understanding of compilation and things of that nature. But inside of the solution folder, is a second folder which is actually going to contain the project files. Here we have a HelloWorld.csproj, which is the C-sharp project file. Let’s open that up as well with notepad. It’ll contain references to things like all the files that are associated with this project, any of the settings and any other metadata. Again, information in here that you certainly don’t want to edit, you don’t want to accidentally make any changes to it whatsoever. But I just wanted to make you aware of it, that there’s really nothing magical going on. There’s just these configuration files that contain information about your project. As you make settings on the project level, those will be saved inside of that cs project. Then finally, there’s this bin folder here. The “Word” bin typically is short for binary, which denotes that this is where a binary version of your application will be stored. The process of compilation it takes your source code, which is human readable, and it’s going to convert it into a format that is machine readable, or understood by a machine, your computer. If we were to take a look inside of this folder, we would see that there is a debug folder. This folders created for us whenever we started debugging our application, it creates a temporary version of our application for debugging purposes, which we’ll talk about later. If we drill into that you’ll see that there is actually an executable file and several other helper files for the purpose of debugging. We’ll talk about these later. If I were to double click the HelloWorld.exe, it actually executes our application. Compiling your code into a working application is the end goal. But I don’t want to talk about compilation just too much yet or about creating a debug version versus a release version of your dotnet assembly of your compile code. I think you’re going to get a better appreciation for those ideas, after we get past some more of the basics. What I want to do is stop our conversation about the directory right now. We’ll come back to that a little bit later. But you’re doing great. Let’s continue on. We’ll start learning more C-sharp now that we have some of these tangential topics all the way. I’ll see in the next lesson. Thanks. Hey. I’m Bob Tabor with Developer University. For more my videos for beginners, please visit me at Devu.com. Now in this lesson, I want to get back in to talking about C-sharp the syntax. We’re going to talk about declaring variables, how to choose the right data type, for your new variable and then also how to initialize variables with values. To begin, let’s take a look on screen. If you’ve ever taken an algebra course, hopefully you’ve seen something like this, where you’re asked to solve for the value of x, and hopefully without a lot of thought, you’re able to see that x equals 7. Using that same thought process, take a look at this little snippet of code on screen. x equals 7, y equals x plus 3. Then we’re going to do a console.WriteLine with the value of y. 
Hopefully you look at it for a moment, using your existing knowledge of algebra, and you think to yourself: it's going to output the value of 10 to a console window, right? Exactly. So my point is that C-sharp, first of all, is human readable. It's got a few things that might look a little foreign, like the semicolon at the end of the line. However, I'm willing to bet that as we go through this series of lessons, you'll be able to understand what the code is doing, for the most part, even before I explain it to you. It's really not that hard. Secondly, it's probably very similar to things that you've done in the past, like working with math and algebra and things of that nature. Looking at the C-sharp code, the x and the y in this context are referred to as variables. A variable is simply, under the hood, a bucket, I guess you could call it, in the computer's memory, and you put things in buckets and you dump things out of buckets. We can put values into a given bucket in the computer's memory, we can retrieve the value out of that bucket, and we can even replace what's in the bucket with something different. That is what you use a variable for. In this particular situation on screen, these buckets are just holding numeric values. However, we could create buckets that are just the right size for almost any type of information, whether it be individual alphanumeric characters, or strings of alphanumeric characters like entire sentences and paragraphs, even books. We can also create buckets that are just the right size for dates and times, buckets that are just the right size for really, really massive numbers, or buckets that are just the right size for numbers that have a lot of values after a decimal point. Now, in this case, what we would expect is that these two buckets, the bucket labeled x and the bucket labeled y, would hold numeric values, because we want to add numeric values together. We know that. But how do we express that intent in C-sharp? The instructions that we write in C-sharp will ultimately, after a compilation step, be executed by the .NET runtime that we learned about in a previous lesson. Part of its responsibility is to allocate memory for our variables to hold the right kind of data. Here we have two data items, our x and y, and we have to tell the runtime that we want to allocate some space in memory that's sufficiently large to hold numeric data, the type of data we want to work with here in our application. But how do we do that? Well, that's the topic of this lesson. To get started, we want to create a new project. Here again, I'm going to go to File, New Project. We'll go to the New Project dialog and make sure we select the Console Application project template. We're going to name this project "variables" and then click the "Okay" button. Visual Studio goes to work, uses that template, and creates a new solution with a project, and as you can see on screen, we are back in our familiar Program.cs. Obviously, we want to work inside of our static void Main, in between the opening and the closing curly brace, just like we learned about in our previous lesson.
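If you're following along, the generated Program.cs should look roughly like this; the exact using directives and namespace name depend on your template and project name, so treat it as a sketch:

    using System;

    namespace Variables
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Our code for this lesson goes here, between Main's curly braces.
            }
        }
    }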
Before we get started, there's one big takeaway from this lesson, and that is that a variable is simply a bucket in memory that you can put data into and retrieve data out of, but we have to tell the compiler, we have to tell the .NET runtime, what size of buckets we want to create. We have to declare our variables, we have to create those buckets, and then give them some label that we can refer to them with from that point on. Now, before we start typing some code, all the same rules apply in this video that applied previously. You have to type the code exactly the way that I type it. Take the time to develop the skill of identifying even small differences: differences in capitalization, or spacing, or the various special punctuation marks that we use while we're writing code. Develop that skill of identifying the differences between what I write on screen and what you're writing in your copy of Visual Studio. If you see a little red squiggly line, you already know that there's going to be a problem there. That gives you the clue necessary to focus either on that exact character or in that vicinity and use your detective skills to figure out what went wrong. Now let's go ahead and create two buckets, two variables, and define them in such a way that they're going to hold numeric values. We'll start with int x and int y, as simple as that. Here, to borrow the explanation that we used earlier, we are asking the .NET runtime to allocate space in our computer's memory sufficiently large to hold numeric values. We're asking it to create these two buckets, and eventually we're going to put values into them and read values out of them, but at this point we're just declaring their existence and saying, here's what we need to work with. Then after we've declared them, after we create them in this manner, we can begin to work with them, assign values to them, and retrieve values from them. Most importantly, here I'm telling the computer that I want to assign integer values into those variables. An integer is really just a mathematical term that refers to a whole number that's within a certain range: no values after the decimal point, and as far as C-sharp is concerned, the values have to be between roughly negative 2 billion 147 million and positive 2 billion 147 million. That's the size of the bucket that we have to work with. If you need to work with much larger numbers, then the int data type is not the correct data type for you; there are other data types to choose from, and we'll learn about some of those a little bit later. If we need to work with money values, where you have dollars and cents or pounds and pence, then the integer is not the right data type to work with either. Let's continue in our application; this is basically just a continuation of what we did in Notepad a few moments ago. x equals 7, y equals x plus 3, and then we want to do a Console.WriteLine with the value of y. Then remember, we want to do a Console.ReadLine so we can actually see it on screen without it just flashing and going away immediately. Let's run the application and make sure we get the value we're expecting. Hopefully you got the value 10 in your copy of Visual Studio just like I got in mine. If not, again, make sure you double-check your work against mine. After we declared the variables in lines 13 and 14, then in lines 16 and 17 I'm doing assignment using the equals sign.
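At this point, the body of Main looks roughly like this (your line numbers may differ slightly):

    int x;
    int y;

    x = 7;
    y = x + 3;

    Console.WriteLine(y);   // prints 10
    Console.ReadLine();     // wait for Enter so the window doesn't close immediately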
Now, in this case we don't really call it the equal sign, we call it the assignment operator. We'll learn about operators in the next couple of lessons. This particular operator, the equal sign, means: take whatever is on the right-hand side and assign it to whatever is on the left-hand side. We're saying, give me the value of seven and assign that to a variable, a bucket, named x. The same thing is true here with y; we're assigning a value into the bucket named y, but we have to do something interesting here. We have to actually retrieve the value of x from memory. Where's that bucket again? Oh, there's the bucket; dump the value in the bucket out, and now we're holding the value seven. Add that to three, and then we assign the result to y. Then here we're retrieving the value of y, saying, give me the bucket with y in it, and we dump it out into the Console.WriteLine, which, as we know, will print it to the screen. That's essentially assignment and retrieval of variables. This is a very simple case. What I want to do now is comment out this code. If I were to begin commenting out the code like we learned about in a previous lesson, I could use the two forward slashes. I'm going to show you a different method in just a moment, but notice that when I do this, something interesting happens. I commented out the declaration for the variable named x. When I do that, notice these little red squiggly lines underneath the x's, and if I hold my mouse cursor there, it says the name 'x' does not exist in the current context. We might say, well, there it is right there, it's in our code comment. But remember, we're telling the C-sharp compiler, and ultimately the .NET runtime, to ignore that instruction. The compiler is looking at our code and saying, I have no idea what you're talking about with this x. I've never heard of x before, I don't know what you want me to do with x, and so it raises the red flag and says, I can't continue on under these conditions; you have to give me more information. Obviously, we can fix it by removing the code comments. Now, what I want to do is comment out several lines of code, and instead of putting two forward slashes in front of every line, which can be laborious, I'm going to comment out multiple lines at the same time using a forward slash and a star character (the character over the number 8 on your keyboard) to begin a lengthy comment. Then right here before that ReadLine, actually, let's go ahead and keep it all together; after all of it I'm going to do a star and a forward slash. Now we're going to type another code example beneath that. This will be a little bit more interesting. Follow along; pause the video if you need to catch up with me. I'm going to try to type fast just to save time. Before I forget, let's go "File", "Save All". Now let's begin here at the top. You can see that this is a different style of application, with some different commands, or different uses of commands that we're familiar with. We're just going to play a little name game: we're going to ask, what is your name, and we output, type your first name. Now notice, in the first case, I'm using a WriteLine, which will print "What is your name?" to the screen and then use a line feed character to go to the next line. However, I'm also using a third method from the Console object, the Console class (we'll talk about classes and methods later), and this method is different from WriteLine.
This will just write out the statement "Type your first name", whatever's in between our double quotes there, and it won't go to the next line; it'll just wait there on that line. Then what we're going to do is create a new variable using a different data type, a string data type. We're not interested in individual alphanumeric characters, so a-z, 0-9, and the special characters; we're interested in a string of them, a collection of those characters. So not just the individual character B, the individual character O, and the individual character B; we're interested in them as a string or a
collection, as Bob, B-O-B. That's what we're declaring: a bucket in the computer's memory sufficiently large to hold a string of characters, however long it is. Then what I'm doing is calling our Console.ReadLine method that we're already familiar with, but there's a twist on this. Up to this point, we said we were using the ReadLine method in order to stop the execution of the application, to wait for the user to hit the "Enter" key on their keyboard, and then to resume. However, now we're using it for its real intent, which is to retrieve data from the end user. In this case, we're asking the question, what's your name? The user types it in and hits "Enter", and then whatever they typed in is assigned, using the assignment operator, to the variable we created called myFirstName. Hopefully, that makes sense. Now we're going to create a second variable of type string called myLastName. We're going to do the same thing here, Console.Write, and then we're going to allow the user to type in their last name, and whatever they type in when they hit the "Enter" key on their keyboard will be saved, or assigned, to the variable called myLastName. Now that we've done assignment to myFirstName and myLastName, I'm merely going to concatenate the values together. Let me point something out; there are several things we need to talk about here. Notice that earlier we were doing actual math, where we were adding values together; that was the arithmetic addition operator. We're sort of adding things together right here, too, but the connotation is different. We're not adding Hello and Bob and Tabor, with some spaces in there, in a mathematical sense; we're concatenating strings of characters together to make one really long string. It's the same operator, but it's used in two slightly different contexts. It kind of does the same thing, but we need to understand that there's a fundamental difference in how operators interact with different data types. You'll see why this is important as we continue on through this course. At the very end here, we're expecting to see "Hello, Bob Tabor". Notice that I have an additional double-quoted string with a space in it, to put some space between the first name and the last name, and then obviously myLastName here. We've got one more line of code to write, because we need to do another ReadLine so we can actually see the value on screen. Let's run the application; then we have some things to talk about. What is your name? Type your first name: Bob, Enter. Type your last name: Tabor, Enter. Hello, Bob Tabor. Awesome. So, a very simple application, but hopefully now we're pushing the envelope a little bit more, learning about additional data types that we can use for our variables, learning that assignment works with all kinds of variables, and also learning that operators work differently with different data types. Now, before we get too far: in the previous example, we used merely x and y, which we might expect to see in some mathematical context, because we're used to seeing those characters used in algebra problems. But whenever we start writing business applications, or even games, we need to give our variables names that are meaningful inside of the program that we're writing. I could have just called this x, and then used x here, and x here, and you'd look at this and say, I have no idea what x is supposed to do.
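For reference, here's roughly what the whole name game looks like at this point, using descriptive names rather than x; the exact prompt strings are approximations, so treat this as a sketch rather than the exact downloadable code:

    Console.WriteLine("What is your name?");

    string myFirstName;
    Console.Write("Type your first name: ");
    myFirstName = Console.ReadLine();

    string myLastName;
    Console.Write("Type your last name: ");
    myLastName = Console.ReadLine();

    Console.WriteLine("Hello, " + myFirstName + " " + myLastName);
    Console.ReadLine();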
The problem with x is that it's a very vague description of the bucket in the computer's memory. Instead, don't worry about saving keystrokes; make it human-readable. Write your code in such a way that somebody can read through it and understand exactly what the variables are doing and what the logic of the application is doing. Also notice that as I change some of these things back to myFirstName, I used a little feature of Visual Studio that allowed me to say, now that I have changed the name of x to myFirstName, let me rename it everywhere that I've used x. Did you notice I did that? I hit "Control" and the period on the keyboard. Let's do that one more time. I'm going to change this back to x. Notice that I get a little light bulb here off to the left-hand side, which is Quick Actions, and then I hit "Control" and period on my keyboard. Now it gives me the option to rename myFirstName to x, and off to the right-hand side it even shows me all of the changes. This is called refactoring: I'm changing the code ever so slightly by renaming things to give them more meaning. In this case, I'm doing the exact opposite, but we'll come back to that. Do I want to rename every place I use the variable name myFirstName to x? Yes, so let's rename everything, bam, just like that. Let's rename it one more time: myFirstName, Control, period, and now I want to rename everything from x back to myFirstName. You might look at that phrase, myFirstName, and again down here, myLastName, and you're thinking to yourself, that's a crazy naming convention. Well, it's a naming convention called camel casing, where you take the list of words that you're mashing together to describe a variable, use a lowercase letter for the first letter of the first word, and then an uppercase letter for the first letter of the second and subsequent words in the variable name. Ideally, it keeps things human readable; I can read it fairly easily that way. At this point, I think it's also important to do something like this: I'm going to rename this to myfirstname, all lowercase. Remember what we said in the previous video, that C# is a case-sensitive language. So if you use the wrong capitalization, you're going to get a red squiggly line, and it says the name 'myFirstName' does not exist in the current context. Do you remember seeing that just a moment ago, when we commented out the declaration for x up here? The same thing is true here. myfirstname, all lowercase, is different from the variable we defined called myFirstName. Capitalization matters; make sure that you remember that. In this case, let's just go ahead and change everything back correctly, and we should be good to go again. Great. Now you might be saying to yourself, well, this degree of precision seems pretty difficult. How am I going to remember exactly what I named things in the past? There are a couple of different tricks for that. Keeping your code and methods small, and we'll talk about that later, is one way to do it. But the other thing is to rely on IntelliSense, which is that little code window that I told you to ignore before. It's actually pretty important. As I start typing "my", notice that it pops up beneath what I'm typing the correct capitalization and correct spelling for any of the variables that I've defined up to this point that start with the letters m, y.
Now, at this point, what I can do is simply hit the equals sign on my keyboard and it will type everything else out for me, so I don't have to worry about spelling and I don't have to worry about capitalization. You may have noticed that while I was typing, I was typing and then using arrow keys on my keyboard; you couldn't really see my fingers moving, but I wasn't typing every single keystroke. This is what allows software developers to write code very quickly. IntelliSense is one of the tools that Visual Studio gives you in the text editor to make your typing more accurate and to let you type much faster than you normally could, once you get used to relying on it. We'll come back to IntelliSense later. The other thing that I wanted to talk about here is that we cannot define the same variable twice. Let's try it. I'm going to go and declare myFirstName again, saying I want another bucket in the computer's memory with the same name, myFirstName, and the compiler says, you can't do that. We've already got a bucket; we're going to confuse buckets in memory if we give two buckets the same name. It says a local variable named 'myFirstName' is already defined in this scope. You can't do that. Now, we could do this, but I highly recommend you don't, because, again, myfirstname, all lowercase, is different than myFirstName with camel casing, and this would cause a high degree of confusion, so never do that. Be descriptive with your variable names, don't repeat variable names, always stick to a naming convention, and never break that rule. If you follow those little rules, I think you'll find that some of these initial issues will just dissipate; you won't have to worry about them. What I want to do now is take a look at this second set of code. Not only are we declaring the variable in line number 29, but then in line number 31, we're actually giving it a value. What if we were to rewrite this little passage of code? I'll go ahead and comment all of this out, and I'm going to make this smaller. In fact, here's what we'll do: string myLastName equals Console.ReadLine, and then above that I'll say Console.WriteLine, "Type your last name." You can see that I took these two lines of code, 29 and 31, and combined them together. What I'm doing here is not only declaring the variable but also initializing its value to whatever we retrieve when we call ReadLine. This is called initialization, and initialization is important because you want to give your variables a value as quickly as possible. This puts your variable into what's called a valid state, which will be an important idea as we learn to write real applications. But also, experienced developers like to write less code; they're always looking for a convenient way to reduce the number of keystrokes they have to type and the amount of code they have to read. Usually you want to declare your variables as you're using them, and not, like some people used to do a long time ago, put them all at the very top of a given method or section of code. You should get into the practice of two things: declaring your variables as you need them in the body of your code, and then, secondly, if you can, giving them an initialized value immediately when you declare them, like we've done here in line number 34. Tell you what, let's stop right there. I think we've covered a lot of ground for one lesson. Let's do a quick recap and just touch on the dozen or more things that we discussed.
We talked about what a variable is. We talked about how to declare a variable and how to choose the correct data type. We talked about the int data type and the string data type. We talked about assigning values into variables and then retrieving values out of variables. We talked about the assignment operator. We looked at the arithmetic addition operator and also the string concatenation operator, which are both just the plus sign. We looked at Console.Write versus Console.WriteLine. We looked at the other use of the Console.ReadLine method: that we can actually retrieve the value that the user types in. We talked about camel casing and naming conventions for our variables. We looked at IntelliSense. We talked about how to rename things, how to refactor our code using the little quick action; remember the little light bulb, where we could make changes by hitting "Control" and "Period" on our keyboard and then using our arrow keys to make selections and rename all uses of our variable name throughout our entire code base. We probably talked about a lot more than that as well, but that's going to wrap it up here, and we'll start again in the next lesson. We'll see you there. Thanks. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, things are going to get a little bit more interesting. Based on user input, we're going to write logic to execute either one block of code or another block of code. When I use the term logic, I mean that we're going to make a decision and execute some code based on some condition. That could be the user's input from the keyboard, maybe the state of the computer system itself, maybe some of the data that we have access to or that's available to us, but somehow we're going to make a decision on whether to branch out and execute this code or execute this other code. Let's begin the way we normally do, by creating a new project. I'm going to go to "File", "New", "Project", make sure to choose a C-sharp Console Application, and we'll call this project decisions and click "Okay." What I want to do is create a little game, and we're going to do it right here in static void Main. I'm going to go ahead and start typing; you can pause and try to catch up with me. Hopefully, most of this will make sense. Let's go ahead and run the application. Here we're going to play Bob's Big Giveaway, and we can choose a door: what's behind door one, two, or three? I'm going to choose what's behind door number one, and it says, hey, you won a new car. Awesome. Let's play again; I'd like to win something else. Now we can type in the number two and, well, nothing really happens at all. I'm going to hit "Enter" again on the keyboard and the application just ends. We can try the same thing for three, but I suspect the same thing will happen as with number two, and I can just type something randomly and, again, nothing really happens. But let's start with the basics and talk about this if statement that we've created here, which is really the purpose of this lesson in the first place. The if statement is called a decision statement, because we decide whether to execute any of the code inside of this inner code block based on the evaluation that we do after the if keyword. In this case, what we're evaluating is whatever the user typed in, and we're gathering that from the Console.ReadLine like we learned in the previous lesson. The user typed something in, then hit "Enter".
We've got that now in the userValue variable. We want to perform an evaluation to see if what the user typed in is equal to the literal string "1". Here's where, and I want to call your attention to this, you can see that I'm using two equal signs next to each other. We already learned that a single equal sign is actually an assignment operator: we're assigning the value of whatever the user typed in, in this case from the Console.ReadLine, to the variable userValue using that assignment operator. But whenever you use two equal signs next to each other, you're doing an evaluation for true or false. Whatever's inside of this opening and closing parentheses, we're going to evaluate: is userValue in fact equal to the number one, or rather the string "1", or is it not? It can only be true or false. If the evaluation of that expression turns out to be true, then and only then will we perform the code defined in the code block immediately after that if statement. If it's not true, if this turns out to be false, then we'll just ignore whatever is inside of this code block and continue the execution of our application at line 23 and beyond. That's how it works. But I tell you what, this is a very interesting example, because obviously here we have no prize for door two or three. And what happens if somebody just types in four, five, six, or just random letters on the keyboard? We need to account for all of those scenarios in our application. Let me continue typing in some code here, and you can again pause the video if you need to in order to follow along. We'll start by using an else if statement right below our code block for the if statement, so here we go. Let's stop right there for the moment. You can see that in order to evaluate additional conditions, I can use else if statements; in fact, I have two of them here. If this first evaluation is not true, then we'll continue on and do a second, and a third, and maybe a fourth and fifth, however many other evaluations you want to do. However, if the first one is true, then we'll no longer run any of these additional checks; we'll just continue on to line number 33. The same is true here: if this one is not true, then we're going to evaluate the next expression, and if that one is true and we execute the code inside of the code block immediately following it, then we'll just skip over this last else if statement and continue on to line number 33. If we were to run the application, let's go ahead and quickly run through scenario number 2: hey, we won a boat. And scenario number 3, which obviously would allow us to win a cat. But we still haven't accounted for the situation where we type in four or anything else, like a word or just random letters on the keyboard; nothing really happens in those situations. What we really need is what I would call a catch-all case. To do that, we'll just create an else statement below that, at the very end of our if/else if construct, and here what we'll do is just string message equals "Sorry, we didn't understand", and then Console.WriteLine(message). Now we're catching every other case possible, no matter what the user types in. Let's go ahead and run the application. Again, I'll just type in some junk from the keyboard, hit "Enter", and it says, sorry, we didn't understand, and we continue on. That's how an if decision statement works.
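Pulled together, the little giveaway game at this point looks something like the following sketch; the exact prompt and prize strings are approximations, and I've used the variable name userValue the way it's described here:

    Console.WriteLine("Bob's Big Giveaway");
    Console.Write("Choose a door: 1, 2, or 3: ");
    string userValue = Console.ReadLine();

    if (userValue == "1")
    {
        string message = "You won a new car!";
        Console.WriteLine(message);
    }
    else if (userValue == "2")
    {
        string message = "You won a boat!";
        Console.WriteLine(message);
    }
    else if (userValue == "3")
    {
        string message = "You won a cat!";
        Console.WriteLine(message);
    }
    else
    {
        string message = "Sorry, we didn't understand.";
        Console.WriteLine(message);
    }

    Console.ReadLine();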
The if statement also has these optional parts, the else if and the else statements, for either additional evaluations or what I call the catch-all, in case none of the conditions are true. Now, there are a couple of things to say about this, and we're going to continue on to talk about one other type of decision operator that we can use, the conditional operator. But before we do that, here's an opportunity to clean up our code. Let's look for areas where we've essentially got the same code repeated over and over again, and I can see a couple of instances where that's true. The first is where we have this Console.WriteLine(message); you can see we've repeated that in lines 20, 25, 30, and 35. Wouldn't it be great to keep our code a little smaller and only use that once, at the very end of our evaluation? Like, put it right there, outside of the if/else if/else decision statements. Let's go ahead and just remove those from there completely. But when we do that, notice that I'm getting a red squiggly line under the word message: the name 'message' does not exist in the current context. We're going to talk about scope and declaring variables inside of certain scopes in a little bit, but just to lead up to that conversation: essentially, when we define a variable inside of an inner scope, it's not available outside of that scope. In other words, if we define a variable inside of a code block, inside of the curly braces, it's only available inside of those curly braces, not outside of them. What we'll have to do is define that message variable outside of our if statements so that it's accessible to all of the inner code blocks, as well as to our Console.WriteLine(message) here in line number 35. It's a very simple fix; we'll just do this: string message equals, and then we'll just set it to an empty string to begin with. Now you'll see a different red squiggly message, but we've seen this one before: you can't define the same variable twice, even if it's in an inner scope. So instead of defining the variable message there, we'll just set our existing string variable called message to the value, like so. Great. Now our application should run, and we've eliminated a lot of code. Admittedly, I created a straw man here, I wrote more code than I needed to, but I wanted to illustrate this point. If we run the application again, it works correctly. But there's one other change we can make. With code blocks as we're using them inside of if, else if, and else statements, if there's not more than one line of code to execute, then we don't even need the curly braces. In other words, since there's just one line of code here underneath my if statement, I can just remove them and make it small like that, real compact. The same is true here, and the same is true here. Now, maybe just to illustrate why you would need the opening and closing curly braces, if I were to do this, message equals message plus "you lose", like so, and I'll even add a little space there to make it look correct. Let's go ahead and test that else case real quick: I'll type anything in there, hit "Enter", and it says, sorry, we didn't understand. You lose. Notice that we were able to concatenate two strings together. I did it on two separate lines; you don't really need to. But if I were to attempt to remove the opening and closing curly braces there, we're going to get a very different result when we run our application.
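Just so you can compare against your own copy, the cleaned-up decision code looks roughly like this, with the braces removed where a branch has a single statement and kept on the else, which now has two (a sketch; your prize strings may differ):

    // userValue comes from the Console.ReadLine earlier in the program.
    string message = "";

    if (userValue == "1")
        message = "You won a new car!";
    else if (userValue == "2")
        message = "You won a boat!";
    else if (userValue == "3")
        message = "You won a cat!";
    else
    {
        message = "Sorry, we didn't understand.";
        message = message + " You lose.";
    }

    Console.WriteLine(message);
    Console.ReadLine();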
If I do remove those curly braces from the else block and then run it and hit "3", notice that it says you won a new cat, and then, you lose. Why did that happen? You won a new cat, but you lose? Well, there's no such thing as a free cat, but anyway, it's because what the compiler is really seeing is this, like that. If we want these two lines to be executed together, we have to include them in the same code block. Otherwise, this line, being outside of that code block, will execute no matter which of the if or else if conditions is true. Let's go ahead and put the braces back in there to illustrate that idea. In fact, I can even make this a little bit smaller. Let's comment that out, and I'm going to show you a new operator, which is just this. Now, instead of writing message equals message plus "you lose", I'm essentially saying: take whatever's in the variable message, concatenate "you lose" onto the end of it, and assign that back into the variable message, all in one step. This is the assignment and concatenation operator combined into one, just a little shortcut. Now let's do this: let's talk about another style of decision statement. It's actually an operator, called the conditional operator. This works well when you have an if-or-else scenario and you don't have multiple conditions to evaluate like we did here. I'm going to copy some code from lines 14, 15, and 16, I'll just copy all of those, and paste them down below our commented-out area. Then what I'm going to do is this; we'll do it a little bit differently this time. There we go. Now, I've written more code than I need to, and I'm going to show you how to shorten it up in just a moment. The key to this is the little evaluation that I'm doing on line number 42. Remember that we're going to evaluate anything in between the parentheses, and whenever we see the double equal sign, that means we're doing an evaluation: is the userValue that they typed in and submitted through the previous line of code, line number 40, equal to the literal value "1"? If this evaluates to true, then we take the value after the question mark and assign it to our new variable called message. So if the user types in one, we'll take the literal string "boat" and assign it to the variable message. However, if this evaluates to false, so they typed in something different, then whatever's after the colon will be assigned to the variable message instead, and we'll use that below. Let's go ahead and run the application, and we'll run it twice. We can choose the door, and if we choose door number 1, we win a boat. However, if we run the application a second time and choose anything else, then we only win a strand of lint. Again, this is only useful in an if-or-else situation, not when you have multiple conditions to evaluate. Now, let's address this last little part here, because I can shorten this up considerably. Notice that in order to get it all to print out on one line, I use Console.Write, then I type out the literal string, and then I have a second line where I'm actually printing out the message from line 42. Then finally, to add a period at the end, I'm having to do yet another Console.Write statement. I can shorten all of that up into one line of code. Watch how I do this. In fact, let's go ahead.
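Before we shorten those output lines, here's a quick sketch of the two shortcuts just described, combined into one small fragment; the prize strings are placeholders:

    // The conditional operator: evaluate the expression before the question mark,
    // then take the first value if it's true, or the second value if it's false.
    string userValue = Console.ReadLine();
    string message = (userValue == "1") ? "boat" : "strand of lint";

    // The combined assignment-and-concatenation operator:
    // shorthand for message = message + " You lose.";
    message += " You lose.";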
OK, back to shortening things up: I'll comment out these lines individually, like so. Then I'm going to use a replacement code inside of the Console.WriteLine in order to shorten this up a little bit. We'll use WriteLine instead, we'll type the literal string "You won a", and then I'm going to use what I call a replacement code of zero: a zero inside of curly braces. Then, after I give it that literal string, I use a comma and give it the actual message variable that I want. Whatever's inside of that variable, I want it replaced in here: take the curly braces with the zero in them and put the message variable's value in their place. Let's run the application again just to see this working. I'm going to type the number 1 and get the same result. Now, what if we wanted to expand on this idea? Let me comment that out. What if I wanted to replace two values inside of that Console.WriteLine string? Let's do something like this: "You entered", and then a 0 in curly braces, "therefore, you won a", and then a second replacement code. In this case, after the comma, I'm going to pass in the userValue and then the message. Make sure you can see that; hopefully you can see it all on screen at one time. I need to do one little thing here, which is change that second replacement code to a 1. In software development, you typically don't start counting with the number 1, you start with the number 0; we're going to see this pop up again and again. The first item in the list will be at element 0, the second item will be at element 1, and so on. So this first replacement code will be matched and replaced by the very first item in the comma-delimited list of input parameters after the literal string, that little template we've created for ourselves. Then here we're going to do a second replacement, replacing the 1 with whatever is inside of the message variable. When we run the application and type in the number 1, it says: you entered 1; therefore, you won a boat. Awesome. That's enough for one lesson. Hopefully, you've learned a couple of important things in this lesson. First of all, we talked about the if decision statement, as well as the else if and the else, and how to do a comparison, an evaluation between two values, to determine true or false. If we're using an if statement and we're doing that evaluation, then the code in the code block below it will get executed if the evaluation is true. If it's not, it'll either drop down to a second, subsequent evaluation or even to a catch-all in the else statement. We talked about using curly braces for a code block versus when you don't need them. We talked about keeping your code nice and tidy and small. We talked about declaring variables inside of scope, inner scope and outer scope, as defined by our curly braces for code blocks. We talked about the conditional operator, being able, all in one line, to do a check for true or false and, if it's true, assign one value versus a different value to a new variable. We talked about format codes inside of literal strings for our Console.WriteLine, and how those replacement codes are replaced with the variable values that we also pass in to our Console.WriteLine statement, like you see here at the very bottom in line number 49. Again, we covered a lot of ground. Hopefully this all makes sense; if not, rewatch the video, or just watch those portions that didn't make sense. Make sure you're typing in the code yourself so that you can come to some of these epiphanies as you're typing. We'll pick it up in the next lesson. See you there. Thanks. Hi, I'm Bob Tabor with Developer University.
For more of my training videos for beginners, please visit me at devu.com. In this lesson, I want to spend a bit more time talking about some smaller syntax elements of the C# language that you need to master to understand how a properly formed line of code is constructed in C#. In one of the previous lessons, almost the first lesson, I said something to the effect that just like you use a period, question mark, or exclamation mark at the end of a sentence in English to complete a thought, you also use a semicolon at the end of a line of code in C# to denote a complete thought. To extend that analogy a little bit, I may have briefly referred to C# syntax as having nouns and verbs, so I want to elaborate on those things and clarify what I mean by that in this lesson. I'm going to talk about the basic building blocks and, I guess you could say, parse the parts of speech of C#. Let's start at the beginning. Statements are what you call complete thoughts in C#, typically one line of code. A statement is made up of one or more expressions, and expressions are made up of one or more operators and operands, so we've seen a number of statements, expressions, operators, and operands already, whether you realized it or not. Taking a look at some of the previous work that we've got here, I've got the variables project from a previous lesson opened up. You can see that essentially each line of code is a statement, and each of them is made up of one or more expressions. Here, for example, is an example expression. This happens to be a variable declaration statement made up of an operator, which in this case is the keyword int for the integer data type, and then an operand, in this case a variable name. We also use another operator, the semicolon, for the end of the line of code. Another example would be here, where we have an assignment in which we're actually calling a method. Here is an operand: it is the name of a class. And we're using the open-close parentheses; remember, these are operators, this is the method invocation operator. Then we're using another operator here, the assignment operator, to assign the value of this expression to another operand, the name of the variable that we created. If we were to look through the code, we could continue to parse out and understand what makes up operands, operators, expressions, and then entire code statements. Now, operands are similar to nouns. They are things like objects and classes and variables and even literal values. These are the subject, I guess you could say, of your statement of code. They're pretty easy to remember because typically you give them names, you define the values yourself, and so on. Operators, on the other hand, are similar to verbs. They are things like the addition operator or the string concatenation operator. These are things that act on the operands in order to perform some action. Typically you're going to use the built-in operators, although you could create your own; that's a bit of an advanced topic. There are actually quite a few built-in operators, and you're going to need to memorize many of them. That's how you come to express yourself in C#. Fortunately, as you start out, you probably only need to know a handful to be productive. What I want to do in this lesson is focus on a few that I think you're going to use probably 90 percent of the time as you begin learning, and you can obviously add to that list as we continue. I'm going to present these in a rapid-fire fashion.
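Before the rapid-fire list, here's that parts-of-speech breakdown written out as comments over a couple of statements like the ones in the Variables project; the annotations are my own, not something you'll see in the downloadable code:

    // A variable declaration statement:
    //   int   - the keyword for the integer data type
    //   x     - operand: the name of the variable
    //   ;     - the end of the statement
    int x;

    // An assignment statement that invokes a method:
    //   myFirstName   - operand: the variable receiving the value
    //   =             - the assignment operator
    //   Console       - operand: the name of a class
    //   ()            - the method invocation operator (calling ReadLine)
    //   ;             - the end of the statement
    string myFirstName = Console.ReadLine();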
For that rapid-fire list, I created a very nonsensical application. You can open it up; download it from wherever you're currently watching this video or wherever you originally downloaded the course materials. I called this project OperatorsExpressionStatements, and the application itself does absolutely nothing meaningful at all. All it really does is show you examples of the various operators and expressions that you'll come across whenever you're working in C#. At the very outset, you can see that I have a variable declaration. We've talked about this already, but I did something a little bit different this time: I've declared several variables all up front as integers, so x, y, a, and b are all defined as integers. I just wanted to show you something a little bit different there. By separating them with commas, it's an easy way to declare several variables of the same type all on one line of code. I typically don't recommend this, but you might see it in use in books and on the Internet. Next up, the assignment operator; we've already seen the equal sign at work in that capacity. Note here, in line number 22 and following, that there are actually many different mathematical operators. We're only looking at the most basic ones, but there are some advanced ones as well. Here we have addition, subtraction, multiplication, and division. As I demonstrate here, you can use parentheses to change the order of operations. In this particular situation, the parentheses are not a method invocation operator; they're being used the way you would typically use them in an algebraic or mathematical sense, to specify the order of operations: perform this expression first, then this expression, then take the results of those two and multiply them together in a third expression, and then assign the result to the variable x. Then there are operators that are used to evaluate. We've already talked about the equality operator, where we use two equal signs next to each other to check whether two items are in fact equal. Here, again, we're using parentheses in yet another capacity, to define the boundaries of an expression that will evaluate to either true or false: x equals y is either true or false. We can use the greater-than operator, we can use the less-than operator, and we can use greater-than-or-equal-to and less-than-or-equal-to. All of these, again, should produce a true or false result. There are also two logical operators that can be used to expand or enhance an evaluation, and they can be combined multiple times, as I say here in the comments. I could ensure that both "x is greater than y" is true and "a is greater than b" is true by using the logical AND operator. There's also an OR operator to say that either x has to be greater than y or a has to be greater than b in order for the outermost expression to be true; the logical OR is two pipe characters next to each other. Then, I guess we've already talked about the inline conditional operator, where we have some expression that's being evaluated; if it's true, we take the first value, and if it's false, we take the second value. In this case, we're assigning either "car" or "boat" into the message variable's value. Then I also wanted to talk about member access and method invocation. We're going to talk about object-oriented programming quite a bit later on in this series of lessons, but we've already said that Console is a class, and classes are containers, for lack of a more robust definition, for methods.
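Here's a condensed sketch of the kinds of expressions that demo project walks through; this is my own reconstruction with made-up numbers, not the exact downloadable file:

    int x, y, a, b;                  // several variables of the same type, declared on one line

    x = (5 - 3) * (7 + 2);           // parentheses set the order of operations: 2 * 9
    y = x / 2;                       // division
    a = 1;
    b = 2;

    bool result;
    result = (x == y);               // equality: evaluates to true or false
    result = (x > y);                // greater than
    result = (x >= y) && (a <= b);   // logical AND: both expressions must be true
    result = (x > y) || (a > b);     // logical OR: either expression can be true

    string message = (x > y) ? "car" : "boat";   // the inline conditional operator

    Console.WriteLine(message);      // member access (.) and method invocation ( )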
The way that you access a member method of a class or an object is by using the dot, or period; that is the member access operator. Furthermore, we talked about the method invocation operator: here we are invoking a method called WriteLine by using the opening and closing parentheses. In this particular case, we're passing in an input parameter; again, we'll hold off and talk about input parameters and methods a little bit later. But as you can see, here are a number of different operators, and these are just what I would call a very baseline set. You need to memorize these so that you can express the most basic of C# commands and understand exactly what it is you're trying to do. It's not an exhaustive list by any stretch of the imagination, but you will probably need these about 90 to 95 percent of the time, and then you can expand your vocabulary of other operators and keywords over time. In each of the cases we just looked at, an expression is made up of a combination of operands, which are things like literal strings and variables and objects like the Console class itself, and operators, things like the addition operator, the string concatenation operator, the equality and assignment operators, and so on. You then use expressions to form complete thoughts, statements, in C#, which are how the actions or instructions of an application are expressed to the compiler and ultimately to the .NET runtime, which executes your application. Why am I telling you all of this? Why go through this little English lesson, parsing out the different parts of speech like you may have had to do in an English class? Well, it will help you to understand why this is not a valid statement in C#. You can't just type x plus y, give it an end-of-line character, and expect it to do anything. The C# compiler will look at that and say, what are you trying to accomplish here? Have you lost your mind? What do you want me to do with all this? Fortunately, in situations like these, as you can see, Visual Studio can catch these syntactical mistakes even before you attempt to run the application. If we hover our mouse cursor over the visual guidance here, the red squiggly line, you can see that the fundamental problem with this line of code is that only assignment, call, increment, decrement, and new object expressions can be used as a statement. What's the problem? Well, this is not a properly formed statement. We're not assigning, calling, incrementing, decrementing, or creating a new object expression. We're not formulating a complete thought, a good sentence. I could create an English phrase like this: the red ball. You would say, the red ball does what? Who has the red ball? Just because you use words doesn't mean you're creating a complete thought or expression in the English language, and the same thing is true with C#. That's all I'm trying to say here. For beginners, understand that there's a proper syntax, just like there's a proper grammar in the English language. Understanding this is a big step towards solving your own problems when you're phrasing C# instructions that the C# compiler will understand and accept, and ultimately compile into code that will be run by the .NET Framework. Here, let's recap what we talked about in this lesson. First of all, statements are complete instructions in C#. They consist of expressions, and a statement is like a sentence in the English language.
Expressions are composed of things like nouns and verbs, in other words, operands and operators. The operands are like the nouns; they're the subject, what we want to do something with. Then there are the operators, which are more like the verbs; these act on the nouns to perform some action. We said that operands are things like variables, classes, and literal strings. These are things that we get to name ourselves; they give meaning to our application. Operators, on the other hand, are for the most part built into the language, and we have to memorize them. To start off, you might use something like what I've given you here, in the form of a project, as a cheat sheet. But I think you might just be able to walk your way through it and reason it out. Now that you understand that there's a proper way to format a line of code, you might say, okay, what do I need to do here? You might be able to reason your way through the operands and the operators: I'm going to need a variable to contain some value. Once I have created that variable in memory, I'm going to need to assign something to it. How am I going to get that something? I'm going to need to take another variable and this literal value and add them together with an operator. Hopefully you get the idea. I hope this was a useful exercise. I think it's useful for beginners to understand that there are syntax rules, and they're not so unlike what you're already familiar with. Maybe they look a little different from your typical English sentence, but they still have to make sense and they have to perform an action, to do something. When you see errors, sometimes it's because you typed something incorrectly, and sometimes it's that you may not be using the right parts of speech, in a sense, to express a complete thought in C#. Let's pick it up in the next lesson; we'll see you there. Thanks. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, we're going to focus on iteration statements, specifically an iteration statement called the for iteration statement. Sometimes you're going to need to loop, or iterate, through a sequence of things to find something that you're looking for, to find a successful match. Actually, you're going to do this kind of data manipulation more than you realize; you'll have to trust me that this is a very important tool in the toolbox that you're building. As you can see, I've already taken the liberty of creating our project. I called it ForIteration. Pause the video, go through the steps that you already know how to perform to create a new console window project, and catch up with me. I'm going to begin adding some code here on line 13 in just a moment. Now, the syntax that we're going to write here is possibly the most cryptic of anything you've seen yet. I'm going to be completely honest: sometimes I get things a little bit mixed up myself, but don't worry. After we struggle through it once or twice, I'm going to share the little secret that I use to get it perfect the first time, every time. Having warned you about the complexity of the syntax, I'm still betting that you could figure it out and read it even before we actually attempt to run the application, even before I take the time to explain what each little bit is doing. Let me write it out here and then we'll try that. It's a very simple, or at least very compact, section of code.
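What I typed looks roughly like this (a sketch, before we add anything else to it):

    for (int i = 0; i < 10; i++)
    {
        Console.WriteLine(i);
    }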
Actually, we need one more line of code, a Console.ReadLine. There we go; now we're ready for action. What do you think this code does? Got a theory in mind? Well, let's go ahead and run the application and see if your theory is true. You can see that we have a list of numbers from 0 to 9, and then we can hit "Enter" to continue on. We're using C-sharp to execute this little block of code right here as long as a certain condition is true; once it's no longer true, we stop executing that line of code and continue on to line number 17. This for statement says that we should begin by declaring a variable; we're going to call it i, though we could call it anything we want, and we're going to initialize its value to zero. Then, as long as i is less than 10, we're going to continue to execute the code below it in our code block, defined by our curly braces. Each time we iterate through, we'll increment the value of i by one. This little bit right here is probably the part that you wouldn't completely understand unless I explained it. You remember how we used the plus-equals sign to automatically take the value of message, add something to the end of it, and assign it back to message, a couple of lessons ago? We're essentially doing that here. This is the increment operator, so we're going to increment the value of i by one. So again: we declare a variable and initialize its value. Then, as long as this middle part is true, we execute the code below. Once we finish executing it, we increment the value of i and do that evaluation one more time. If it's still true, we execute the code again, we increment i, and if it's still true, we do it again, and so on. That's how it works. Yes, this is cryptic syntax, but if you can separate the three parts in your mind by remembering that there are semicolons separating them, that can help. You know you're going to need to start off with a counter of some sort, you're going to need a condition, and then you're going to need an increment at the end. Again, I'll show you a way to remember this so you never forget it. But before we do that, let me comment out this line of code and give you a variation on this idea. This will be fun. Here we go. You can see that I simply added, inside of our code block for the for iteration statement, another if statement with its own code block inside of it. Here, we're checking the current value of i, and once we find the thing we're looking for, where i is equal to 7, then what? We'll perform this code. What does this code do? Well, this part's obvious, but this break statement may not be so obvious. You use break to bust out of, or break out of, the for iteration. We're going to make it to the point where i is seven, then we're going to hit the break statement, and then we'll continue on to line number 23. Let's see it in action. It's not going to look all that exciting; it found seven and it pretty much finished. But I have an idea: why don't we watch this execute line by line? To do that, we'll use the debugging tools inside of Visual Studio, which we haven't even talked about up to this point, and yet they're probably one of the most important features of using Visual Studio as opposed to just using a text editor and a command-line compiler. To make this work, what I'm going to do is set a breakpoint here on this line of code. Now, how did I do that?
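Just so we're looking at the same thing while we step through it, here's the variation with the if and the break; the message inside the if is a placeholder, since the exact string isn't spelled out here:

    for (int i = 0; i < 10; i++)
    {
        // Console.WriteLine(i);   // the original line, now commented out

        if (i == 7)
        {
            Console.WriteLine("Found 7!");   // placeholder message
            break;                           // break out of the for iteration
        }
    }

    Console.ReadLine();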
To set the breakpoint, I just went to this leftmost column, that gray column, and clicked in it, and it created a little red dot; off to the right-hand side, you can see that the whole line of code is outlined in red. There are, truth be told, a number of different ways to set a breakpoint. Probably the easiest way is what I just showed you, but there are also keyboard shortcuts and menu options. For example, with my blinking cursor on line number 16, I can go to the Debug menu and select "Toggle Breakpoint". If we look over to the right-hand side, you can also see that the F9 key will accomplish the same thing. Great. Now, let's go ahead and run the application and see what happens. The application pops up, but before anything can be printed out to the console window, notice that we have paused the execution of our code, and we're paused right here on this breakpoint. At this point, I can do a lot of cool things. First of all, I can see what my local variables' values currently are. I can also change the values of those variables, I can monitor those values, I can change which line of code will get executed next, and a bunch of other things. Now, this is not a series on debugging; I could easily spend an hour showing you a lot of cool little features. However, what I do want to do is call your attention to this little window at the bottom. Right now we're in what's called debug time, and with the application's execution paused on this line of code, the next line of code that's going to execute is the line that's highlighted in yellow. I can look at these little windows, like this Locals window, for example, and you can see that the Locals window contains any variables that are currently in scope at the moment. Obviously, args is something that we haven't talked about yet; let's ignore that one. What I want to focus on is the value of i. Its current value is zero. How do I know that? Because I'm looking here in the Value column. I can also see what the data type of i is: it's an integer. If I were to hover my mouse cursor over i, I'd be able to see it there as well. And if I were to pin down that value, I'd be able to monitor it in this little helper window; in fact, I can drag it around here. Now watch what happens. Let me readjust some things here; I'm going to step through this line of code. There are a couple of different ways I can step through the code, but I'm going to recommend that we only talk about Step Over for right now. When we learn about methods, we can step into and step out of them, but for right now, this middle button right here, Step Over, or the F10 key on your keyboard, is what we want. I'm going to click it once, and notice that we jumped from line number 16 to line number 22. Why was that? Well, the reason is that i was not equal to seven, so we didn't execute the code inside the code block underneath the if statement, and we jumped to the end of the 'for' code block. Now let's continue to step through this. You're going to see that we increment i by one, and when we do that, notice what happened: the value of i changed from zero to one, and that change is indicated by a change in color. Whenever you see the color red, it means something changed during the previous line's execution; the value of that variable changed from some other value to the value of one. We can also see this in the little mini window right here, which has now turned red as well.
Now that we've incremented, we're going to do the next check to see if i is still less than 10. We'll step one more time through our program, and we'll step to line 14, which opens up our code block, and we'll do another assessment: is i, currently the value of one, equal to seven? No. So we'll jump out of that if block and continue on, and we can just continue through this exercise until we reach the point where i is equal to seven. Now, truth be told, I don't have to keep hitting the step over button. I can just use this continue button, and this will just keep bringing me right back to the breakpoint. It basically says, continue running until you hit another breakpoint. Here at this point I see that i is equal to seven, and that's what I'm checking for, so things should get interesting right now. I'm going to go back to stepping line by line through my code. Here's where I hit the Console.WriteLine, and if I look now on screen, it actually did write that to the console window. Now I'm going to step to the next line of code, and notice that it jumped from the break statement out to line number 23, outside the 'for' statement, to the Console.ReadLine. We can hit continue from that point on; our application is still running until we hit the Enter key on the keyboard, and then we've exited out. Very cool. Now, you may have found it laborious to step through a number of times, or even hit the continue button a number of times, until we found just the right conditions. What we can do is make this breakpoint into a conditional breakpoint. To do that, I'm going to hover over the little red stop sign, I guess you could call it, in the left-most column, and I'm going to click this Settings icon. This will open up a little breakpoint settings window right inline in my code; it pushed all the other code down. Notice that it goes from line 16 here to line 17 way below it. I'm going to add a condition: whenever a conditional expression is true, then we'll break at that point. In our case, when i is in fact equal to seven, we'll break on that breakpoint. You can see that when I hit "Enter" on my keyboard, it's saved, and now I can close this. You can see that the little icon changed from just a red stop sign to having a white plus symbol inside of it. Now when we run the application, notice that i is seven, and that i is seven in our little window here, before we even stopped on our breakpoint. Now we can continue stepping line by line through our code and continue on. We got our result, and we can continue on. Again, I could spend an entire hour just showing you other cool little features that will help you debug your applications, but understanding how to set a breakpoint, how to run your application to a breakpoint, how to step through line by line, and then how to resume, at least temporarily, by using the continue button: those are the key concepts in debugging and in using the Visual Studio debugger. Now let's go ahead and turn off this breakpoint. From now on I just want to eliminate it, and I can do that one of two ways. To completely remove it, I can just click it and it'll go away. Or I can temporarily disable it by using the little icon that was next to the gear we clicked earlier. Now you can see there's a little outline in the left-most column and an outline around the line of code that had the breakpoint, but we're no longer actually going to break on that line. Let's do this.
Underneath the 'for' statement from before, but above the Console.ReadLine, I want to do what I promised at the very outset, which is to show you a foolproof way to get the syntax right for a for iteration statement, and truth be told, for just about anything else, by using a little secret: code snippets. It's not that much of a secret, but you probably didn't know about it, did you? To do this, it's really easy. If you can remember "I need a for iteration statement," just type in the word for. You'll see that it pops up in IntelliSense, and if you look after IntelliSense pops up, there's a little message to the right: code snippet for 'for' loop. Note: Tab twice to insert the 'for' snippet. Let's do it. Tab, tab, and there we go. Notice that it went ahead and pretty much set it all up, although there are some parts that we're going to have to change, like, for example, the length. I don't have to use the variable i for my iteration statement as my placeholder, my counter, whatever you want to call it. I could call it something like myValue, and notice that as I'm typing and then hit Tab on my keyboard, it changed every place that was using the i variable to myValue. Very cool. Hitting Tab also took us to the next spot in our code that we're going to need to replace, which was the length, or in other words, how many times should this 'for' loop iterate? I'm going to say we'll do it while myValue is less than 12. Now, here again, we can use a number of different equality or inequality operators, so we don't have to use less than; we could use equals or greater than, whatever makes sense for our application. But I want to keep it simple and leave it just like that. Once I'm done making changes, I can hit "Enter" on my keyboard, and the areas that were highlighted in a gold color, which indicate the replacement areas, go away. Now my cursor is right between the opening and closing curly braces, and at this point I can continue to write my Console.WriteLine and then myValue, like so, and then we can run our application and we would get the following results. Just to recap: it was a short lesson, but we learned a lot. Not only did we talk about for iteration statements and why you might want them (we'll see them in use later), but we also talked briefly about the debugging tools: how to step through code, how to monitor the value of variables, how to use the break statement to bust out of, to break out of, an iteration statement. We looked at code snippets and how to replace values in a code snippet in order to make it our own. We'll use some of these techniques throughout the rest of the series, so this was a very important video. Let's continue on to the next lesson. We'll see you there. Thanks. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, we're going to talk about arrays, and I'm going to start by making a case for why you need arrays in the first place. Often you're going to need to work with several related variable values. But how do you work with multiple variables and treat them all as if they're part of the same group? Well, let me show you how not to do it. You can see an example of that on screen. First of all, I've taken the liberty of creating a project called Understanding Arrays, so make sure you catch up with me: create a new console window application and you can just follow along.
You don't need to type this part in; it's wrong anyway. You can see what I've done here. I need to keep track of five numbers, and I need these numbers to be related to each other, so without any better tools in my toolbox, I might just create something called number1, number2, number3, number4, and so on, and give them each a value. Now I want to find which variable holds the value of 16. I'd love to be able to loop through them, like we learned about previously, to find which of the variables holds the value 16, but I can't really do that. I'm forced to create an if-else structure, as you can see here below, in order to ultimately find which variable has the value of 16 inside of it. This is not the right way to go about working with multiple values that are somehow related and that you want to treat as a group. There's a better way, and that, as you might assume, would be with arrays, so let's comment all of this out. Previously I talked about a variable as being a bucket in the computer's memory that will hold some value. But let's expand our thinking about this for just a moment and talk about an array. Think of an array like a bucket, or maybe even better, a tackle box. Ever seen one of those? If you go fishing, there are a lot of little compartments inside of it, and each one of those little compartments can hold something, usually a little worm or whatever the case might be. What if we were to use that instead of a bucket? What if we were to put values in each of those little tray areas inside of the tackle box and store that up in memory? Then whenever we needed a value out of that tackle box, we'd just look through and find the particular compartment with what we're looking for in order to work with it. That's the idea of an array, at least if you want to overextend the bucket analogy. Another way to think of an array: it's a sequence of data, a collection of data, although I'm hesitant to use those specific terms, because they have very specific meanings in .NET. Think of it in a very general sense: you have a collection of data you want to keep together; how do you do it? Well, one of the ways you can do that is with an array. Let's do this: I'm going to go ahead and create my first array here. I want you to follow along and notice that I'm using square brackets, not curly braces, when I'm working here. First of all, let's take a look at the declaration of our array called numbers. It is an array of integers; in other words, there are going to be multiple integers all collected under this same umbrella named numbers. You can see that not only am I creating the declaration for this array, I'm also using an equals sign, the new keyword, and int with 5. Some of this, like the equals and the new part, we're going to talk about what that really means a little bit later. But for now, just accept it as how you go about creating an array. Then notice next to that, I have int, and then inside the square brackets I have the number 5. That's how many elements I want inside of my new array: a new array of integers that can hold five integers inside of it. Next, what I do is begin to access each element of the array and put a value inside of that element. Here's the first element of the array, the second element of the array (remember, we're zero-based), and so on through the fourth and the fifth elements: five elements inside of the array, just like we defined here on line number 31.
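Sketched out, that declaration and the element assignments might look roughly like this (the specific values are placeholders, not the exact numbers from the lesson):

```csharp
// An array of integers with room for five elements, indexes 0 through 4.
int[] numbers = new int[5];

numbers[0] = 4;    // first element
numbers[1] = 8;    // second element -- remember, we're zero-based
numbers[2] = 15;
numbers[3] = 16;   // fourth element
numbers[4] = 23;   // fifth element
```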
Now what if we wanted to access the value inside of one of the elements of the array? Well, I would do something like this. It's a Console.WriteLine, obviously. Now what if I wanted to get to and print out the value that's in the second element of the array? Well, then I would use the correct index of the array to access that element. Here's numbers, and I want the second element, which means I'm going to use index 1. So I'm going to index into that array to get to the correct element; in this case, the second element is at index number 1. I can print that out to the screen, and we do a Console.ReadLine here, like so, and we can quickly run the application. You can see that we are printing to screen the number 8, which is in fact the second element of our array. Now the other thing we can do is determine how many items are in the array by looking at the Length property of the array itself: Console.WriteLine, and I'll just go numbers.Length. Let's see what that will output. In fact, let me go ahead and comment that out and run the application. You can see that we're able to programmatically determine how many items are in the array by using the Length property: there are five elements inside the array. Great. Now what would happen if we were to attempt to insert data into another item, a sixth element of the array? What do you suppose would happen here? Well, we'll try it. We'll run the application, and you'll see that we get an exception: an IndexOutOfRangeException was unhandled. In other words, we are outside of the boundary of the space that we defined for our array. We're trying to access compartments that were never created in the computer's memory inside of our little tackle box. In order to remedy this, we can either redefine our array at the time of declaration to say that we actually need six items, or we can change the number of items in our array at runtime. That's a little bit of an advanced topic and I don't want to talk about how you would go about doing that, but it is possible to do it programmatically at runtime. Let's move on from there and talk about maybe a simpler approach to creating new arrays, and that is to not only declare the array but also initialize its values at the time of declaration. So let me comment out everything I have here and we'll do this. Now, instead of giving it a specific size, we're going to let the compiler figure it out on its own, because we're just going to start typing in the values of the elements that should be stored inside of our array. In this case, I can just put in all the items I want, and I can trust that the new array created in memory will be able to hold all six items this time. Let me comment that out. We've been working specifically with integers, but what if we were to work with strings? How would we go about doing that? Well, same idea here. In this case, we want to give it a number of literal strings, like so, and let me move this over a little bit. You can see that we are able to create an array of strings. We don't have to declare up front how many elements we want in our new array; we'll let the compiler figure that out. It will create four items. Now there are a number of different ways we can loop through to access each of the items in our array. Let me show you two ways, and one of them is going to be what you're already familiar with: the for loop, right? I'm going to type for and hit "Tab" twice, and we'll start with an integer i equals zero, and now for the length, let's use names.Length instead. Right?
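Before we finish building that loop, here's a quick sketch of the indexing, Length, and initializer syntax we just covered (the array contents and the string values are hypothetical):

```csharp
// Access by index and check the Length property.
Console.WriteLine(numbers[1]);       // prints 8, the second element
Console.WriteLine(numbers.Length);   // prints 5
Console.ReadLine();

// Compiles, but throws IndexOutOfRangeException at runtime: index 5
// would be a sixth compartment that was never created.
// numbers[5] = 42;

// Simpler: declare and initialize in one step and let the compiler
// count the elements -- six of them this time.
int[] moreNumbers = { 4, 8, 15, 16, 23, 42 };

// The same idea works for strings; here the compiler counts four items.
string[] names = { "Bob", "Susan", "Carlos", "Mei" };
```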
Then inside of here we’ll go and do this
Console.WriteLine, and we'll go names. What do you suppose we'll put inside the square brackets here? We'll use the value i. So what we're going to do is start at zero and continue iterating through until we reach the length of our array, and then we'll stop and jump out of our loop. Then we'll do a Console.ReadLine here, and you can see that this allows us to print out all four items inside of our array to the console window. Great. Now, there's a lot of management required here, but there's an easier way to go about this. Let me comment this out real quick. I'm going to show you a second style of iteration statement. In this case, we'll just do this: we'll do foreach, and I'll go ahead and use the code snippet, so I'll just hit "Tab" twice. Foreach string name in names: I made up the term name as the singular; names is what we actually called our array, right? So now I'm going to hit "Enter" on my keyboard twice, and I'll just do Console.WriteLine name, and let's do a Console.ReadLine. What this will do is allow us to loop through every single name in our array of names, and for each item, it will copy the current element into this temporary variable called name, of type string. Then we can use that to do whatever we want; in this case, we're just going to print it out to the screen. So much easier. But we can use either technique to iterate through our sequence of data. All right, now let me show you one last thing you can do; it's pretty powerful stuff, and we can create arrays of different things. What if we wanted to take a string and reverse it? How would we go about taking, for example, the name Bob Tabor and reversing it so it reads backwards? Well, what we can do is take a string and convert it into an array of individual characters. Once we have an array of individual characters, we can then say, go ahead and reverse the order of those items so that the last becomes first and the first becomes last. So let's do this: I'm going to create a string called zig, and it's going to contain one of my favorite speaker's quotes, one that I have patterned my life after: "You can get what you want out of life if you help enough other people get what they want." Now, that's a very long line of code, so what I would probably do is chop this up into multiple lines. We said before that you can do something like this in C#: all I'm doing is breaking it in half and using the concatenation operator to marry the first string and the second string together, so that's all really one line of code. Now that I have this, what I want to do is create an array of characters. I'm going to use the char keyword, which is the data type char, meaning one character, but I'm going to create an array of characters called charArray, and then I'm going to take this zig string and call a helper method on it called ToCharArray. Every data type has some helper methods that are built into it by the .NET Framework. What this will do is take a long string, split it up into individual characters, and put those into an array of characters. Now that we have our string as an array of individual characters, I can do something like this: I'll call Array.Reverse and pass in the character array. Then finally, we'll do a foreach (tab, tab): for each char, and I'll just call it zigChar, in my charArray.
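Before we write that loop body, here's how the pieces we've assembled so far fit together as one sketch (the names array is the hypothetical one from above, and the final loop body is the Console.Write we're about to add):

```csharp
// Looping with a for statement, using the array's Length as the limit.
for (int i = 0; i < names.Length; i++)
{
    Console.WriteLine(names[i]);
}

// The same thing with foreach: each element is copied into the
// temporary variable 'name' on every pass.
foreach (string name in names)
{
    Console.WriteLine(name);
}
Console.ReadLine();

// Reversing a string by way of a char array.
string zig = "You can get what you want out of life "
    + "if you help enough other people get what they want.";

char[] charArray = zig.ToCharArray();  // split the string into characters
Array.Reverse(charArray);              // last becomes first, first becomes last

foreach (char zigChar in charArray)
{
    Console.Write(zigChar);            // Write, not WriteLine
}
Console.ReadLine();
```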
Console.Write, not WriteLine, just Write, and the zigChar. Hopefully all this makes sense. Let's do a Console.ReadLine. This is just to show you some of the flexibility of working with arrays. Let's run the application, and now we were able to write that whole string backwards. That's pretty much it. There's a lot more that you can do with arrays, however, and as we move through C#, you're going to find that your use of arrays will diminish over time and you'll start using something a little bit more elegant. Think of it as an array on steroids, or maybe a super array: it's going to be called a collection. There are a bunch of different types of collections, and we'll learn about those near the very end of the series of lessons. But anyway, that's how you work with arrays. Remember that you declare an array by giving it its size, or its initial values, at the time of declaration. Then you can access individual elements of the array by using indexes into the array to get or to set the value in a given element. We can loop through the elements of an array using a for or a foreach iteration statement, and we can even use some cool utility methods like Array.Reverse to reverse the order of the items in the array, and there are also ways to sort items and so on. Let's continue on to the next lesson. We're doing great. See you there. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, I want to show you how to create, and how to then call, simple methods. Now, creating methods is going to help us with a number of different things as we write more interesting applications. Methods are going to help us organize our code better. They're going to eliminate duplicate code and the need to copy something we did earlier and paste it later in our code base. They're going to allow us to take a certain feature or functionality in our application, give it a name, and then call it by its name anywhere in our application. Then, if we ever need to update or fix an issue with the code that's encapsulated in a method, we only have to do it in one place instead of changing it everywhere we copied and pasted our code. Remember what we said at the very outset of this course: a method is merely a block of code, as defined by curly braces, and it has a name, and since it has a name, we can call it by its name in order to invoke the code defined in its code block. Methods are actually one of the most important building blocks we're going to learn in this course, and they will allow us to build more interesting and complex applications, so this is definitely something we need to understand thoroughly. To begin, you'll notice that I've already created a project called Simple Method. Please take a moment, create a new console window project, and catch up with me. What I'm going to do is build the simplest example I can possibly imagine: a simple "Hello World" application again, but this time using a method. We're going to define our helper method inside of our class Program, because remember, we're going to keep methods inside of the context of a class. Related methods go together in the same class; we'll expand on that later. But it should be outside of the definition of our previous method, the static void Main.
I'm going to go right past the closing curly brace for static void Main, and I'm going to hit "Enter" a couple of times on my keyboard. That should put my cursor after static void Main's definition but before our class Program's closing curly brace, so somewhere in this area is where we want to work. We have to define things in the right place, just like we learned before, and here, let's create our first very simple helper method. That's all it takes. Now, I'll explain the word private when we talk about accessibility modifiers and classes. We'll talk about the word static much later in this course; however, just to let you know, it has more to do with building console window applications than with what you might typically find yourself using in a different style of application, but we'll talk about it later. The void is something that's important; we'll talk about that in just a few moments. I'm going to create a block of code and give it a name. In this case, the name is SimpleHelloWorld. Additionally, I'm going to give it an opening and closing parenthesis, and we'll look at what those are used for in just a moment. Then, in the body, I'm simply going to write whatever code I need my SimpleHelloWorld method to do. In this case, it's one line of code, very simple, but hopefully you get the idea. Now, how do I call that method? How do I execute it from my static void Main? Well, remember, it has a name, and we can call it by its name in order to invoke it. But remember, there's one other piece of information we need to provide here: not only do we need to give the name of the method we want to invoke, but we also need to use the method invocation operators, which are the opening and closing parentheses in this context. Now we've called our method and we expect output in the console window. I'm going to go ahead and add one more line of code just so we can see our result, like we always do, and now when we run our application, we will get the unexciting result: Hello World. But the most important part of this was to create the simplest example we possibly could. Now that you see how easy it is to create a method and how easy it is to call the method, let's go ahead and shut down that project. Instead, what I want you to do is open up a project that you should be able to find wherever you're currently watching this video, or wherever you originally downloaded it from; there should be source code available. You should be able to find that source code in the "before" folder for Lesson 10. Copy that HelperMethods project folder into your projects directory, or somewhere on your hard drive, and then you can open it up from there. I've already got this opened up here, and you can see that I've created a simple name game application. Again, this is simple, but at least there's more code we can use to demonstrate how useful methods can be for us. It's going to ask us for our name and then where we were born, and then we're going to use the little algorithm, I guess you could call it, from the previous lesson, where we learned how to take a string, convert it into an array of characters, reverse the order of each of the characters in the array, and then display it back out to the console window. That's what we have here in our results: Oak Park, Tabor, and Bob spelled backwards. Now, in order to accomplish this, I have, what, from line 13 to line 56, so about 43 lines of code.
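Before we dig into that code, here's a minimal sketch of the simple Hello World method and the call to it from a moment ago (the exact greeting text is assumed):

```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        // Call the method by its name, followed by the
        // method invocation operators ().
        SimpleHelloWorld();

        Console.ReadLine();   // so we can see the result
    }

    private static void SimpleHelloWorld()
    {
        Console.WriteLine("Hello World!");
    }
}
```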
Admittedly, I made this longer than it probably needs to be, but notice the amount of duplicate code that I've introduced into the application. Here is where I am retrieving the first name, the last name, and the city, and those are essentially the same, even though what I'm collecting is a little bit different; but it's only two lines of code each, so that doesn't hurt much. Here we are actually taking the first name, or the last name, or the city, and we're going to do the reverse operation on it, and we do that three times; there's the third one. Then what we're going to do is print out the results into a string called result, which we'll then output in a Console.WriteLine. But notice here, we're essentially doing the same thing here, and here, and then again here, so there's a lot of duplication. Now, duplicate code in and of itself is not a huge problem; there's really no way you can completely eliminate duplicate code in your application. But duplicate code is usually the result of copying and pasting code. You've invented the wheel earlier in your code base, and your first thought is, "Well, I'll just copy and paste it because I need it here and here and here in my code." Invariably, what happens is that your intent is to copy it but make a few subtle changes to it, and in your haste, frequently, at least if you're like me, you'll forget to change something and you've introduced a bug, and it can steal your time, maybe just seconds, but what if it's minutes or even hours spent trying to figure out why you have a weird problem with your application? Copy and paste is dangerous; you should always treat it with great suspicion. In addition to that, if you have the same code repeated multiple times, then whenever there's a change requested in how our application works, we're going to have to change it in multiple places. But what if we were to take some of this functionality, like this, for example, and this, and extract it out into its own method and then just call it three times? First of all, it would reduce our need for copy and paste. If we needed to fix a problem with our code, we could do it in one place. And also, if we were to give that method a meaningful name in our system, it would describe what we're attempting to accomplish. Right now we're just filtering through lines of code, and it's a little bit more difficult to quickly ascertain what this application is attempting to do. But if we were to give our methods nice, meaningful names, it might read more like a paragraph of English instead of a bunch of disparate lines of C# code. So that's the goal. Now, the second reason we might want to break this up into methods is to simplify the readability of the code. We already talked about making it more human-readable, but also, there are a lot of lines of code here that we have to pass through to understand what's going on, and if we can reduce the amount of code to read, we can improve the readability of our code. We want to reduce bloat every time we have the opportunity. We should strive to make our code readable, clean, clear, performant, and maintainable, so that if we need to make a change, we can do it in one place, and methods help us accomplish all of those things. Let's do this. Let's create a method; we've already learned how to do that.
I'm going to go somewhere between the end of our static void Main but before the end of our Program class, and I'm going to define a private static void ReverseString, like so. What I'll do is copy some of the work that we've done here, for example lines 24 and 25, and paste those into our new method, and then I'm going to copy the code that we used to actually print all of this out to screen and paste that here as well, in our ReverseString method. Now, to get started, just to make sure that this method is going to work, I'm going to hard-code the message: I'm going to create string message equals "Hello World", and then I will change firstName to just message throughout, and firstNameArray to messageArray, and we'll hit Ctrl+Period to rename, like we learned about before. Then finally, what I could do is gather up all of the individual items printed out in reverse order using this foreach, or I could just go here and Console.Write each item, like so, and that'll accomplish, at least for now, the same thing. Now that I have this working, I want to comment out everything I've done up to this point so that I can isolate it, and then we'll start reintroducing things as we get them working. I'm going to call the ReverseString method by using the name of the method, then the method invocation operators, and then obviously the end-of-line character, and I'm going to go ahead and hit "Start". Not a very exciting example, but now we know that the logic of our ReverseString method is working. What I'd really like to do is make this a reusable method. Currently it's not all that useful; how many times do I need to print Hello World in an application? But if I were to remove this line of code here and replace it with an input parameter, so that the caller can pass in the string that it wants reversed, I improve the usability of this method dramatically. To create an input parameter, I need to give it a data type and then a moniker, or a name. What I'll say is that I'm going to allow the caller to pass in a string, and internally I'm going to call that string message. I'm essentially creating a variable that allows an outside piece of code to pass a value into the body of my method. I can utilize that value inside of my method and then, hopefully, as a result, achieve some more interesting results. Now, having done that, I've changed the signature of the method. I used to have a method called ReverseString that accepted no input parameters, but now it has to accept one input parameter, and that's not optional, so I get this red squiggly line beneath ReverseString, and if I hover my mouse cursor over it, it's going to say there's no argument given that corresponds to the required formal parameter 'message', and you're like, "What does that mean?" Essentially, we did not call the method correctly, because now we have to give it something, like a hard-coded string, or, probably the better thing to do here, the first name that we collect way up here on lines 16 and 17. Let me uncomment that, and go down here and comment these out. Now I'm collecting the first name, the last name, and the city, but everything else I'll leave commented out for now; eventually we'll remove it. I'm going to call this ReverseString method three times, and each time I'm going to change what I'm passing in, like so. Now, when I run the application... well, let's do this as well. Let me copy that so that I can get similar results.
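At this point the helper method might look roughly like this (the variable names inside the method are my reconstruction of what's on screen):

```csharp
// Reverses whatever string the caller passes in and prints it.
private static void ReverseString(string message)
{
    char[] messageArray = message.ToCharArray();
    Array.Reverse(messageArray);

    foreach (char letter in messageArray)
    {
        Console.Write(letter);
    }
}

// Called three times from Main, once per value the user typed in:
// ReverseString(firstName);
// ReverseString(lastName);
// ReverseString(city);
```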
Let's go ahead and remove that and see the application now. Make sure you have what I have on screen; pause if you need to. Let's run the application and see it working, and it should work similar to what we had before, with fewer lines of code. It mostly works, but you'll notice there's a subtle problem: there's no space in between Oak Park, Tabor, and Bob. This is a good example of where I can make a change in one place in my code and it will fix the problem throughout the code base, wherever I'm using and calling my new method. To fix this problem, all I need to do is add a Console.Write with a blank space character; that should allow sufficient spacing between each call to ReverseString. Now when I run the application and I put in my details, Bob Tabor, Oak Park, it should work correctly, and it does. Great. Now, this is definitely one way to go about writing this application. As I look at this ReverseString method, I see a problem. Typically, whenever I create methods, I attempt to describe in English what that method is responsible for doing inside of my software system. In this case, I would describe the functionality of this method as: it reverses a string and it prints it to screen. And herein lies the problem. I really only want each method to do one thing in my system, and when I use the word "and", as in "and prints it to screen", I feel like that's two responsibilities in the system. Typically, what I would do is split this out into two separate methods. You might say, well, that's a little excessive, and that's true in the simple case, but following that rule of thumb will help you as you begin to think about how to compose methods: what goes into a method? How many methods should I write? Should I create one massive method or lots of tiny methods? Typically the answer is that more, smaller methods with descriptive names are better. In this case, what we're going to do is change up the functionality of the application a little bit. What I'll do is take out all of this, where I'm actually doing the writing to screen, and what I want ReverseString to do now is accept an input string and then return, or report back, the result to the caller. In other words, right now we're using the void keyword, which means: I want you to go off and do something, but please don't report back to me. I don't care what you have to say, I don't need to know anything from you; you just go, you work, you be quiet, and everything is great. However, we might want to change this and say: instead of being quiet when you finish your job, I want you to report back to me the results of what you did. In this case, I want you to return the reversed string to me: I'm going to give you a string, and what I want you to return to me is a string that's been reversed. Notice that when I changed void to string, I got a red squiggly line, because I have not actually returned anything back to the caller. I need to use the return keyword, like so. I could do a foreach and gather up each individual item into a longer string, like we did previously right here by building that result. However, there is an easy way to do this with just one line of code. Just like there's a helper method called Reverse on the Array class, there's also a String class, and the String class has helper methods too. One of them is the Concat method, and it will allow us to pass in an array of individual characters, concatenate them all together, and return a full string.
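Here's a sketch of where the reworked method ends up, now with a single responsibility:

```csharp
// ReverseString now does one thing: it reverses and returns.
// Printing is left to the caller.
private static string ReverseString(string message)
{
    char[] messageArray = message.ToCharArray();
    Array.Reverse(messageArray);

    // String.Concat glues the individual characters back into one string.
    return string.Concat(messageArray);
}
```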
In this case, let's just give it the message array, like so, and that should work just fine. Now notice that I'm able to call ReverseString and I'm not really accepting back any values. Why is that? I thought that if we were going to say, "Hey, report back to me," I would need to do something with it. In other words, I would expect to see something like this, where I'm capturing whatever has been returned from the ReverseString method. That's optional. I can listen for it, retrieve it, save it, or do something with it, or I can ignore it. In this case, what I would probably want to do is save it, so I would call this reversedFirstName, like so, and then string reversedLastName equals, and then string reversedCity equals. We'll shorten this up in a moment, but hopefully you see where I'm going with this. Then what I can do is Console.Write (WriteLine, or here let's just do Write): reversedFirstName plus a space. This seems laborious to do it this way; I've got a better idea. The String class has another helper method. We looked at the Concat method, but it also has Format, and Format works a lot like Console.WriteLine. In fact, they're almost identical; the only difference is that Console.WriteLine will print its result to the screen, whereas String.Format will merely create a new string as a result of whatever has been formatted. The reason I'm using it is so that I can use the replacement codes, like so. Here I go 0, 1, 2, and I can pass in reversedFirstName, reversedLastName, and reversedCity, like so. Since that runs off to the right-hand side of my screen and I can't easily see it, what I'll typically do is move each of the input parameters to the method, in this case Console.Write, onto separate lines to increase the readability. Notice that they're indented a little bit, but this is all essentially one line of code, even though it's spread across four lines. It improves readability because I don't want to have to scroll off to the right-hand side of the screen in order to read my work. Get in the habit of formatting your code for readability and keep things narrow and small. If things do go off to the side of the screen, don't be afraid to move them down onto separate lines to increase readability. Now let's see what we have. This should work. Let's run the application. It works, great. But what if I want to put this into its own method? I could simply do that like so. I think I can just use a void in this case. What I could do is go DisplayResult, and I could just take this and paste it in. But now what I need to do is pass in these three values. How do I go about doing that? Well, we know how to add one input parameter; how do we add multiple input parameters? What we'll do is define our first one, reversedFirstName, like so; then, to add subsequent input parameters, I'll just use a comma on the keyboard and add the second one, like so, and then the third one, like so. Again, since it's off to the right-hand side of the screen, I might put my cursor right before the "s" in string and move those input parameters below the definition of our DisplayResult method, again for readability's sake. You may not agree; you don't have to. That's a stylistic choice. At this point, I should be able to call DisplayResult. Let's call DisplayResult, passing in the reversedFirstName. I just happened to use the same names here, but I could have called either the input parameters or the temporary variables something different.
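To keep our bearings, here's roughly what we've built so far (the exact format template is my assumption, based on the 0, 1, 2 replacement codes described above):

```csharp
private static void DisplayResult(string reversedFirstName,
    string reversedLastName,
    string reversedCity)
{
    // string.Format builds the string; Console.Write prints it.
    Console.Write(string.Format("{0} {1} {2}",
        reversedFirstName,
        reversedLastName,
        reversedCity));
}

// In Main, capture what ReverseString returns and pass it along:
// string reversedFirstName = ReverseString(firstName);
// string reversedLastName  = ReverseString(lastName);
// string reversedCity      = ReverseString(city);
// DisplayResult(reversedFirstName, reversedLastName, reversedCity);
```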
ReverseString, lastName, and as I'm doing this, I'm beginning to think to myself, why am I even going through all of this? Why do I even need these variables? Can't I just eliminate them altogether and just copy this and paste it here? I mean, it returns a string, so I should be able to do that, and I should be able to do this, and this. Then I'll finish off the line, and that should work just fine, and here I can eliminate these lines of code completely from my application. Let's see if it works. Still works, great. It feels like this could probably go into DisplayResult as well, which would reduce things even further. Now, suppose that I don't want to pass in each of these individual values. What if I want to display the result and only pass in one value? What could I do in that case? Well, I can provide additional ways of calling a method by creating what are called overloaded versions of our methods. In this case, what I'll simply do is copy and paste the exact same method definition twice. Notice that on the second definition I get an error. Let me hover my mouse cursor over it so you can see it: the type 'Program', that's our class, already defines a member called DisplayResult with the same parameter types. You can create additional versions of the same method with the same name, but they have to have a different method signature. A method signature is the number and the data types of the input parameters in your method definition. In this case, I already have a method called DisplayResult with three strings. I could change these parameter names to just any old gobbledygook text, and I'd still get an error; it's the same problem, because we've not changed the signature of the method. However, I could change this by allowing only a single message, a single string, to be passed in as an input parameter. Now I have two completely different versions of the method as far as C# is concerned. In this case, I wouldn't need any of this; I'd probably just do this, like so, and then I could call it by doing this. Basically what I was trying to avoid last time, but we'll go ahead and do it anyway. This time we're passing in one long string. Notice the use of the concatenation operator and the use of some spaces defined by two double quotes with just an empty space in between. We should have two lines that display essentially the same thing here. Let's make sure we do this right; we'll put a WriteLine between them just to make sure there's a break. Bob Tabor, and we get two results that look identical. Now, you might wonder why we're doing this. Why in the world would you ever want to create two methods with the exact same name that essentially do the same thing, but allow the user to pass in different information? A good example of why you might want to do that is Console.WriteLine. Here we go with Console.WriteLine: have you ever noticed, as you type the opening parenthesis for the method invocation operator, that there's a little message that pops up down there? It says one of 19, and as I use the arrow keys on my keyboard to go up and down, notice that the number goes one, two, three, four, five. These are all the different data types that the WriteLine method will accept. It'll accept an input parameter of type Boolean, which is true or false. It'll accept a single character or an array of characters.
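We'll come back to WriteLine's list in a second; first, here's what our own two overloads might look like side by side, along with the nested-call style of invoking them (a sketch, not the exact on-screen code):

```csharp
// Two methods with the same name but different signatures -- overloads.
private static void DisplayResult(string message)
{
    Console.Write(message);
}

private static void DisplayResult(string reversedFirstName,
    string reversedLastName,
    string reversedCity)
{
    Console.Write("{0} {1} {2}",
        reversedFirstName, reversedLastName, reversedCity);
}

// Calling the single-string version, skipping the temporary variables
// by nesting the ReverseString calls and concatenating with " ":
// DisplayResult(ReverseString(firstName) + " "
//     + ReverseString(lastName) + " "
//     + ReverseString(city));
```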
It'll accept a decimal value, which is usually used for money, or a double, which is used for longer mathematical calculations, or a float, which is another floating-point type. It allows you to pass in an integer and other integer-style values. It allows you to pass in a string, and then others as well: 19 different versions of Console.WriteLine, to make it convenient for the developer of an application to utilize that method in their app. Now, when we go to DisplayResult, we'll see the same thing in IntelliSense. DisplayResult, and notice that I have two versions. I'm looking at version one of two, and notice the emphasis on the input parameter: the first version accepts one input parameter of type string called message, and the second version accepts three input parameters of type string: reversedFirstName, reversedLastName, and reversedCity. There you go. That is why you would create overloaded versions of your methods. Now, in this case, notice that we could eliminate so much of the code and still get the same working results; I'll just delete that, and, for the sake of simplicity, I'll go ahead and remove this as well. Now we've reduced the amount of code dramatically for our application and improved its flexibility by adding multiple ways to actually display the results. At Developer University, I issue a decree to students that no method should have more than six lines of code in it; if it has more than six lines of code, then it's probably attempting to do too much in the system. You should be able to express what it's doing in English, and if you find yourself saying it does "this and that", then that's probably an opportunity to split it up into multiple methods. Of course, rules are meant to be broken, but as a rule of thumb, six lines of code per method will keep your code tidy and readable. It'll keep everything scoped nice and tight, and it'll improve the quality of your code dramatically. That's all I wanted to say about methods, but we're going to be using them from this point on. If there's anything about this that doesn't make a whole lot of sense to you, by all means, please make sure that you watch this lesson again or seek out some other resources. You're doing great. Let's continue on. See you in the next video. Thank you. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, we're going to look at another iteration statement: the while iteration statement. Let's just recap the iteration statements we've learned about up to this point. We've learned about the for loop, or the for iteration statement, which allowed us to iterate through a block of code a preset number of times based on a counter. Then we also learned about the foreach iteration statement, which allowed us to iterate through a block of code once per item in an array. Now, in both of these cases, we knew ahead of time how many times to iterate through the given block of code. But what if you didn't know up front how many times you needed to iterate? Maybe you need to keep iterating until some condition is met. In that case, you'll want to use the while iteration statement. We'll also take a look at the do-while iteration statement, which guarantees that we iterate at least one time before the condition is evaluated.
We'll look at both of them in this lesson. I'm trying to think of use cases where this would be useful, and the most obvious one to me was creating a little menu system for our console window application. You've seen them before, especially if you've worked with DOS in the past. At any rate, what we want to do is begin with a new project; you can see I've already created it. It's called WhileIteration, again another console window application. Please pause the video and catch up with me. When you're ready, let's go ahead and get started by creating a method that will print out a list of options to our users in the form of a menu. We'll do something like this. We have some more work to do here, but what we want to do is display this, so let's just start by displaying the main menu, like so, and let's run the application. Here we can choose an option, but no matter what we choose at this point, our display will disappear. But suppose that we wanted to actually kick off another feature of our application. So, for example, let's go private static void PrintNumbers, and then private static void GuessingGame, like this, and each will just do a Console.WriteLine. Now let's go ahead and call those from here: PrintNumbers and then GuessingGame. Now let's run the application, and we choose the first option and we're able to play the print numbers game. But when I hit "Enter", we are completely removed from the application. What if I wanted to return back to that main menu? How could I go about that? Well, I could use a while statement to determine whether to show the menu again or to completely exit out of the application. To make this work, I'm going to start off with a new data type called bool, which we referred to briefly a moment ago; it's basically true or false. We want to create a new Boolean variable called displayMenu, and we'll set its initial value to true. Now what we'll do is create a while statement. I'll just type in while, tab, tab. What I'll say is: while displayMenu equals true, and then we will call MainMenu. Now, a couple of things here. What we'll need to do is retrieve back from MainMenu a Boolean indicating whether the user chose "Exit" or not. So we'll set displayMenu equals MainMenu, and then have MainMenu return a bool itself. Of course, we've completely broken the application at this point; that's okay. Here we are going to continue to display the main menu until MainMenu returns the value false. If somebody chooses option number 3 to exit, then we might choose to completely exit the application, in which case we'll return false. Now, if they choose some other option, like 4 or 5 or 6 or some other text, then we might just want to redisplay the menu, so we'll return true. Furthermore, we might want to return true here as well, after we go through each of these options. Now let's go ahead and run the application and see how it works this time. First of all, if we choose option number 1, it'll display a message, and after we hit the "Enter" key on the keyboard, it will display the menu again, and I can select number 2 and it'll display the message, and then I can hit "Enter", and now we can choose "Exit" and we actually exit out of the application. What the while statement allowed us to do in this case is check a condition: as long as that condition is true, we keep executing the code inside of our code block, and once it's no longer true, we break out of the while loop.
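A rough sketch of the whole menu loop at this stage (the menu text, the option checks, and the stub game methods are my reconstruction of what's on screen):

```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        bool displayMenu = true;

        // Keep showing the menu until MainMenu reports back false.
        while (displayMenu == true)
        {
            displayMenu = MainMenu();
        }
    }

    private static bool MainMenu()
    {
        Console.WriteLine("1. Print Numbers");
        Console.WriteLine("2. Guessing Game");
        Console.WriteLine("3. Exit");

        string choice = Console.ReadLine();

        if (choice == "1")
        {
            PrintNumbers();
            return true;       // redisplay the menu afterwards
        }
        else if (choice == "2")
        {
            GuessingGame();
            return true;
        }
        else if (choice == "3")
        {
            return false;      // exit the application
        }
        else
        {
            return true;       // anything else: just show the menu again
        }
    }

    private static void PrintNumbers()
    {
        Console.WriteLine("Print Numbers");
        Console.ReadLine();
    }

    private static void GuessingGame()
    {
        Console.WriteLine("Guessing Game");
        Console.ReadLine();
    }
}
```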
Now, what we can do here is actually shorten this up a little bit. We don't need to say while displayMenu equals true. Remember, just like with the if statement or the else if, we want to evaluate an expression, and if that expression is true, we execute the block of code below it. displayMenu is already either true or false; it already evaluates to true or false, so we don't have to check for equality. We can just write it like that, very simply. Moving on. Now, what we want to do is fill in the gap on some of these other little games we have here. Let's build the print numbers game. In order to do that, let's go ahead and say Console.Write, "type a number", and then int result equals Console.ReadLine. That's going to return back a string, but what we really want is an integer, so I'm going to go int.Parse, and this will allow us to take whatever string has been returned and convert it into an integer. Now we should have the actual integer value. Next, we'll create a counter for ourselves: int counter equals 1, and then we'll go while, tab, tab: the counter is less than our result. Then we will do a Console.Write with the current value of counter, another Console.Write with a little delimiter, and then we'll increase the counter. Now, there's a tiny bug with the application; we'll come back to that in just a moment. But let's go ahead and run the application, let's type in the number 5, and it will print out the numbers 1, 2, 3, 4. We're able to change, on the fly, the number of times we iterate through a block of code. Now, it just so happens that this isn't exactly what we wanted. Let me exit out of this. What we really wanted was to display from 1-5. I'm going to go ahead and add result plus 1. If I typed in the number 5, this would actually make this value 6; as long as we're less than six, we continue to execute these lines of code. But once this statement becomes false, once the counter is equal to 6, it will break out and hit this line of code here on line number 59, the Console.ReadLine. That should work. Now, the other thing that I noticed when we ran the application is that we keep seeing additional data being written to the window. I might want to clear out everything that's been displayed so far. Here, we'll start at the top and do Console.Clear, and that should clear off the screen for us. I'll just copy and paste that into PrintNumbers as well. When we run the application again, this time I'm going to choose option number 1, and notice that it cleared off the screen, and in the print numbers game I'm going to type a number, let's go 4, and it prints out 1, 2, 3, 4. I hit "Enter", it clears that off, and it displays the menu again. Awesome. The next thing I want to do is build the guessing game. Again, here I'm going to go ahead and clear off everything that's currently on the screen. What I want to do is choose a random number and then allow the end user who's playing the game to try and guess the number between one and 10. How do I create a random number in C#? We use a built-in class from the .NET Framework Class Library called the Random class. We'll create a new instance of the Random class; we'll talk about what that means, creating an instance of a class, in an upcoming lesson. Let me do this.
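Before we get to the Random class, here's the print numbers game filled in (the prompt text and the delimiter are illustrative):

```csharp
private static void PrintNumbers()
{
    Console.Clear();
    Console.Write("Type a number: ");

    // Console.ReadLine returns a string; int.Parse converts it to an int.
    int result = int.Parse(Console.ReadLine());

    int counter = 1;
    while (counter < result + 1)   // +1 so we print up to and including result
    {
        Console.Write(counter);
        Console.Write(", ");       // a little delimiter
        counter++;
    }

    Console.ReadLine();
}
```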
We'll go Random myRandom equals new Random. Again, that may make no sense to you whatsoever, and that's just fine; I'll explain what that actually did in an upcoming lesson when we talk about classes. I want to get a random number from my Random class, so I'm going to call the Next method, and one of its overloaded versions lets me give it a minimum value and a maximum value. The minimum value will be 1, and for the maximum I want 10; I'm going to say, don't let it be more than 10. In other words, 11 is out of bounds. Now that I have a random number, I'm also going to keep track of how many guesses the player has made up to this point. Then I want to keep track of whether or not the user was correct. I'll call it incorrect, and we're going to say it is true that they are incorrect. Now watch this: I'm going to create a do-while statement. I want the block of code that I'm going to create to execute at least one time; that's why I'm choosing the do-while as opposed to the while. The while evaluates the condition the very first time, and we may never actually run the code inside of our code block, but this time I want it to run at least once. We say do this, and then at the very end we check the while condition: as long as it's true, we run the block again, and once it's false, we break out of it. While incorrect is true, while we continue to be incorrect, we're going to keep guessing. Let's start with a Console.WriteLine, and we'll say: guess the number between one and 10. We want to retrieve that number: the string result equals Console.ReadLine. Now that we have it, we can do an evaluation. If the result is equal to the random number, so if whatever the user typed in is equal to the random number that we generated, then we want to break out of the while statement, since we're no longer incorrect. In other words, let's go ahead and set incorrect equal to false. At this point we've guessed correctly, we'll break out of the while statement, and here we would want to say Console.WriteLine: hey, you got it correct. However, if they did not guess correctly, then we would want to write Console.WriteLine and then "wrong". We probably want them to guess again, which will happen because while incorrect is still true, we'll come back and re-execute this block of code. Looks like a missing end-of-line character here; I can see, as I hover my mouse cursor over that little red area, that I forgot a semicolon there. Otherwise, this should work. Now, there's one more thing that I want to do: I want to keep track of the number of guesses. Each time the user makes a guess (we've already initialized that variable, guesses), I'm going to increment guesses by one. I type in guesses plus plus; that means I want to add one to the current value of guesses. Then here I want to print out how many times it took: it took you this many guesses, like so. Let's run the application. Let's choose to guess a number between one and 10. We'll start off at three. You can see it says it's wrong, so I can continue to guess a number between one and 10. Let's go 4, 5, 6, 7, 8. The number was eight; it took me six guesses to get there. Now when I hit "Enter", it returns me to my main menu, and here I'll just hit "3" to exit. Let's go ahead and update our menu at this point; let's just label the options Print Numbers and then Guessing Game. We've used the while statement in a couple of capacities.
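A sketch of the guessing game as described (note: the transcript compares the ReadLine result directly to the random number; here I parse the input first so the comparison compiles, and the prompt text is illustrative):

```csharp
private static void GuessingGame()
{
    Console.Clear();

    Random myRandom = new Random();
    // Next's upper bound is exclusive, so (1, 11) yields 1 through 10.
    int randomNumber = myRandom.Next(1, 11);

    int guesses = 0;
    bool incorrect = true;

    do
    {
        Console.WriteLine("Guess the number between 1 and 10:");
        string result = Console.ReadLine();
        guesses++;                              // add one to the running count

        if (int.Parse(result) == randomNumber)  // parsed so int compares to int
        {
            incorrect = false;
            Console.WriteLine("Correct! It took you {0} guesses.", guesses);
        }
        else
        {
            Console.WriteLine("Wrong!");
        }
    } while (incorrect);

    Console.ReadLine();
}
```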
The while statement here is used so that we can continue to display the menu until the user decides to exit. We're also using it to print out values to the screen, where we get to determine at runtime, or let the user determine at runtime, how many iterations to run, as opposed to the for or the foreach, where it's predetermined ahead of time. Then finally, we were able to use the do-while to continue to ask a series of questions until we get a satisfactory answer, at which point we can break out of the loop. The do variation allows us to run our code block at least one time, as opposed to potentially never running it at all if the condition is false from the start. That's why you would use the while iteration statement; it's pretty useful in certain cases. Let's continue on. We'll learn about strings in the next lesson. We'll see you there. Thanks. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. Now, many of the types of applications that you'll build as a C# developer will require you to work with text, whether you're formatting the text for display to the end user or manipulating the text in some way. A good example would be whenever you are massaging data. That's a term developers use to talk about taking data from a file or a database that's in some raw form and manipulating it: you need to remove certain characters, or add certain characters in certain positions, in order to prepare it for ingestion by some other software system, or to be saved in a different file format, whatever the case might be. Manipulating data is a key skill, whether for display or for the sake of massaging data into the right format. Furthermore, whenever you're working with the string data type, you're working with a data type that can hold a lot of information. To extend the bucket analogy, you're working with a really big bucket, and when you're working with big buckets, you have the responsibility of working with them in an efficient way, because when you're working with data that takes up a lot of memory and requires a lot of processing power, you are putting a strain on system resources. Now, admittedly, it would take a lot of string manipulation to slow down a computer, especially a modern computer. However, as software developers, we want to do things efficiently, and so it's important to understand that there are tools in the .NET Framework Class Library that will help us work with and manipulate strings in a very efficient way. That's really the purpose of this lesson: to show you how to perform some simple string manipulations, like inserting special characters into your literal strings; formatting strings, especially numbers and dates and things of that nature; manipulating strings, changing things about a string, searching for items and removing them or replacing them with something else in strings; and then also working with strings in a more efficient way. As you can see, I've already taken the liberty of setting up a new console window project called Working with Strings. Please take a moment, pause the video, and catch up with me. I've already added three lines of code that we'll use to demonstrate some key manipulations for our strings. What I'll wind up doing is just typing in a string, showing you some manipulation, and then moving on to the next line. But at any rate, let's go ahead and start by talking about the special nature of the backslash character, which is that character there.
I always used to get my characters confused. That’s forward slash, that’s backslash. A backslash character can be used to escape or insert escape sequences into literal strings. This will allow us to do things like put special characters, insert line feeds and things into a literal string. For a good example of this, what if I wanted to type something ironic like my so-called life? I wanted to insert a series of double quotes around the word so-called so that it displays the way that I would, as the author of this, expect it to be displayed on screen. Now unfortunately, you can see that the Visual Studio on behalf of the C-sharp compiler doesn’t like this at all. It thinks that you have two literal strings here. The word my and life and in between something that it can make no sense of whatsoever, the word so or the term so a minus symbol. Then the word called. These are not variables that have been declared. It doesn’t recognize them as keywords. C-sharp does not like this. In order to insert a special character like a double quote to say, I don’t want this to delineate a literal string, I want to use this inside of my literal string, I’ll use the backslash character before each double quote, which escapes out the double quote and makes it available for use inside of the literal string itself. Now when we run the application, we can get double quotes inside of our side of our string. Now similarly, let’s put in my string equals. I tell you what, I’m just going to copy this to my clipboard so I can keep using it. Now, what if I needed to add a new line? What if I need a new line and I want to split this up under two separate lines in my application? What I can do is insert a new line character. Think of a line feed. Slash n will create a line feed. Let’s go to run the application. You can see that it’s smart enough to know that even though we didn’t separate with spaces around the word a and new it was still able to find that escape character for the line feed and represent it correctly in our little string. Now, you might say well, that’s all well and good, but what if I need to actually use the backslash character? For example, in an instruction to go to your C colon slash drive, you’ll notice that we get a red squiggly line underneath the backslash because it’s expecting us to use the backslashes and escape character as an escape sequence. But we’ve given it nothing after that to indicate which escape sequence we want to use. In this case, we have two options. In fact, in all of these cases we have two options. We can use another backslash character to escape out of it to represent this correctly. Now you can see that you should go to your C colon backslash drive. I’ll just do this again. Go to your C drive. What we can do is add a at symbol in front of the literal string, and that tells C-Sharp that we want to use our backslash characters as true backslash characters, not as escape sequences or special characters. Let’s move on from there. We’ve already talked about the use of string dot format and we showed how we could do something like this, where we are going to insert the words 1st and 2nd into this template. The template contains a number of replacement codes. The number inside the replacement code corresponds to which argument is passed in to the string that format as input parameter. Let’s run the application and we would get what you might expect first equals second. What I didn’t tell you at the time was that you can actually reuse the replacement code multiple times. 
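The escape-sequence options just walked through, gathered into one small sketch (illustrative strings, not the exact on-screen code):

    string quoted   = "my \"so-called\" life";     // \" inserts a literal double quote
    string twoLines = "my so-called\nlife";        // \n inserts a line feed
    string path1    = "go to your c:\\ drive";     // \\ inserts a literal backslash
    string path2    = @"go to your c:\ drive";     // @ makes backslashes literal (verbatim string)

    Console.WriteLine(quoted);
    Console.WriteLine(twoLines);
    Console.WriteLine(path1);
    Console.WriteLine(path2);

    // And a replacement code reused multiple times:
    Console.WriteLine(string.Format("{0} equals {1}, and {0} again", "first", "second"));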
Like so, or you can use them in a different order if you like. Let’s go back and change the order so the second argument is the first item displayed and the first argument is the second item displayed, like so. Furthermore, the replacement code has some special powers. For example, say we want to use string.Format to display currency to the end user; I want to display $123.45. In my case, since my computer’s culture is set to English (United States), this will be represented as dollars and cents, but if your country and culture codes are set to, for example, English (United Kingdom) or some other language and culture, you would probably see something different: your native culture’s symbols for currency. To format a value as currency, you use a colon and then C immediately after the numeric replacement code. The zero still represents the first item in the list, but the colon C says format it like currency. When we run it, at least on my computer, you’ll see dollars and cents with the dollar symbol. There are all these little variations on this. For example, what if I wanted to display a really long number to an end user, like 1234567890? I want it to look like a number, not like how I have it here, where you can’t really tell: is that 12 billion, or 123 million, or what? To remove the confusion, you can use the colon and the N format character. This adds in group separators, and decimal places if you want them, to give you the appropriate formatting for a large number: 1,234,567,890. Continuing that same thought, what if we were to go string.Format and we wanted to represent a value as a percentage? For that we use the colon and the P format character after the replacement code; be sure to get those formatting codes in there. Just to show there’s nothing up my sleeve here, I’ll label the output as a percentage, like so, and then insert the value into the replacement code. Let’s go and run the application, and you can see the percentage is 12.3 percent in this case. Finally, the last one I’m going to show you, though I’ll point you to where you can find more information, is how to create a custom format. For example, in the United States, phone numbers are presented in a very specific way. Let’s go string.Format with 1234567890. I want that displayed like a phone number, so inside the replacement code I’ll use zero and then pound symbols to represent each digit I want formatted. In this case I’ll put parentheses around the first three digits, because that’s how an area code is usually presented in the United States, then a space, then three more digits, then a dash, then four more digits. That’s just how phone numbers are presented in the U.S. Let’s run the application, and you can see it does in fact format that number the way I would expect. Let me throw one little monkey wrench into this. What if I were to supply too many digits? I added another one and two at the very end, and yet I don’t have those accounted for in my formatting. Where will they be presented? As you can see, it pushes the area code out to five digits instead of just three. The moral of the story is that the formatting is applied from right to left whenever you’re using custom pound symbols to create a custom format for a numeric value.
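The format specifiers just covered, as one sketch. The exact literals are illustrative, and the output shown in the comments assumes an English (United States) culture:

    Console.WriteLine(string.Format("{0:C}", 123.45));        // currency: $123.45 on an en-US machine
    Console.WriteLine(string.Format("{0:N0}", 1234567890));   // number with group separators: 1,234,567,890
    Console.WriteLine(string.Format("{0:P}", 0.123));         // percentage, roughly 12.3 %
    Console.WriteLine(string.Format("{0:(###) ###-####}", 1234567890)); // custom format: (123) 456-7890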
Again, the digits push their way out, and once it reaches the leftmost placeholder it will put as many remaining digits as it needs onto that first position, pushing the formatting out to the left. Be aware of that. The next thing we want to do is start manipulating strings in a more meaningful way. Up to this point we’ve been formatting strings, but what if I want to actually change some things about the strings themselves? Let me start by providing a little something we can sink our teeth into. I’m going to type in a lyric from a song that I like. Notice that I left in an extra space at the very beginning of the string and two spaces at the very end of it. Let’s go myString equals, and type in that lyric. The most important thing to realize about working with these data types is that they have built-in functionality provided to us by Microsoft. For example, every string has the Substring helper method that we can use to say, hey, start at a specific position and grab all of the characters in a given range. I can say start at position six and give me back everything from position six on. When I run the application you can see it starts with the word summer, which is at the sixth position, and grabs everything to the very end of that line. Here is position one, two, three, four, five, six: it skips the first six characters, starts me there, and pulls everything else, giving me a subset of the string from that point on. But I can also say, give me only the next 14 characters after that. Don’t give me everything to the very end of the string, just the next 14 characters, and so I can isolate a handful of characters, in this case just three words, in that string. I can also do something like myString.ToUpper, and that will do what you might think: it’ll make everything upper case. Great. What if I want to replace one character with a different character? myString.Replace will find every blank space and replace it with a double dash, like so. When we run the application you see we get double dashes instead of our spaces, which makes it more obvious that we had some spaces at the beginning and the end. We can also use myString.Remove to remove a number of characters from our string. Instead of selecting out that substring of characters as we did a moment ago, we can remove them entirely from the string. You can see that “summer we took” has been removed from the string completely. Also, what if we want to actually remove those leading and trailing spaces? We can use the Trim method. Let’s do, first of all, string.Format; here I’m going to grab the length of the string to demonstrate this, the before length and then the after length. Let’s go myString.Length, and then myString dot, calling the Trim method to strip off all of the extra spaces at the beginning and the end. I could choose to trim off only the trailing spaces or only the leading spaces, but I’m going to call Trim to get rid of them all and then determine what the length of the string is at that point. You’ll recall that we used the Length property when we were working with arrays to find out how many items were in the array. We can also use the Length property on a string to tell us how long the string is. That’s ultimately what we’re doing here.
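The built-in string helpers just mentioned, gathered into one sketch. The lyric text here is a placeholder standing in for the song line used on screen, so the exact positions and lengths are illustrative:

    string myString = " that summer we took threes across the board  ";   // placeholder lyric with extra spaces

    Console.WriteLine(myString.Substring(6));        // everything from position 6 onward
    Console.WriteLine(myString.Substring(6, 14));    // just the 14 characters starting at position 6
    Console.WriteLine(myString.ToUpper());           // upper-case the whole string
    Console.WriteLine(myString.Replace(" ", "--"));  // swap every space for a double dash
    Console.WriteLine(myString.Remove(6, 14));       // delete 14 characters starting at position 6
    Console.WriteLine(string.Format("Before: {0}, After: {1}",
        myString.Length, myString.Trim().Length));   // Trim strips leading and trailing spaces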
Tell me how long the string is before we make any changes to it, and then, after we trim off those extra spaces, how long is the string? Run the application again. You can see the before was 46 and the after was 43; we trimmed off three spaces. Great. The last thing I want to do is talk about working with strings in a more efficient way. Let me type in a really quick code example here. We’ll do myString plus equals. Hopefully you remember what that operator is for: it says, take whatever the current value of the string is and concatenate everything on the right-hand side onto it. Here we’re concatenating on double dashes plus the current value of i, we loop through 100 times, and then we merely display myString in the console window. Let’s go ahead and get rid of that so we start with a blank slate, and run the application. The output isn’t all that interesting, just a printout of numbers with some dashes in there, but what’s going on behind the scenes is the more interesting part. The string data type is what’s called an immutable data type, meaning you can’t simply add more characters onto the existing value in place. What happens behind the scenes is a little dance that the .NET Framework runtime performs to make it look like you’re still working with the original variable myString, the original bucket. What it really does is create a second bucket and start copying things over: it copies the previous value of myString plus any of the new stuff we want to put in there, creates this new string in a new bucket, removes the old bucket, and gives the new bucket the name myString. Then we say, let’s do it again; in fact, let’s do it a hundred times, and it has to go through that dance 100 times in order to produce the final result we print in our console window. You can see that’s a very inefficient way of working, and we’re requiring a lot of memory management that could put a drain on the system if we did a lot of it. Instead, we can use a different data type. Whenever we’re going to manipulate strings in this way, where we’re doing a lot of string concatenation or a lot of string manipulation, we can use something called a StringBuilder. Again, just like I said with the Random class from the previous video, this may not make a whole lot of sense at first, but hopefully once I talk about what classes are and how to create new instances of classes, this nomenclature, StringBuilder myString equals new StringBuilder, what is that all doing, will become clear; we’ll talk about it very soon. For now, let’s create a new StringBuilder and then do something very similar to what we did before, iterating 100 times. But this time, instead of doing a simple concatenation, we’ll use the Append method, which is a more efficient way to append additional information to the StringBuilder object rather than forcing the runtime to create all those temporary versions of the string. myString.Append(i), and the result will look identical, but what’s going on under the hood is that we’re working with strings in a more efficient way. Use the StringBuilder along with its Append method to work with strings in a very efficient manner.
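A side-by-side sketch of the two approaches just described, the inefficient concatenation loop and the StringBuilder version (StringBuilder requires using System.Text at the top of the file):

    // Inefficient: each += builds a brand new string in memory and throws the old one away.
    string myString = "";
    for (int i = 0; i < 100; i++)
    {
        myString += "--" + i;
    }
    Console.WriteLine(myString);

    // Efficient: StringBuilder appends into one growable buffer.
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < 100; i++)
    {
        builder.Append("--");
        builder.Append(i);
    }
    Console.WriteLine(builder.ToString());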
We covered quite a bit in a very short amount of time: how to work with the backslash character for escaping and inserting special characters into our literal strings, and how to use string.Format. In fact, let me show you this little page for standard numeric format strings. If you search for that phrase on Bing.com, you’ll be able to find the article, and it gives you examples and many other usages beyond the formats we looked at here. We also looked at several of the built-in helper methods to replace characters, pull out subsections, remove them completely, or use ToUpper and ToLower to change the case of strings, and finally how to work with strings in a more efficient manner. Now we’re going to give the same treatment to dates and times, because you’ll find yourself working with dates and times frequently whenever you’re building applications, and there’s a lot of similar functionality there as well. We’ll see you in the next lesson. Thank you. Hi, I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In the previous lesson we looked at how to format and manipulate strings, whether for display or for the purpose of massaging data. In this lesson we’ll do the same thing for dates. We’ll start by talking about formatting dates and times. We’ll look at how to add and subtract time from a given date, how to create a DateTime object that represents this moment in time or a moment in the past or the future, and finally how to determine the length, or duration, of time between two DateTime objects. To begin, I’ve created a new project called Dates and Times. Pause the video, please, and catch up with me. What we’ll do here is create a new DateTime object by going DateTime, and we’ll call this myValue and initialize it to a valid DateTime. The easiest way to do that is to represent this very moment as the application is executing: DateTime.Now, which represents this instant. The simplest thing we can do is a Console.WriteLine, taking myValue and calling the ToString method. You’ll see we have a lot of To-something methods, and we’ll look at several of them in an effort to format our DateTime the way that we want. This default ToString method takes our country and our locale and presents dates and times as they are typically presented in our country and culture. Here in the United States, we usually put the month first and then the day; in most other countries it’s day, month, year. Then we have the time of the afternoon at which I’m actually recording this video. Notice that it also uses AM and PM as opposed to military, or 24-hour, time. To change the way this is presented, we’re given a bunch of additional helper methods, so we can do something like myValue.ToShortDateString, which displays just the month, day, and year. Similarly, ToShortTimeString displays just the time of day, 3:35 in the afternoon. Great. We can also choose a longer form of the date; you can see it’s Tuesday, March 15, 2016, as I record this. We can do the same longer version for time as well, myValue.ToLongTimeString, where you can see not only hours and minutes but also seconds. Great.
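The display helpers just listed, in one sketch; the outputs in the comments are examples and depend on your machine’s culture and the moment you run it:

    DateTime myValue = DateTime.Now;

    Console.WriteLine(myValue.ToString());              // culture-specific date and time
    Console.WriteLine(myValue.ToShortDateString());     // e.g. 3/15/2016
    Console.WriteLine(myValue.ToShortTimeString());     // e.g. 3:35 PM
    Console.WriteLine(myValue.ToLongDateString());      // e.g. Tuesday, March 15, 2016
    Console.WriteLine(myValue.ToLongTimeString());      // e.g. 3:35:22 PM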
Oftentimes we’ll want to do some DateTime math, which means we want to add milliseconds, seconds, minutes, hours, days, months, or years, whatever the case might be. We can do that through a series of helper methods, the Add methods. Here I’m just going to Console.WriteLine, and we’ll take myValue and start off with something simple like AddDays. You can see that we can add milliseconds, seconds, hours, days, and everything up from there, but let’s do something simple like AddDays. We’ll add three days and then call ToLongDateString on the result, like that. Now, you may have noticed me do this in the past, where I’ve
used the period, remember, that’s the member access operator, and chained together a series of commands. In this case, we have a value that represents a date. If I call the AddDays method, notice as I hover my mouse cursor over it that the return value of AddDays is another DateTime. Since I now have another DateTime in my hand that represents today plus three days, I can call that DateTime’s ToLongDateString, which, as you can see, returns a string data type. That’s the notion of chaining method calls together: as long as you continue to chain together methods that return a value of some data type, you can continue to call methods for that given data type. Let’s go ahead and see: three days from now it will be, in fact, Friday, March 18th. Let’s do something with hours: myValue.AddHours, we add three hours, ToLongTimeString, and that would be 6:38 PM. Then what if I wanted to subtract time? Are there any SubtractHours or SubtractDays methods? No. However, what you can do is simply use a negative number to subtract, so instead of adding days, I’ll subtract days. We’ll just go ahead and run that, and you can see three days ago it was Saturday, March 12th. Great. In addition, we can grab off parts of a date or time. Here again, let’s go myValue, and let’s just pull off the current month. This returns an integer, and Console.WriteLine, we know, can accept an integer, so we’ll just print out the current month. The third month, obviously, is March. Now, we’ve looked at how to create the current DateTime, but what if I wanted to create a DateTime in the past or in the future? I could do something like this: DateTime, and I’m going to call this myBirthday. Here again is that new keyword I’ve hinted at a number of times; we will get to it, don’t worry, but I’m going to use it one more time: new DateTime, and I’m going to pass in the year 1969, the month December, and then the day, the 7th; that was the day I was born. Now I can do something like we’ve been doing up to this point, Console.WriteLine and myBirthday.ToShortDateString, just to prove that it’s a date just like the other dates we’ve been working with: 12/7/1969. There’s one final way to create a new DateTime, so let’s create another version of birthday equals DateTime.Parse. Remember we’ve used int.Parse, where we were able to take a string and turn it into an integer; here we’re going to take a string and turn it into a date, hopefully. We’ll just type in my birthday one more time as a string, and that should give us a DateTime object that represents December 7, 1969. Now what I’m going to do is try to determine how many hours I’ve been alive, or how many days; days is probably the more interesting number. In order to represent a span of time, we’re going to use a new data type called TimeSpan. Here I’m going to use a new TimeSpan, we’ll call it myAge, equals DateTime.Now.Subtract, and the Subtract method will take the current date and subtract whatever date we want to use, in this case myBirthday. Now that I have an object that represents a span of time, I can say, represent that span in terms of days or years or whatever the case might be. To do that, I’ll go Console.WriteLine and then use myAge dot; here I can say, give me the total number of days that I’ve been alive and print that to the screen. You can see I’ve been alive, what, 16,900 days? I’m getting old.
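A sketch of the DateTime math, construction, parsing, and TimeSpan steps just walked through; the exact outputs naturally depend on when you run it, and the parse format depends on your culture settings:

    DateTime myValue = DateTime.Now;

    Console.WriteLine(myValue.AddDays(3).ToLongDateString());    // three days from now (method chaining)
    Console.WriteLine(myValue.AddHours(3).ToLongTimeString());   // three hours from now
    Console.WriteLine(myValue.AddDays(-3).ToLongDateString());   // a negative value subtracts
    Console.WriteLine(myValue.Month);                            // just the month as an integer, e.g. 3

    DateTime myBirthday = new DateTime(1969, 12, 7);              // a specific date in the past
    DateTime parsedBirthday = DateTime.Parse("12/7/1969");        // same date, parsed from a string

    TimeSpan myAge = DateTime.Now.Subtract(myBirthday);           // the span between two dates
    Console.WriteLine(myAge.TotalDays);                           // roughly 16,900 days in the recording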
I say that every time I record this video, and I feel older every time. Anyway. Here we were able to format dates for display, manipulate dates by adding and subtracting time, and determine the difference between two dates using a TimeSpan object. We also talked about different ways to create a date, whether it be now or some time in the past or future, either by using one of the versions of the DateTime object’s constructor, and we’ll talk about constructors later, or by using DateTime.Parse and passing in a string. Let’s stop right there and pick it up in the next lesson. You’re doing great. See you there. Thanks. Hi, I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. You might recall that at the outset of this course I said a class is a container for related methods, and I used the Console class as an example. We had Console.WriteLine, Console.ReadLine, Console.Write, and we’ve even used Console.Clear, all methods that have something to do with working with the console window. I said it makes sense to put them all in the same class, the Console class. Now, truth be told, I intentionally oversimplified my explanation of classes and their relationship to methods, because first of all I wanted you to gain a little confidence in yourself, confidence that you can do this, that this isn’t hard, that you can get your hands around it and you’re going to do just fine. I wanted to do that before we got into the topic of classes, because while there’s nothing hard per se about classes, they do lend themselves to a conversation about object-oriented programming, a style of programming that some beginners find a little difficult to grasp at first. Now, the code you’ve been writing in your methods has all been defined inside of classes, and you’ve been calling methods that were defined inside of classes. Classes have been all around you; you’ve been working with them from the first line of code you wrote. You’re really already an old pro at this, whether you realize it or not. I’m merely going to fill in some of the details you don’t yet know about, in this lesson and a couple of subsequent lessons, to round out your knowledge so you can fully harness the power of the .NET Framework Class Library in your applications. Maybe someday, when you sit down to architect some big application for a large company you go to work for, you’ll begin to think like an experienced object-oriented software developer. But at this early point in your C-sharp experience, I really just want you to be able to do one thing and do it well: find what you’re looking for in the .NET Framework Class Library and have the confidence to utilize the methods and properties of the classes defined there. The truth of the matter is that object-oriented programming is such a massive topic that I certainly couldn’t do it justice in this course; in fact, I have a whole course devoted to it on devu.com. Again, I really just want to accomplish one thing here: I want you to know enough about classes and objects and properties and methods so that you can harness the power of the .NET Framework Class Library inside of your own applications. Now, the way we’re going to learn about classes and methods and properties and all that good stuff is by creating simple custom classes of our very own.
Let’s start by talking about creating a simple application for a car lot. Suppose I own a car lot and I want to sell cars, and I want to build an application that helps me keep track of all the cars on my lot. I might need to create a number of variables to hold information about a given car, because I’m going to use that information to determine its value based on its make, its model, its year, and so on. I might start off by creating a couple of variables called car1Make, car1Model, car1Year, and so on, in order to keep track of that information. Now, what if I need a second car in my application? Well, then I guess I could create another set of variables called car2Make, car2Model, car2Year. What if I need a third one? I think you see where I’m going with this: things are going to get out of hand pretty quickly. Then what if I decide one day that the value of the car is based not only on the make, the model, and the year, but that we also need to keep track of the color of the car? In that case, now I’ve got to add string car1Color, string car2Color, and so on. You can see that this simply is not the right approach for keeping track of information that should be collected together about a given entity. I need a way to keep all of this data about a car together in its own little container. I want to track the make, the model, the year, the color, and maybe a bunch of other things too about a single car, but I don’t want to treat it like a bunch of loose information; I need it all related together. What I’m going to do is start off by defining a class that contains four properties that describe any given car on my car lot. To begin, you can see I have a project that I’ve already started called SimpleClasses; go ahead and pause the video and catch up with me if you like. What I want to do is work outside of the first class that’s already been defined in our Program.cs file. I want to work inside the namespace SimpleClasses, but I don’t want to define a new class inside of my existing class; I want to work outside of that class, here, and so I’m going to define a new Car class, like so. I’m going to give it four properties. I could type it all out by hand, and I’ll explain what I’m doing in just a moment, or I can use the prop code snippet shortcut: prop, tab, tab, and then move between the replacement areas using the Tab key. So: Make as a string, tab, tab, Model, enter, enter; prop, tab, tab, int, Year, enter, enter; prop, tab, tab, string, tab, tab, Color, enter, enter. I’ve just defined a class named Car with four properties. This Car class allows me to define a data type that describes every car in the world. Every car has a make, a model, a year, and a color, and a bunch of other information that I might or might not be interested in for my specific application, but my aim here is to use this definition of what comprises a car in order to create instances of the Car class that represent all of the cars on my car lot. In other words, I want to create a bucket in the computer’s memory that’s just the right size to hold information about any given car on my lot. It should contain not only the fact that it’s a car, but also the values of its make, its model, its year, and its color, all in one big bucket up in the computer’s memory so that I can access it. There are two parts to this: defining the class itself, and then, once I’ve defined it, creating instances of that class.
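The class definition just described, written out as a sketch; it sits alongside (not inside) the Program class, in the same namespace:

    class Car
    {
        public string Make { get; set; }
        public string Model { get; set; }
        public int Year { get; set; }
        public string Color { get; set; }
    }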
Here the class is the definition, but when I create a new instance of this class, I’ll be working with an object, and sometimes those terms get confused. The class is the blueprint; the object is an instantiation, something that’s been created as a result of having the blueprint or the pattern. The way we create a new instance of the Car class is to do this, and I’ll call it myCar to avoid confusion. At this point I’ve declared it just like I would any variable, by stating the data type; it’s just a little more interesting, a little more complex, because the data type is the Car class. Then I give it the name I want to call it by, myCar. Now, that’s only part of what I need to do. The next thing is to actually create a new instance of that class, to say, put this up in memory, in the bucket, so to speak. Here we go: new Car. Again, there are two parts to this equation, and we’ll talk about it more as we go through the course: first I declare a new Car variable, and then I create an instance of Car and put it up in memory; two distinct steps. In the real world, you can use the same blueprint to create many different houses. In the neighborhoods I’ve lived in before, you might describe them as cookie-cutter houses; they all look the same. You could use the same pattern to create clothing over and over, or you could use the same recipe to create the same cake or casserole and get the same results each time. Each time you build a new house, it will be at a different address. Each time you follow the pattern, you create a new instance of the clothing that can be sold to a different customer. Each time you follow the recipe, you create a new instance of it that you can offer during the same meal or a different meal. The same is true with classes: each time you create a new instance of a class, you have a new object that is distinct and separate from the other instances of that same class in the computer’s memory. They each live by themselves. A class is like a cookie cutter. Keep in mind, you can’t eat the cookie cutter itself; you eat the cookies you make from the cookie cutter. The cookie cutter gives each of the cookies its shape, and so when you instantiate a new instance of a class, you’re basically using your class as a cookie cutter to stamp out new instances, and you have one, two, three, four new cookies that you can then put in the oven and bake. Focus on the new keyword; it’s what you might consider the factory. It actually builds the new car and puts it into memory. It uses the blueprint, the pattern, the recipe, the cookie cutter to create a new instance, and it brings the class to life in the computer’s memory and makes it usable by your application. You can create many instances of a given class, many objects all based on the same class, but each object will be distinct from the others, if by no other distinction than the address in memory where it lives. What I want to do is not only set the properties of this car, because I have these four properties that I want to use to describe this single car on my car lot, but also access, or get, those properties back out, and that works just like working with variables.
In this case, instead of accessing a loose make variable, I go myCar.Make and set it equal to Oldsmobile. Admittedly, in this particular case I’m merely hard coding these values; if this were a real application, I’d ask an end user to input this information or grab it from a database, something along those lines. There we have it: we have one instance of the Car class and I’ve set all of its properties, and now I want to get those properties back out and print them in the console window. We’ll do this in the easiest way possible. We access, or get, the values just like we set them before, using the name of the object, dot, the name of the property. Let’s go myCar.Make, myCar.Model, myCar.Year, and myCar.Color. Now you might be wondering, well Bob, why did you do it that way and not Car.Make or Car.Model? Remember, Car by itself describes the class, the blueprint, but what we want to work with is one instance of the blueprint, and that’s why we’re calling that instance myCar; it’s the variable name in the computer’s memory that we want to work with. Let’s separate these out onto separate lines and then finally go Console.ReadLine. This should not be an exciting application at all, because we’re merely printing things to the screen, but at least I can show creating a new instance of a class, setting the properties, and then getting the properties and printing them out. That’s what the get and the set are for. There are actually longer ways to declare a property; in fact, let’s do this: propfull, tab, tab. This is a longer, more complete version of creating a property, and there are reasons why you would want to use it, but I don’t want to talk about it right now. For the most part, for our simple needs, we’ll use the abbreviated version of defining a property in our classes. Now, did you notice that we got full IntelliSense support? Whenever I typed out myCar dot and used the member accessor operator, I was able to see all the members of the class, the Make, the Model, the Year, and the Color, all represented with little wrench icons in IntelliSense, so I can access them, whether to set their value or get their value. Furthermore, I’m able to set values the way I would with normal variables, using the assignment operator, and work with them and read them just like any other variable in my system. There’s nothing all that special about them, except that they’re all related to a specific instance of a class. We created a new data type, the Car data type, and since it’s a data type, we can use it just like any other data type in our system. So, if I want to create a little method here, private static, I’ll use the decimal data type because I’m going to work with money, and I’m going to create a method called DetermineMarketValue and allow it to accept a Car as an input parameter. In this case, I’m simply going to hard code carValue to $100 and leave it at that. If this were a real application, I might someday look up the car online using some web service to get a more accurate value, but for today we’re just going to hard code the value to 100 and return carValue.
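Roughly what the code on screen might look like at this point, as a sketch; both methods are assumed to be members of the Program class:

    private static decimal DetermineMarketValue(Car car)
    {
        decimal carValue = 100.0M;   // hard-coded for now; a real app might call a web service
        return carValue;
    }

    static void Main(string[] args)
    {
        Car myCar = new Car();
        myCar.Make = "Oldsmobile";
        myCar.Model = "Cutlass Supreme";
        myCar.Year = 1986;
        myCar.Color = "Silver";

        Console.WriteLine(myCar.Make);
        Console.WriteLine(myCar.Model);
        Console.WriteLine(myCar.Year);
        Console.WriteLine(myCar.Color);

        Console.ReadLine();
    }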
Here I can call DetermineMarketValue, passing in myCar, and it should return the value back to me: decimal value equals DetermineMarketValue of myCar, and then Console.WriteLine, using what we learned previously to format and print out the value of the car, like so. Let’s run the application; you can see that it’s worth $100. Now notice what I did here: I used a capital C Car and a lowercase c car. The capital C corresponds to the name of the class, because I named the class with a capital C, and the C-sharp compiler is smart enough to know that Car and car are two different things. It’s a common naming convention to use the same name for an object as for its class when there’s no reason not to, when there’s nothing special about that particular car, like it being in some special state. I could have picked a different parameter name to make it more obvious what I was doing, but there’s nothing wrong with declaring the input parameter’s data type as Car and then giving the input parameter the same name, just with a lowercase first character; as far as C-sharp is concerned, they’re two very different things. Moving on, I want to talk about creating methods on the class. We’ve already said that classes are containers for methods. We created this helper method here in our Program class alongside static void Main, but it might make more sense to create that method inside the Car class itself, since the Car class already has access to information like the make, the model, the year, and the color, and that’s the information we would use to make a determination about its value. Here, let’s define this as a public decimal DetermineMarketValue. We’re not going to allow anything to be passed in, because we already have all the information we need right here. Let’s create a little algorithm: if the Year is greater than 1990, then we’ll set the car’s value, which we first need to define, so let’s go decimal carValue, and set carValue equal to $10,000; if it’s a relatively new car, we’ll value it at 10,000. Otherwise, we’ll say the car is only worth $2,000. This is a very, very overly simplistic example, but it demonstrates the fact that inside an instance of the class you’re able to access its properties: we’re able to access the current car’s Year in order to determine its value. So in this case, let’s comment this out and comment that out, and here we’ll go Console.WriteLine, myCar.DetermineMarketValue, like so. Because this comes back as a decimal, I’m still going to want to format it. Now let’s run the application. Since it’s a 1986, which is before 1990, it’s only worth $2,000. In this lesson we used a very concrete example. We’ve all seen cars, driven cars, owned cars. A car is easy to conceptualize and represent in a class because there’s a tangible, real-world equivalent. Now, my assumption is that your main exposure to classes will be when you’re using classes defined by Microsoft in the .NET Framework Class Library, and most of the time those classes don’t represent real, tangible things. They’re very conceptual in nature. You might have a class that represents a connection to the internet, or a class that represents a buffer of information that’s streaming from the hard drive. Those don’t really have real-world, tangible equivalents, so you need to be aware of that. In most cases the .NET Framework Class Library classes don’t have real-world equivalents, but the ideas are exactly the same.
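For reference, a sketch of the version with the market-value logic moved into the Car class itself; the thresholds mirror the narration:

    class Car
    {
        public string Make { get; set; }
        public string Model { get; set; }
        public int Year { get; set; }
        public string Color { get; set; }

        public decimal DetermineMarketValue()
        {
            decimal carValue;
            if (Year > 1990)
                carValue = 10000.0M;   // relatively new car
            else
                carValue = 2000.0M;    // older car
            return carValue;
        }
    }

    // Usage, formatted as currency:
    // Console.WriteLine("{0:C}", myCar.DetermineMarketValue());   // $2,000.00 for a 1986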
As you grow as a software developer, you might want to invest a little more time in learning how to create your own library of classes; those classes can interact with each other, and they can represent real things in your company or in the real world, or conceptual things. The process you go through to break down a problem in the world and represent it in objects is called object-oriented analysis and design. Again, that’s not a topic we’re going to cover in this series of lessons, but you can learn more about it at devu.com, where I spend a lot of time talking about those things. To recap: a class is just a data type in .NET, similar to any other data type like a string or an integer, except it allows you to define additional properties and methods. You can define a custom class with properties and methods, then create instances of that class, or rather, create an instance of the class and therefore work with an object, using the new operator. You can then access that object’s properties and methods using the dot operator, the member accessor operator. There’s quite a bit more to say about classes. Don’t worry if you don’t understand everything just yet, why you even need them, or how to fully utilize them. Just make sure you understand the process we went through in this lesson: defining a new class, creating an instance of the class, setting its properties, getting its properties, passing an instance of the class into a method, and even defining a method inside the class itself and allowing it to access its own members, like its other properties. If you don’t understand much more than that, you’re doing just fine. You’re exactly where you need to be; don’t worry, we’ll cover lots of other topics related to this in the upcoming lessons. We’ll see you there. Thank you. Hi, I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson we’ll continue to talk about classes and methods. We’ll begin by talking about the lifetime of objects: objects come to life, they live for a period of time, and then they die; they’re removed from memory. We’ll talk about the .NET Framework runtime and its role in the creation, the maintenance, and ultimately the removal of objects from memory. Next, we’ll talk about constructors, which are simply methods that allow us, as developers, to write code that runs at the moment a new instance of a class is created. Finally, we’ll talk about static methods and properties; that static keyword has been lingering around for some time now, and we’ve been using static properties and static methods throughout this course, even from our very first examples, so we’ll finally tackle that issue in this lesson. Let’s begin by creating a new project. You can see I’ve already done that; pause the video and catch up to where I’m at right now. I’ve created a new project called Object Lifetime. Furthermore, you’ll see that I copied the Car definition from our previous lesson. If you like, you can type it in to help build some muscle memory and remind you of the prop, tab, tab shortcut, the code snippet in Visual Studio that creates these short, auto-implemented versions of properties; we talked about that a little while ago. Ultimately, you can see that on line number 13 we create a new instance of our Car class.
That new instance we’ll call myCar. We talked about this in the previous lesson, but I felt it deserved a little more explanation, because there’s actually a lot going on under the hood, and it’s helpful to understand it as we begin to work with classes and objects. Whenever we issue a command to create a new instance of a class, like we have on line number 13, the .NET Framework runtime has to go out and find a spot in the computer’s memory that’s large enough to hold a new instance of the Car class. That much we know. The computer’s memory has addresses that are similar to street addresses, like the address you live at and the address I live at. Admittedly, a computer’s memory addresses look dramatically different from our addresses, like 123 East Main Street, because they’re typically represented as hexadecimal values, but they’re known addresses nonetheless. It’s easy, then, for the computer to find something in its memory by using its address. The .NET Framework runtime’s first job is to find an empty, available address where nothing is currently living, where no data is currently being stored, and that spot has to be large enough to store an instance of our class. The runtime will then create the object instance and copy any values currently stored in that object instance up into that memory address. Then it takes note of where it put that object: it notes the memory address where it put that instance, it hands that address back to us, and we store that address in the instance name of our class. In this case, myCar, that variable, is actually holding on to a reference, in other words, an address in the computer’s memory where we can access that object again. Whenever we need to access the new instance of the Car class, we merely use its reference name, in this case myCar. So myCar is simply holding an address; it’s simply a reference to an instance of, in this case, a Car class in the computer’s memory. Whenever you need to work with that instance of the Car class, you just use the myCar identifier, and the .NET Framework runtime takes care of everything else for you. It gives you the illusion that you’re working with the object itself, but in reality you’re just holding on to a reference to an address in the computer’s memory. Now, there’s an analogy that helps me sort all this out in my mind, and it continues to extend the bucket analogy: if that object stored in the computer’s memory is what we’ve equated to a bucket, an address, an area that holds on to our values, then what’s returned back to us as programmers is a handle. That’s what myCar is; it’s our handle to the bucket. We’ve used that bucket analogy a number of times and it’s served us well. We’re essentially storing values in that bucket just like we were before, and we’re holding on to that bucket using our reference to that area in the computer’s memory. What happens if we let go of the handle? At that point we’ll no longer be able to get back to the bucket. We’ve lost the bucket somewhere in the computer’s memory; it will no longer be accessible to us. Can we ever get back to it? Well, no.
What happens is that the .NET Framework runtime is constantly monitoring the memory it manages, looking for objects that no longer have any handles associated with them. Once we let go of a handle, the reference count, the handle count, I guess you could call it, goes to zero, and at that point the .NET Framework runtime says, I see that nobody’s interested in you anymore; they’ve let all of their handles to you expire or go out of scope, so you must no longer be needed, and it removes the object and throws it in the garbage. That process of monitoring memory, looking for objects that no longer have any references to them, is called garbage collection. It’s one of the core features of the .NET Framework runtime, and it’s one of the reasons why it’s easier to work with C-sharp at first as a developer than maybe going directly to C++. In an unmanaged language like C++, you, the developer, may have to manage memory on your own, and sometimes you might forget that you’re leaving things in memory and not cleaning them up, not removing them yourself, so your application might have a memory leak. Or you might have a corrupted memory region, where you’re using an area of memory, forget that you’re using it, and copy something else to that same area; when you go back to retrieve the value you originally put there, it’s corrupted. That leads to corrupted memory in applications. You don’t really get those issues in C-sharp, because the .NET Framework runtime takes care of all the memory management for you. Let’s do a little experiment here: if we can have one handle to a bucket, what happens if we attempt to create a second handle to the same bucket? Let me go to myCar and start setting some of the properties, like the Make equal to Oldsmobile. Then we’ll set the Model equal to Cutlass Supreme, the Year equal to 1986, and finally the Color to Silver. Keep that in mind: we’ve created a new object that we’re referencing using the myCar identifier. An instance of the Car class lives in memory, and we’re holding on to it with a handle called myCar. But what if we create another Car variable like this, myOtherCar? What have we really done right now? We’ve simply created a handle, but we’ve not attached it to any bucket, any car, in the computer’s memory. At this point, what I could do is go myOtherCar equals myCar. What have we really done there? We’ve merely taken one handle to a bucket in memory, created a second handle, and said, “Hey, let me copy your address,” so that both handles reference the same bucket in the computer’s memory. To prove that, what I’ll do is a Console.WriteLine like we did before; give me a second here, and we’ll reference myOtherCar’s Make, myOtherCar’s Model, myOtherCar’s Year, and then myOtherCar’s Color. Let me separate these onto different lines for readability’s sake, and then a Console.ReadLine for good measure. Now let’s run the application. You can see that even though we set the properties through myCar, since we copied the reference to the Car object in the computer’s memory into a new variable called myOtherCar, I can still get to the values that are in memory, because both variables point to the same object. I can even change something: set myOtherCar.Model equal to the 98; that was the larger-style model of that car.
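A small sketch of the two-handles idea just described, assuming the Car class from this lesson:

    Car myCar = new Car();
    myCar.Make = "Oldsmobile";
    myCar.Model = "Cutlass Supreme";
    myCar.Year = 1986;
    myCar.Color = "Silver";

    Car myOtherCar = myCar;          // copies the reference (the address), not the object

    Console.WriteLine(myOtherCar.Make);   // "Oldsmobile": both handles see the same bucket

    myOtherCar.Model = "98";              // change through the second handle...
    Console.WriteLine(myCar.Model);       // ...and the first handle sees "98" too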
Then let’s go back and do something similar just to prove that they’re one and the same. We’re going to use our reference called myOtherCar to change the Model from the value Cutlass Supreme to the 98, and then we’re going to say, hey, show me what’s in the myCar object. Now when we run the application, you can see we’re printing out what’s currently in myCar, and it’s the same value we changed through myOtherCar, because they both point to the same place. I just want to emphatically make that point here: as you can see, we now have two references to the same object in memory. We’ve essentially attached a second handle to the same bucket, so we can use either one to retrieve the data in the bucket, so to speak. If you don’t like that analogy, maybe it helps to think of this in terms of balloons. I have a balloon and two strings tied to the balloon. What happens when I cut the first string? I’m still holding on to the balloon. But what happens when I cut the second string? The balloon flies away, and we’ll never see it again. As references go out of scope, in other words, whenever the current thread of execution leaves the code block we’re currently in, or when those object references are intentionally set to null by the software developer, the number of references to the object, the number of handles on the bucket, the number of strings attached to the balloon, goes to zero. Here again, when the .NET Framework runtime looks through memory and finds objects that have a reference count of zero, it removes those objects from memory. We just mentioned the two situations in which the connections to an object get removed. One is that the reference goes out of scope: when we create a new variable called myCar, it continues to be in scope as long as we’re inside this Main method, but once we exit the Main method, that variable goes out of scope and is no longer available for us to access. The same would be true if we created a different method and defined a variable inside it: as soon as we leave the scope of that method, having finished executing all of its lines of code, any variables declared inside it go out of scope, and we lose any references to the objects we created in the context of that method. That’s one way we lose references to objects. The second is if we, the developers, actively take a role in cutting the strings, removing the handles from the buckets in memory. The way we do that is by setting our object references equal to null. The value null is not zero, and it’s not an empty string; it just means indeterminate. In this case, what we’ll do is go here and set myOtherCar equal to null, like so. When we do this, we remove one of the handles on the bucket, so we’re back to just one handle. To prove this, let me copy this little section of code and put it below, and when I do that, notice what happens: we get an exception. It’s a NullReferenceException that was unhandled, and the reason is that we have now removed the handle. The handle does not point to any object in memory, and yet we’re still attempting to access values from the object, so we get an exception in our application.
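A sketch of the situation just described, again assuming the Car class from this lesson: the second handle is cut, but code still tries to use it.

    Car myCar = new Car();
    myCar.Make = "Oldsmobile";

    Car myOtherCar = myCar;       // two handles on the same bucket
    myOtherCar = null;            // cut the second handle; the object still has myCar

    // Console.WriteLine(myOtherCar.Make);   // throws an unhandled NullReferenceException:
    //                                       // the handle no longer points at any object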
Now, what happens if we remove the second reference as well, myCar equals null? At that point we have removed all the references to the bucket. Even if we attempt to get to it with either myOtherCar or myCar, the references are gone completely, and so the object will be removed at some indeterminate time in the future by the .NET Framework runtime. In some situations, this indeterminate period of time can cause a problem, especially when the object in memory is holding on to some special resource, maybe a reference to a network connection, a file on the file system, or a handle used to access a given database. Again, we don’t know exactly when the .NET Framework runtime will actually execute the garbage collection step, and that might pose a problem in certain situations. In those cases, you would want a more deterministic approach: requesting that .NET remove the object from memory and, if necessary, finalize and clean up anything that needs to happen inside of that object to completely get rid of it in the computer’s memory. In those cases, you’ll want to learn about deterministic finalization. That’s a bit of an advanced topic, so we’re not going to talk about it in this series of lessons. Just keep in mind that whenever we set a reference to null, or whenever it goes out of scope, we remove references to our objects, but the .NET Framework runtime itself figures out when it’s ready and willing to remove those objects from memory completely. In most cases that’s not a problem; occasionally you’ll run into a situation where it is, and know that there’s a remedy for it called deterministic finalization. That should suffice as an explanation of what’s really going on whenever we create new instances of objects, how objects are maintained in memory, and at what point they’re removed from memory. Let’s move on and talk about constructors. I said at the very outset that a constructor is merely a method that allows us, as developers, to execute code at the moment that a new instance of a class is created. There’s something really subtle going on in this line of code, line number 13. Did you notice that whenever we use the new keyword and give it the name of the class we want to create a new instance of, we’re also calling it using the method invocation operator? Why do you suppose that is? Whether you realize it or not, you’re calling a method whenever you create a new instance of a class, and that method is referred to as a constructor. It gives you, the developer, the option, and you don’t have to take it, of writing some code at the very moment a new instance of a class is created. Constructors can be used for any purpose, but typically they’re used to put the new object into a valid state, meaning you can use them to initialize the values of the properties of that object so it’s immediately usable. Let me give you a really quick example. Let’s say you want a constructor that allows you to set a property of the car at the point you create a new Car object, so that the property is available immediately, in the very next line of code, when we begin to work with it here on line number 15. To create a constructor, you would write something like this: public Car.
In this case, what I’m going to do is simply set the Make property to Nissan. By default, whenever we create a new Car, we’ll set one of its properties, the Make property, to Nissan. Let me say this as well: you might see the keyword this used here. The this keyword is optional; it refers to this instance of the object, and it just helps clarify where a variable or member name is coming from. When I see the this keyword, I automatically think, oh, that’s part of the class itself; it’s saying you want to access a member of the class instance that’s been created. But as you can see, it’s faded out in my text editor, which lets me know that I could actually remove it; it’s not necessary. It might not be faded out in yours, and you’ll see it in other people’s code, so just understand what it is. Now, if we go ahead and create a new instance of the Car class, here’s what I’ll do: I’ll comment out all of this code, like so, and then I’ll comment out the code that we know will break the application. We can leave the rest of it, I suppose. Notice that the very first item displayed is the make of the car, and it’s set to Nissan. I didn’t set any other properties; that’s why we didn’t get any other values in the printout, but hopefully you can at least see how we go about creating constructors. Admittedly, it may not make a lot of sense right now why you’d want to do this; I’m showing you the technique, not necessarily the rationale, but the rationale is simple. What we would typically do here is put any new instance of an object into a valid state. You could load values into the various properties of your class from a configuration file or from a database or some other place in order, again, to get that object into a valid state so that it’s immediately usable at the point it’s instantiated. Let’s go ahead and talk about overloaded constructors. You’ll see these frequently when working with objects in the .NET Framework Class Library. Just like you can create an overloaded method in your classes by changing the method’s signature, in other words, the number and the data types of the input parameters, you can do the same thing with a constructor: you can create an overloaded constructor. What I’m going to do is create an overloaded constructor here, like so. At this point the method signatures are the same, so I get a little error, but to fix that I merely need to add at least one input parameter of type string; I’ll go ahead and do them all. Then, in the body of the constructor, I would write Make equals make. The capital-M Make refers to the property itself; the lowercase-m make is the name of the input parameter. It’s a good convention to use the same name, for readability’s sake and for your own sanity. You don’t have to do it this way, but keep in mind that Make and make are two different items as far as C-sharp is concerned. It’s not confused; you might be confused, but you’ll be able to handle this just fine. Now you might ask, what’s the point of that? In many cases, when you create a new instance of a class, you don’t want to take five separate steps to set it up; you want to do it immediately, as you create the instance: myThirdCar equals new Car. At this point, you can do one of two things. Notice that underneath the open parenthesis, I’m shown two ways that I can call the constructor.
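Before we continue, here is roughly what the Car class might look like at this point in the walkthrough, with the parameterless constructor that defaults Make to Nissan and the overloaded constructor just described; treat it as an illustrative sketch rather than the exact on-screen code:

    class Car
    {
        public string Make { get; set; }
        public string Model { get; set; }
        public int Year { get; set; }
        public string Color { get; set; }

        // Parameterless constructor: puts a brand-new Car into a known state.
        public Car()
        {
            this.Make = "Nissan";   // "this." is optional; it just points at the property
        }

        // Overloaded constructor: capital-M Make is the property, lowercase-m make is the parameter.
        public Car(string make, string model, int year, string color)
        {
            Make = make;
            Model = model;
            Year = year;
            Color = color;
        }
    }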
I can either give it no input parameters, or I can give it four strings as input parameters to initialize that new instance of Car and put it into a valid state immediately. Here I might go Ford, Escape, 2005, White, like so. Now I have not only created a new instance of the Car class, but I've immediately initialized its values by calling its overloaded constructor to populate all of its values at the moment of instantiation. What would happen if we were to remove these two constructors completely? What if we were to comment them out? What happens? You can see that we're still using the method invocation operator for our new instance of Car, which would suggest that we're calling a constructor, but we don't have a constructor defined. Why is this working? Why isn't it giving us an error? The reason is that a default constructor is automatically created for you whenever you compile your classes. It's a constructor with no input parameters and no body, but it's essentially the equivalent of doing this right here, except with nothing inside of it. It's created automatically for you. No matter what, you're going to have a constructor; it just won't do anything for you. The implicit default constructor has no input parameters and no method body, but it allows you to make calls and create new instances of classes in a consistent way. It's generated for you at compile time; of course, by defining a constructor yourself, you're taking control of the process of instantiation. Let's talk about the static keyword now. You've seen static around since the very beginning. I said, let's ignore that for now; we created our own methods, and I said we have to use the keyword static and I'd explain later. Well, now is the time. Let me ask a question: did you ever notice that whenever we were working with the console window, we never had to create an instance of the Console class in order to call its methods? Combine that with the fact that whenever we wanted to work with DateTime, we could get to this moment in time by using the DateTime.Now property, but we never had to create an instance of DateTime. Furthermore, when we were working with arrays and wanted to call the Reverse method, do you remember we did Array.Reverse and then passed in the array itself? How were we able to use the Reverse method without creating an instance of the Array class? Well, in each of these cases the creators of those classes, or specifically those methods, adorned their methods with the keyword static, which means that you do not have to create an instance of a class in order to utilize that method. In some cases, they may have defined an entire class as static, meaning that all of its properties and methods are static. You can create your own static methods in your classes as well. Again, the objective at the very outset is just to help you utilize the .NET Framework Class Library, so know that some of the classes and methods in the .NET Framework Class Library are static, and some are instance, meaning they require you to create an instance of the class before you call their methods and properties. Static methods are available to you without first requiring you to create an instance of a class. Just so you can see how this works, we can create a static method on our Car class like so. In this case, we'll go public static void MyMethod, and here we'll do Console.WriteLine, "Called static MyMethod."
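Before we run that, let me pull the constructor pieces from a moment ago together into one quick sketch. This is just an illustration of the example we've been building; the property names and sample values are the ones from this lesson, nothing more:

using System;

public class Car
{
    public string Make { get; set; }
    public string Model { get; set; }
    public string Year { get; set; }
    public string Color { get; set; }

    // Parameterless constructor. If you define no constructors at all,
    // the compiler generates an empty one like this for you at compile time.
    public Car()
    {
        Make = "Nissan";
    }

    // Overloaded constructor: same name, different signature.
    public Car(string make, string model, string year, string color)
    {
        Make = make;     // capital-M Make is the property,
        Model = model;   // lowercase make/model/year/color are the input parameters
        Year = year;
        Color = color;
    }
}

class Program
{
    static void Main()
    {
        // Two ways to call a constructor at the point of instantiation:
        Car myCar = new Car();                                        // Make starts out as "Nissan"
        Car myThirdCar = new Car("Ford", "Escape", "2005", "White");  // fully initialized immediately

        Console.WriteLine(myCar.Make + " / " + myThirdCar.Make);
    }
}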
Now, back to the static method: we can go here near the very top and just say Car.MyMethod, and notice I didn't have to create an instance of Car; I'm using the Car class definition itself when we run the application. Before we go too far, let's comment out pretty much everything — let me remove that and go down here, just so we don't run into any potential issues. Let's run the application, and you can see that we were able to successfully call the static MyMethod. Now, what would happen if we attempted to reference one of the properties in our class? Let's just print out the Make property. Notice that I immediately get a red squiggly line beneath the word Make. It says that an object reference is required for the non-static field, method, or property Car.Make. It's important to keep in mind that there's a fundamental difference between static members and instance members of a class. Instance members are what we've been working with up to this point: a series of properties that describe a single instance of a given entity, like a car, or methods that operate on a single instance of a car, like the constructors we saw. A static member, like a static method in this case, doesn't operate on any single instance. Static members are more like utilities: you can call them at any time, and they don't depend on the state of a given instance of the class, or even the application itself, because they're not tied to one specific car; they're true of all cars and can be used at any time. Static members versus instance members — keep those two clear in your mind. You might want to ask why you would ever create a static member like a static method. Well, that's a bit more complicated. It would require a longer discussion of things like design patterns, which are common solutions to common problems for software developers, or coding heuristics, which are more like best practices for going about solving problems. I just want you to know that there's a fundamental difference between static members and instance members of a class, and that it's easy to recognize them: if it's a static member, it will have the static keyword, in which case it cannot reference any instance members, like instance properties or other instance methods that act on instance properties, because those require an instance of the class to operate. Just know that there are these two types of members in a given class and that you're going to encounter both whenever you're working with the .NET Framework Class Library. Why you would use one or the other — that's really another story. I would say that typically I recommend you don't mix and match them in the same class. Clearly not everybody agrees with me, because you'll find mixed classes many times, but it's not really important at this point to understand why you would use one over the other; just know that the possibility exists, and that's why you don't always have to create an instance of a class before you use its members — in this case, a given method.
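Here is a small sketch of that distinction, reusing the MyMethod example; the commented-out line inside MyMethod is the one that produces the "object reference is required" error:

using System;

public class Car
{
    public string Make { get; set; }                   // instance member: belongs to one Car

    public static void MyMethod()                      // static member: belongs to the class itself
    {
        Console.WriteLine("Called static MyMethod.");
        // Console.WriteLine(Make);   // error: an object reference is required for Car.Make
    }
}

class Program
{
    static void Main()
    {
        Car.MyMethod();                // no instance needed for a static member

        Car myCar = new Car();         // instance members require an instance
        myCar.Make = "Nissan";
        Console.WriteLine(myCar.Make);
    }
}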
Let's recap what we talked about in this lesson. We began by talking about the lifetime of an object: how we create a new instance of an object, what that's doing in terms of creating an area in the computer's memory and returning to us an address, a reference to that object in memory; what happens during the lifetime of that object; and ultimately what happens whenever we remove all of the references to it. We talked about the role of the .NET Framework runtime and how it keeps track of the number of references to objects so that it can perform garbage collection on objects that have no more references to them, as a means of keeping things clean and making the memory available to other applications, or even to our application again. We talked about constructors and how developers can use them to put a new instance of an object into a valid state at the point when that object is created. Then we talked about the static keyword. We looked at some usages of static members inside the .NET Framework Class Library, we looked at creating our own static member, this MyMethod, and we talked about the difference between static members and instance members and how they're really oil and water — you shouldn't mix the two — and why that is. We didn't really talk about why you would choose to use one over the other; that's again a topic for another day. Hopefully all of these concepts make sense. If not, don't continue on hoping that you'll just catch up to them at some point in the future; make sure you thoroughly understand this before you continue. If you are continuing, great, we'll see you in the next lesson. Thanks. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. Now, we haven't spent a lot of time talking about variable scope, but it's actually extremely important, and we recently learned that it also impacts the lifetime of objects. I want to spend a little more time making sure we understand the scope of variables, whether they hold simple types or references to complex types, in our applications. Not only do I want to fully explain that, but I want to use it as a launching pad to explain keywords like public and private that we've seen several times in this course but that I haven't really talked about. Before we get to that, let's talk about variable scope. Let me start by saying that whenever you declare a variable inside of a block of code, that variable is only alive for the life of that code block and any of the code blocks nested inside of it. That means that when the code block finishes executing, the variable defined inside of it is no longer accessible, and its value is disposed of by the .NET Framework runtime. We'll start by looking at how that plays out with the common code blocks we've been working with up to this point, and then we'll expand from there. You can see that I've created a project called UnderstandingScope; you can pause the video and catch up with me. I created this project to focus on testing how variable scope works, and I'll start with a pretty simple code example. Again, the concepts we talk about also apply to object references, not just variables that hold simple strings and integers. Let's start by creating a simple for iteration statement; we'll just loop through 10 times and do a Console.WriteLine containing the value of i, and then, here, we'll do the Console.ReadLine.
We can see our results when we run the application, and as we would expect, we see values from zero through nine. Now, what if I wanted to access the value of i here, right after the closing curly brace of the for statement? You'll notice that I get a red squiggly line under i, and if I hover my mouse cursor over it, it says that i does not exist in the current context. Why? Because i is now outside of the scope of its definition. We defined i inside of the for statement, so it's available inside the for statement itself plus the code block below it, but not outside of either of those. Let's comment that out. Second, we'll continue by creating a string j equal to an empty string. What we'll do inside of our loop is just go j equals i.ToString. Now let's go outside of the for statement and see whether we'd be able to print the value of j to the screen. We're not getting any errors, so let's run the application. You can see that the last value inserted into j was nine. Since we defined j outside of the scope of the for statement and its code block, we can access it both inside that code block and outside of it as well. Next up, let's look at something like this, where we create what's called a field, or a private field: we'll go private static string k, set equal to an empty string. A private field is like a property, except it's private in nature, but it is available to all of the members of the class. We should be able to set k inside of our for loop, so let's do k equals i.ToString, and we should be able to see it here as well, outside of the for loop, like so. Let's go ahead and run the application. You can see that the second Console.WriteLine also displays the number 9. But the real question is, what if we were to create a helper method? Static void, and we'll just call this HelperMethod. Here we go: Console.WriteLine, and we'll say this is the value of k from the HelperMethod. Now, here, we'll call the HelperMethod, like so. Will this work? Will we be able to access the value of k, as it was set inside of our for loop, outside of our static void Main? Let's run the application, and you can see that we can, in fact, get the value of k from the HelperMethod. Why? Because k was defined at, I guess you could say, the class level. It is a sibling to static void Main and static void HelperMethod; therefore it's accessible to each of these, as well as to any of their inner code blocks. Hopefully this is starting to make sense. Let's go inside of the for loop now, and here we'll do a simple if statement: if i is equal to 9 — so on the very last run of the loop — then let's declare a string called l and set it to i.ToString. Then, outside of that, we'll go Console.WriteLine the value of l. As you might anticipate, we see that l does not exist in the current context. Why? Because we declared the string variable l inside of the if statement's curly braces; outside of those curly braces, it's no longer accessible, so we have to comment that out. Hopefully this solidifies in your mind many of the combinations we can use in determining whether something is in scope or out of scope. If you had any confusion about this, hopefully that cleared it up a little bit.
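Here is a condensed sketch of all of those scope experiments in one place; it's just an illustration, and the two commented-out lines are the ones that will not compile:

using System;

class Program
{
    // Field defined at the class level: a sibling of Main and HelperMethod,
    // so it's visible to both of them and to their inner code blocks.
    private static string k = "";

    static void Main()
    {
        string j = "";                         // declared outside the loop

        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine(i);
            j = i.ToString();                  // fine: j lives in an enclosing scope
            k = i.ToString();                  // fine: k is a class-level field

            if (i == 9)
            {
                string l = i.ToString();       // l only exists inside this if block
                Console.WriteLine(l);
            }
            // Console.WriteLine(l);           // error: l does not exist in the current context
        }

        // Console.WriteLine(i);               // error: i does not exist in the current context
        Console.WriteLine(j);                  // prints 9
        Console.WriteLine(k);                  // prints 9
        HelperMethod();                        // also prints 9

        Console.ReadLine();
    }

    static void HelperMethod()
    {
        Console.WriteLine("This is the value of k from the HelperMethod: " + k);
    }
}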
We’ve been creating classes, specifically the car class up to this point, and whenever we were creating methods, I would typically use the public keyword. Occasionally, I would use the keyword private like I did here in line number 11. Private and public are both accessibility modifiers. They’re used to implement a tentative object-oriented programming called encapsulation, which is actually pretty important. In a nutshell, you should think of classes as black boxes. Whenever you think of a black box, maybe you can think of one of those old-style television sets. Maybe your parents or grandparents had one. I remember as a kid, us having one, there were no remote controls. You had to get up, walk across the room and actually turn the dials of the TV in order to tune to either VHF or UHF channels. You had another dial where you would adjust the volume. You had an antenna in the back, so you would connect this wire out to your antenna, and you had another one where you would plug it into the wall. Everything else about the television was self-contained. Now, as a kid, I was fascinated whenever my dad would pop off the back of the television set, and he’d go and try to fix it by changing up the tubes. It always seemed like magic to me because I knew absolutely nothing about the innards of televisions. All I knew were the public interfaces, the button for on/off, the dials to turn the channel, the dial to turn the volume up and down the antenna, whatever that did, and the little plug that would obviously give it electricity, but frankly, in order to use the television set, that’s all you really needed to know. You did not have to know anything about how a television worked. All you really needed to know is how to plug it in and change channels, turn it on and off, and then adjust its volume, and that is exactly how your classes should be treated. All the important behind-the-scenes functionality should be encapsulated behind interfaces like public methods and public properties. Now classes might, in fact, have private fields like we looked at here in line number 11, or they might have private methods that are used behind the scenes to enable all the magic that goes on inside of that class, but the consumer of the class shouldn’t know anything about the inner workings of the class in order to work with the class, to operate the class. All they need to know is what’s publicly exposed through the public properties and public methods. In a nutshell, private means that a method can be called by any other method inside of the same class. I used the term private HelperMethod a number of times accidentally. Essentially, when I use the term private helper method, I’m talking about a private method that’s add some additional functionality to those public methods that are exposed to anybody who needs to work with the class through that method. A public method is what’s actually going to be then called by somebody outside of the class, some other code outside of the given class, and private methods are only going to be called by members inside of the class. Let me do this. I’m going to paste in some code, recreate our Car class, and here I have a public and a private method. The public method is called DoSomething, and the private method is called just helperMethod. These are not very interesting examples. I want to keep this as simple as possible. Now, from the outside of this Car class, it’s just roll this whole thing up here and save it. 
Now, whenever I want to work with the Car class here inside of my static void Main, I'll go Car myCar equals new Car, and then I'll type myCar dot, and notice that I can only see the public method DoSomething. Now, I might happen to know there are other methods inside this class, but I can't see them from outside; their visibility is hidden from me because they're marked as private. All I really need to know is how to use the DoSomething method. If I understand that I can call that, all the implementation details will be hidden from me, but it will work as I expect it to work. Here you can see that it merely prints out the words Hello world. Now, whose responsibility it is inside the class to actually display that is none of my concern; all I need to know is how to call the public method DoSomething. In a sense, the consumer of the Car class has absolutely no idea that the helper method even exists. All it really knows is that there's one public method, and it can call that public method, but it doesn't know any of the hairy implementation details. Now, I say "in a sense" because the consumer of the Car class is going to be a software developer, and a software developer can drill in and say, "Oh, I see how it's doing its work: it's actually making a call out to this other private helper method." So there is a sense in which it's visible to developers, but it's private from the perspective of the consumer, which is this Main method; it can only see the DoSomething method, not the private helperMethod. That's all we really mean here. Admittedly, this is extremely mundane — a simple example whose only real value is to illustrate the notion of encapsulation: that we typically want to hide the implementation of our classes behind well-known public interfaces, in this case a friendly method called DoSomething. The purpose of this lesson is to better understand the notion of scope, because we said that once variables, especially variables that contain object references, fall out of scope, their objects will be garbage collected. Furthermore, it's important to understand that there are parts of classes you have access to and parts of classes you don't have access to. Now, if you ever decide that you want to create your own custom classes someday — even a library of classes that represents the business domain of your company or of your specific application; it could be a game — you should strive to expose public methods that give a simple, straightforward, obvious way to use your class, but keep all the other helper methods, all the other internals, privately tucked away and not available to prying eyes. You don't want a developer to simply go fiddling around inside all of your methods and use your class in a way in which it was not intended. You want to give them a way to use your class properly through the methods that you've designed and made available through public interfaces. This also helps remove any ambiguity in the usage of your classes, and it should be much cleaner as well. All of these things were under consideration when the developers built the .NET Framework Class Library. In the .NET Framework Class Library, methods and properties are exposed using the public keyword. Now, they might also be using private fields and private methods behind the scenes, but you would never know.
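Here is that black-box example in one minimal sketch, using the same DoSomething and helperMethod names; the commented-out line shows what the consumer is not allowed to call:

using System;

public class Car
{
    // The public surface of the class: this is all a consumer needs to know about.
    public void DoSomething()
    {
        helperMethod();    // delegate the actual work to a private helper
    }

    // An implementation detail: only callable from inside the Car class itself.
    private void helperMethod()
    {
        Console.WriteLine("Hello world");
    }
}

class Program
{
    static void Main()
    {
        Car myCar = new Car();
        myCar.DoSomething();        // fine: DoSomething is public
        // myCar.helperMethod();    // compiler error: inaccessible due to its protection level
        Console.ReadLine();
    }
}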
There’s actually a couple available called protected and internal. However, these are primarily for whenever you’re working, either in a rich inheritance relationship between classes and you’re building a rich inheritance hierarchy between classes, or whenever you’re working with a very large library that’s compiled into separate assemblies. That’s when some of these other accessibility modifiers might come into play. They’re topics that are beyond the scope of this absolute beginner series, but topics that I do cover on Developer University. If you want to know more about object
oriented programming and encapsulation by all means, go ahead. We are well past halfway through this course. You’re doing great. We’ve already covered the most difficult material already, now we’re just adding on details, so you should be encouraged by that, that you’re still plugging away at this and you’re doing great. We’ll see in the next lesson. Thank you. Hi, I’m Bob Tabor with Developer University. For more my training videos for beginners, please visit me at devu.com. Previously in this course, I said that the .NET Framework Class Library is merely a collection of classes, each containing methods filled with functionality that we can utilize in our applications, but we didn’t have to write. Microsoft has spent tens of thousands of man hours building and maintaining this library of code, and we can benefit from it by merely calling into his classes and methods inside of our applications. Now, the Framework Class Library is massive. Thousands of classes, each with their own set of methods, and so the developers of the Framework Class Library wisely decided to split this library of code up into multiple files. Just imagine if you had to load the entire library into memory every time you wanted to run your application. First of all, it would be excruciatingly slow. Then secondly, it would probably take up the maturity of your computer’s memory. They split up the code into multiple files. These code files are called.NET assemblies. In fact, even the applications that we build, they’re ultimately compiling into.NET assemblies. As you can see, I have a new project called AssembliesAndNamespaces already open. I’ve added two lines of code. If you want to pause the video and catch up, that would be great. In lines 13 and 14, I’m merely printing Hello world to the screen and then pausing the execution of the application. However, even in this application, a executable.NET assembly is being generated the very first time that we run the application while we’re debugging. Now, if you want to take a look at what happens, go to your project’s directory and inside of the project folder, you’ll see that there’s a bin directory. We avoided this very early in this course, but now I want to talk about it briefly. The bin directory will contain both at a bug in a release version, ultimately a release version. The debug version will contain additional files required by Visual Studio to connect to the execution of the compiled executable. This allows us to step through the execution and pause the execution line by line in the Visual Studio debugger. Now, we can additionally, then after we created our application and thoroughly debugged it, we can say I want to create a release version of the application and go to Build Solution in the Build menu, and it will create a version of our application without any of those debugging symbols without that connection to the debugger. If you look at the file system, you might be a little confused to see that it also has a lot of those extra files in there, but they’re basically ignored. But that is what’s going on behind the scenes. Notice that in each of these cases, we’re building an executable file that will run and we could even just double click it and run the application from here like we did before. Now that is different from the type of.NET assembly that allows you to create a library of code that can be shared across multiple projects. In that case, you’d be compiling a project into a.DLL file extension. We can create a code library. 
I’ll show you how to do that in another video. But at any rate, the.NET Framework has to already be installed on any computer where you want your application to work or to run. Basically, every copy of Windows already has the.NET Framework runtime and the class libraries installed in a location that’s globally accessible, called the global assembly cache. Every.NET application can reference the same set of assemblies in that one spot on your hard drive. Now, you might say that whenever you build your application and set up your application, you may not realize that by choosing to create a file new project and then selecting the Console Window project template, you were actually creating references to those files in the .NET Framework Class Library. That’s one of the functions of the setup routine for a project template. If you take a look at the references node underneath your project in the Solution Explorer over here on the right hand side, you’ll see that, there are some references already to these things like system, System.Core, System.Data, System.Net and so on. We’ll talk about what these are in just a moment, but that’s indicative of the fact that we have references into files of the .NET Framework Class Library that the creator of the Console Window application thought we might be or might find useful at some point. We’ll come back to that in just a moment. Now sometimes you’ll need an assembly from the .NET Framework Class Library that has not been referenced and I’ll demonstrate how to do that in an upcoming lesson. Or perhaps you need to add a reference to an assembly created by a third party, maybe even yourself. Again, I’ll demonstrate not only how to create your own class library, but then also how to create references to third party assemblies as well. Again, there are 10’s of thousands of classes defined in the full.Network framework class library. In a few cases, the same class name was used, or at least there was the potential for it to be used. When that happened, the creators needed a way to be able to tell one class from a different class and so they introduced the notion of name spaces and name spaces are like last names for your classes. Think about your name or my name. For example, somebody might say, “Bob loves coffee” You might say, “Well, which Bob?” There’s like a billion Bobs in the world. But if somebody were to say, “Robert Theron Tabor likes coffee.” Well, that narrows it down. I’m pretty sure that I’m the only person in the world that has that combination of first middle and last name. I could either use the full name, Robert Theron Tabor to reference one person or once we understand the context of who we’re talking about, maybe we’re talking about only people in this room. Then you might say, well, Bob likes coffee, he’s the only Bob in this room, so they must be talking about Bob. The same idea works with your code. We could use the full name of the classes that we need inside of our application. For example, the full name of the console class is actually System.Console.WriteLine. Or the System.Console class. That’s the full name of the class and then we’re calling the method in that class. However, you’ll notice that I didn’t have to use the Word system here. Why not? 
However, you'll notice that in our code I didn't have to type the word System. Why not? Well, because we used a using statement at the very outset of this code file, which says, "I want you to look inside of these namespaces whenever you find a class reference that you don't recognize." So the C# compiler finds the word Console and says, "Hmm, I wonder where that came from?" It begins to look through the namespaces listed in the code file and says, "Oh yeah, I found a class named Console inside of a namespace called System; that must be the Console class he's talking about." Occasionally you might have two classes with the same name, and you've added using statements for each of their namespaces inside of your code file. When that happens, you merely need to disambiguate by using the full name of the class instead of relying on the using statement. You'll notice that, by default, the Program.cs file has a number of different using statements. In my text editor they're faded out a little bit, which indicates that they're not being utilized at this moment. We could remove unused using statements from our code and it would compile just fine; they're a convenience that was set up for us by whoever created the project template for a console window application. To further illustrate this idea, let's talk about how we can use the .NET Framework Class Library to do meaningful things, and how we would go about finding the classes we need to do something cool in our application. For example, maybe I want to write data to a text file. How could I go about doing that? Well, I might open up bing.com and type in site:Microsoft.com, to limit the search results to just those returned by Microsoft.com. This will help me find the documentation specifically created by Microsoft, as opposed to third-party articles or whatever the case might be. So, site:Microsoft.com, and then I might just say, write to a text file using C#. One of the top results is from msdn.microsoft.com, and MSDN stands for the Microsoft Developer Network. This is your primary source of information as a software developer on the Microsoft platform. In this case, here's a how-to article that describes the code we would need to write in order to write data to a file. Here's a long code example — in fact, it gives us three examples in one — and we could use one of these examples in our application. We might decide to use the second example, since it comes close to what we want to do, so I copy and paste it into my application, remove some of the extra information I don't need, and I may need to modify this path; I believe I created a folder called Lesson17 for this purpose. Notice that it's going to use a class named File and a method called WriteAllText. In this particular case, notice that we're already given the fully qualified name of this File class, System.IO.File. What we could do is remove that qualification from here and go up and add a using statement for System.IO, like so. Notice that the compiler finds it and we're able to run our application. I got a little message there because I was in Release mode; let's go back to Debug and start that over again. We don't get any feedback in the console, but if we were to open up our Lesson17 folder, we would find our text in a text file.
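For reference, the snippet we end up with looks roughly like this. It's just a sketch: the folder path is an assumption for illustration, and that folder has to exist before you run it:

using System.IO;   // the File class lives in the System.IO namespace

class Program
{
    static void Main()
    {
        string text = "Here is some text we want to save.";

        // Creates the file (or overwrites it) and writes the string to it in one call.
        // Hypothetical path: point this at a folder that actually exists on your machine.
        File.WriteAllText(@"C:\Lesson17\WriteText.txt", text);
    }
}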
Great, so now we can use that little snippet of code to do what we want to do. Notice that it all started by searching on MSDN, finding a code snippet we could use, modifying it, and adding our own text that we want to write to our file — that's how we start stitching things together. Searching on MSDN is one way to find the features we need inside the .NET Framework Class Library. Let's try one more quick example. Let's go back to bing.com, and here again I'm going to use site:Microsoft.com. Then I want to search for C# download html as a string, and I might find another reference. This is a different style of web page: there is one page on MSDN for every class and every method in the .NET Framework Class Library. In this case, we're looking at the specific page for this DownloadString method, and if you look at the remarks, the syntax, and some of the exceptions it can throw, what we ultimately get to is a little snippet of code that we can copy and paste inside of our application. But notice what happens this time: it does not recognize the term WebClient. Why not? Well, we may not have the assembly referenced in our project, or we may have the assembly referenced but not have a using statement for the namespace that includes the WebClient class. To remedy this, I'm going to hit Control+Period on my keyboard, and it says that it found this class in System.Net, so we can automatically add a using statement for the System.Net namespace by merely hitting the Enter key, or I can choose to use the full name of the class here; I'll choose the first option using the arrow keys and the Enter key on my keyboard. Notice what happens: it adds a using statement for System.Net, and then the WebClient class is found. It's in a different color and there's no red squiggly line, so it looks like it found the correct class. Now we merely need to give it a URL, so let's try msdn.microsoft.com, like so, and we'll write the result out to a string called reply. Then we might even rework our application to save that into our text file as well. Let's see what we get here. Hopefully this will work; we'll run the application. It took a moment, but it loaded a bunch of HTML into our console window — we can see the closing body and closing html tags. And if we go back to our Lesson17 folder and open up our text file, we see the full web page that we scraped off of msdn.microsoft.com. That pretty much wraps up what I wanted to say in this lesson. We are able to utilize the classes and methods in the .NET Framework Class Library, and we can find what we need by doing simple searches on bing.com using site:Microsoft.com to find the classes and methods we want to work with. Once we find those classes, and maybe even little code snippets, we can copy them into our program, and we may need, at that point, to fix the references to those classes. In the first case, remember, the snippet gave us the full name of the File class, System.IO.File. In the second case, we had to provide the using statement for System.Net in order for the compiler to find the class we wanted to reference and work with; ultimately we did that by hitting Control+Period on the keyboard to add a using statement to the very top of our code.
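Stitching both snippets together, the little program from this lesson ends up looking roughly like the sketch below. WebClient is the class the older MSDN documentation shows (newer code tends to use HttpClient), and the URL and output path are just assumptions for illustration:

using System;
using System.IO;
using System.Net;    // WebClient lives in the System.Net namespace

class Program
{
    static void Main()
    {
        // Download the page's HTML as one big string.
        WebClient client = new WebClient();
        string reply = client.DownloadString("http://msdn.microsoft.com");

        // Show it in the console window.
        Console.WriteLine(reply);

        // Save the same HTML to a text file (hypothetical path; the folder must exist).
        File.WriteAllText(@"C:\Lesson17\WriteText.txt", reply);

        Console.ReadLine();
    }
}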
We talked about the purpose of namespaces: to help disambiguate between class names. We talked about the using statement as a way of creating a shortcut, or a context, that says: we're not talking about every class in the .NET Framework Class Library, we're only talking about the classes that happen to be in these namespaces; so, Mr. C# Compiler, if you find a class that you don't recognize, look in those namespaces first before you complain. We're going to continue with these ideas in the next lesson. We'll see you there, thank you. Hi, I'm Bob Tabor with Developer University; for more of my training videos for beginners, please visit me at devu.com. Now, previously I said that the creator of the console window application project template added references to those assemblies in the .NET Framework Class Library that we as developers might find useful for the majority of use cases. However, if we need an assembly containing some portion of the .NET Framework Class Library that has not already been added to our project, then we can simply add a reference to it. This is the first of three ways I'll demonstrate for adding a reference to an assembly: an assembly from the .NET Framework Class Library. Now, there are a number of different ways to go about this. The easiest, I think, is to go to the Solution Explorer, right-click on References, and select Add Reference. Here you can see there is a series of, I guess, tabs along the left-hand side that allow us to choose from the various types of assemblies available to us. We want to choose Framework, and these are all of the assemblies that are part of the .NET Framework Class Library. You can see that there are already check marks next to a number of them — System.this, System.that, System.Net.Http — and these contain a number of different classes, each with many methods, that are automatically accessible. If we need something that is not checked here, then we can simply select a check mark next to the one we want to add to our project, System.Net, and click OK. You can see that it added a reference to System.Net to our project in the Solution Explorer, and now we can reference any of the classes and utilize any of the methods in that particular assembly. That's one way, if we need to access some part of the .NET Framework Class Library. In addition to that, there are libraries created by Microsoft, and libraries created by open-source contributors and other companies, that are provided for free for very specific purposes in our applications. These are often common features that many applications need — that's why they've been open sourced — and they're available through a special tool called NuGet, which is a repository maintained by a foundation supported by Microsoft, but ultimately its own entity. There are a number of different ways to work with NuGet in Visual Studio; I'm going to choose the visual way to do it. I find that to be the easiest for those who are just getting started, and for me, because I'm a more visual person. There's also a textual, almost command-line style interface that would allow you to do similar things and even script them. Let's go to the Tools menu and select NuGet Package Manager, and then Manage NuGet Packages for Solution.
This will open up a tab which, no matter what you see on my screen, will undoubtedly look different on your screen, because this has been under active development for the last few years and has changed frequently. Now, if there were a package we wanted to add to our solution, we could simply search for it; typically we learn about these things through blog posts and what have you. Say, for example, I wanted to access a database from my console window application and I wanted to use the Entity Framework API from Microsoft. It's available as a NuGet package through this Manage NuGet Packages for Solution dialog. I can select it as one of the options — you can see it's one of the most frequently downloaded — and then I choose which project in my solution I want to add it to and click the Install button. There are some other options as well; I'll leave you to investigate those on your own. I'm going to go ahead and click OK, I agree to the terms for using the Entity Framework. In this particular case, it installed a number of references to assemblies and copied them down locally to my computer. Now, depending on the type of package, it could contain not only .NET assemblies but also sample source code files; it could run macros inside of Visual Studio; it could include things like style sheets, HTML, and even graphical assets that it will include in your project. That's the second way we can go about adding assemblies, and more, to our projects. The third way that I want to talk about is when we want to add a reference to a class library that we created ourselves. We haven't created a class library up to this point, so this is a perfect opportunity to do that and then add a reference to it in our project. I'll start off by selecting File, New, Project — let me go ahead and let you see my entire screen here. In the New Project dialog, I want to make sure to choose C#, and then I want to choose Class Library. Notice that I chose the one that doesn't have a little NuGet logo next to it; it just looks like several books and the old C# logo. This will undoubtedly look different for you, but just make sure it's a regular old class library. We're going to call this MyCodeLibrary and click OK, and I'm going to go ahead and say I don't really care to save my other solution there. Inside of this, you can see that I don't have a Program.cs; all I have is a Class1.cs. There's no static void Main. I'll call this the Scrape class, and we'll have one public method, public string ScrapeWebpage; we'll create a version of it where you provide just the URL, and then a second version where you provide the URL and a file path. In the first case, I'm just going to copy down some of the code that we worked with previously to create this functionality, where we used the WebClient to go out, download a page, and then save it to a text file. I'm going to generalize this. Remember what I did previously, hitting Control+Period on the keyboard to add a reference or a using statement for System.Net? The next thing I'm going to do here is replace the hardcoded string with whatever gets passed in by the caller. Finally, I'm also going to have to resolve this reference to the File class; it's in the System.IO namespace, so I'm going to add a using statement for that.
However, in this specific case, I'm not going to write to a file in this overloaded version; in fact, what I'll do is just return whatever was downloaded from client.DownloadString. Now, the second version will do something almost identical: here, let me replace this with the URL, get rid of the Console.WriteLine, write the result to the file path that was passed in, and then return the reply. Now, truth be told, this might be a good situation where I could take these lines of code and create a private helper method out of them. Maybe that's a good idea — let's do that right now: private string GetWebpage, and we'll pass in the URL here. Now both of these overloads can just call GetWebpage, like so; here we'll go string reply equals GetWebpage. See what I did there: I was able to use a private helper method to encapsulate the functionality of actually getting the web page itself, and then, in the second case, I was able to extend the ScrapeWebpage method to include writing the result to a file path. Now that I've created this, let me rename the file as well by right-clicking on it and selecting Rename; I'm going to name this file Scrape as well. I could name it anything I want — it won't matter, because the name of the class itself is Scrape — but at this point I'm going to go ahead and build the solution. It looks like it built; in fact, let me go ahead and build a Release version of this. Great. Now let's open up a second instance of Visual Studio. I'm going to create a console application called MyClient and click OK. What I want to do first of all is add a reference to that DLL that we created just a moment ago, so I'm going to right-click on References and select Add Reference. Here I have some choices. Ideally, I would be able to find it in the same solution — we'll come back to that and do it in just a little bit — but for now I have to browse through the file system to find it, and unfortunately this dialog is popping off the screen; hopefully we can work our way through it. I'm going to navigate to the bin directory, into the Release directory, find MyCodeLibrary, select the Add button, and then click OK. Now that I've done that, I should be able to get to the Scrape class — but it doesn't see the Scrape class. I'm going to hit Control+Period on my keyboard, and notice that it finds the correct using statement, using MyCodeLibrary. Scrape myScrape equals new Scrape. Now I should be able to go myScrape dot, and there we go: ScrapeWebpage, and I should be able to give it a URL. That should return a string, so string value equals — let's move this over a little bit — and then I should be able to print that to the screen with a Console.Write. Now we should be able to run the application. It takes a moment, but it pops up. What were we able to do there? Well, we've now created a reusable library: whenever we want to scrape a web page, we can utilize this in any of our other projects. Now, did you notice how inconvenient it was to go searching around whenever we wanted to add that reference? I had to browse through all my projects and everything.
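To recap what we just built, here is a sketch of roughly what the Scrape class library and the MyClient console application end up looking like. The names mirror the ones used in this walkthrough, and the URL is just a sample value; in practice the two parts live in two separate projects, shown together here for readability:

using System;
using System.IO;
using System.Net;
using MyCodeLibrary;

// The class library project (compiles to MyCodeLibrary.dll).
namespace MyCodeLibrary
{
    public class Scrape
    {
        // Overload 1: download the page and return its HTML.
        public string ScrapeWebpage(string url)
        {
            return GetWebpage(url);
        }

        // Overload 2: download the page, save it to the given file path, and return the HTML.
        public string ScrapeWebpage(string url, string filePath)
        {
            string reply = GetWebpage(url);
            File.WriteAllText(filePath, reply);
            return reply;
        }

        // Private helper method: encapsulates the actual download.
        private string GetWebpage(string url)
        {
            WebClient client = new WebClient();
            return client.DownloadString(url);
        }
    }
}

// The console application project (references MyCodeLibrary.dll).
class Program
{
    static void Main()
    {
        Scrape myScrape = new Scrape();
        string value = myScrape.ScrapeWebpage("http://msdn.microsoft.com");
        Console.Write(value);
        Console.ReadLine();
    }
}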
But I do want you to notice one thing about what happened after we added that reference. Let's go to my projects, find that client, and navigate into the bin directory. Notice that it copied MyCodeLibrary.dll into the bin directory for the client application. That's one of the things it will do with any of the third-party assemblies that we utilize. But wouldn't it be easier if we were to start this over from scratch and create a single solution that had both the client and the code library in it? Let's do that now. I'm going to open up a third copy of Visual Studio. Here, let's create a new solution: I'm going to scroll all the way to the bottom, choose Other Project Types, choose Visual Studio Solutions, and find Blank Solution. This might be in a different place for you, so you may have to hunt around for it, but you ultimately want to choose Blank Solution, which should be available to you. We're going to call this Lesson 18. The solution's name will be Lesson 18, and what we're going to do is add projects to the solution. The first project I'm going to add — and there are a number of ways to do this, like the Add menu, but it goes off the right-hand side of the screen — so I'll go File, Add, New Project, and then we'll choose Class Library. We'll call this the ScrapeLibrary. Then I'm going to create another project, of type Console Application, and add it to our solution; this will be the ScrapeClient. In our ScrapeLibrary, what I'll do, just for simplicity's sake, is go to the work that we did a moment ago and copy all of it, like so; then let's come back here and paste it all in. Yes, I'm going to have to resolve these class names by adding using statements here and here as well. That should work. It looks like I actually lost my class name, so let's go public class Scrape, and then make sure to put everything inside of it. There we go; now we've got it working, and I'll rename this file as well, to just Scrape. I could have left it called Class1, but that will work just fine. I'm going to go ahead and build that by right-clicking on the project name and selecting Build. Now, what I want to do in the client, to utilize that class library, is add a reference to it. So here again, I'm going to right-click and select Add Reference. This time we go to Projects, and notice that if Solution is selected, the ScrapeLibrary will be an option; I'll choose that and click OK. Now we can utilize the ScrapeLibrary in our application. Let's go ahead and just type in the word Scrape — and it's not going to find it. Here I'll hit Control+Period, and I need to add a using statement; since I renamed things, it's now called ScrapeLibrary, so I'm going to add a using statement for the ScrapeLibrary namespace to the code file. In fact, I don't have to write all of that by hand; I can just copy and paste the code from the previous client, like so. Now we can run the application. It says a project with an output type of class library cannot be started directly. Why do you suppose that happened? Well, because there are actually two projects in my solution, and you can't execute a library, correct? So what we need to do is right-click on the client project and select Set as Startup Project. I'll close that, and when we attempt to run the application, it'll work. Furthermore, if we make any changes to how the library works — let's see, what could we do here? Let's do this: we'll make a change in one spot, and I'll go content plus-equals "THAT'S ALL FOLKS" to tack that onto the very end of the string before this return.
I’ll return content this time. Let’s make sure we have everything there. We’ve made pretty big change to the application. Now when I run the application, it will recompile the DLL. It will add it to our project and at the very end, it adds, “THAT’S ALL FOLKS”. The only thing I could think of off the top of my head. Hopefully now you can see that there are several different ways to add assemblies. If it’s part of the .NET Framework Class Library, then obviously there’s a way to do that. If it’s a free or open source, package that’s available from NuGet, we can use the NuGet package manager or we can create our own third party class library and then add a reference to it by browsing. Or if we were to create the client and the library inside of the same solution, then we can reference it in the add reference dialog. But just under the project solution option, and we get the added benefit of being able to make updates, not having to go through two copies of Visual Studio to updated. It’ll update the next time we hit, run it or recompile it and everything. That’s pretty much it for this lesson. We’ll continue on the next lesson. Will see you there. Thanks. Hi, I’m Bob Tabor with Developer University for more my training videos for beginners, please visit me at devu.com. Previously, we looked at arrays which allowed us to keep a sequence or group of related data together inside of the same variable, so we would create an array by providing a data type, and so each item in the array had to be of that data type. We would also provide the number of elements we expected in the array by defining that number between a set of square brackets. Now that we have that predefined sized array, we could add items into each element of the array or retrieve values out of each of the elements of the array by indexing into the array using a zero based index to index in and address one specific element of the array. Now, once we have the data collected into an array, we could do some interesting things. We could iterate through the array and investigate each element in the array, or we could even pass the array around as if it were one variable. Pass it in, for example, as an input parameter to a method. But you recall that time, I also said that at some point we would talk about collections. I even gave collections a nickname, calling them arrays on steroids. I think you’re going to agree after this lesson that collections are great whenever you’re working with all data types, especially those custom data types that we’ve been working with up to this point in this series of lessons. For example, the car class that we created ourselves. Now, as far as the .NET Framework class library is concerned, it will often use both arrays and collections, depending on the need. But I think you will probably wind up preferring to use collections in your applications because of the rich filtering, sorting, and aggregation features that are available to collections through a technology, a language called LINQ L-I-N-Q, which stands for the language integrated query. It was a very innovative feature whenever it was first introduced back a number of years ago in C-sharp and other .NET languages. Other languages have since implemented something similar to it. But we’re going to dive into that topic of LINQ and what you can do with it in the very next lesson. But first of all, let’s talk about collections. We’re going to talk about two collections, specifically lists and dictionaries. 
Now, truth be told, there are probably a dozen additional varieties of collections that you could use for very specific purposes. They each have a superpower — a very specific use case where they're intended to be used — but I find myself using lists and dictionaries 95 percent of the time, so we're going to focus on those in this lesson. After this lesson, by all means, feel free to go off and learn about all of the additional collections that are available to you and what they can do that's a little bit different than the list and the dictionary. Suppose that I have a number of cars on my car lot and I want to write an application that allows me to manage them, so I need some way to collect all of the individual instances of the Car class together into a single array or collection. Now, I might use an array of cars, but I'm probably going to choose a collection because of the added features I'll gain. We're going to talk about a bunch of different types of collections, but I want to start with an older style of collection that's not really used anymore, to show why there's a newer style of collection available; it will help you understand the idea a little better. As you can see, I've got a project called WorkingWithCollections already set up here; please take a moment and create a new console window project. I'm also going to paste in two classes that I've defined: a simplified version of the Car class that we've used before, and a Book class, as you can see there at the bottom — very simple classes. The next thing I'm going to do is paste in some code to create new instances of each of these classes and populate their values. You may want to pause the video yet a third time and copy in the code that I have on the screen as well. The very first thing I want to do is work with a collection, and I'm going to work with something called an ArrayList. Let me just say this about ArrayLists: they are dynamically sized, which is one of their great benefits. You don't have to do anything special to say, I need to add one more item, and another item, and another item. Remember, with arrays, I said it was possible to resize an array, but it's a bit of an advanced operation; not so with an ArrayList. That's one of the big benefits: you can just keep adding items to it and it will be just fine. It also supports cool features like sorting, you can easily remove items from the collection, and so on. Let's go ahead and create a new instance of this ArrayList. When I do, notice that we don't already have a using statement for its namespace, so I'll hit Control+Period on my keyboard, and you can see that it lives in a namespace called System.Collections. I'll go ahead and add that using statement to my code file, and we'll create a new one called myArrayList equals new ArrayList, like so. Now that I have my ArrayList, I can begin to add items to it — for example, the first car, and then a second car, like so. Now, one of the problems with old-style collections like the ArrayList is that there was no easy way to limit the type of data that could be stored inside of the collection. For example, I want to work with automobiles, but I might accidentally add a book to the ArrayList, and it will work just fine — there are no complaints.
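Here is a sketch of exactly that situation, with pared-down Car and Book classes and placeholder values; the point is simply that the ArrayList happily accepts both:

using System;
using System.Collections;   // ArrayList lives in this older, non-generic namespace

class Car  { public string Make { get; set; } public string Model { get; set; } }
class Book { public string Title { get; set; } }

class Program
{
    static void Main()
    {
        Car car1 = new Car();
        car1.Make = "Geo";
        car1.Model = "Prizm";

        Car car2 = new Car();
        car2.Make = "Ford";
        car2.Model = "Escape";

        Book book1 = new Book();
        book1.Title = "Not a car at all";

        ArrayList myArrayList = new ArrayList();
        myArrayList.Add(car1);
        myArrayList.Add(car2);
        myArrayList.Add(book1);   // compiles and runs fine — nothing enforces the element type

        Console.WriteLine(myArrayList.Count);   // 3 items, of mixed types
        Console.ReadLine();
    }
}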
The old-style collections are not strongly typed, in so much as you can put anything inside of the collection. At first glance that might seem great, but what if I then wanted to print out a list of all of the cars' makes and models? Let me start, at the very bottom here, by typing Console.ReadLine so we can get that formality out of the way, and then I'm going to do a foreach. What am I going to work with here? Let's say foreach Car car in myArrayList. Then I might do a Console.WriteLine, and let's just go car.Make, like so, and print that to the screen. Now let's run the application, and we get an exception when we hit the third item in our ArrayList. Notice that the cars are printed to the screen first, but when we get to the book, it says there's an invalid cast exception. In other words, we could not convert a book — the third item in the ArrayList — into a car, so as we iterate through the items in our ArrayList, we hit a problem at that spot. The fundamental problem is that we allowed our collection to store something other than cars, so we cannot work with these collections in a strongly typed fashion. Now, one of the neat features here is that I can actually remove that item prior to going into that foreach loop, and then we should be able to execute the application without a problem. That's at least one of the good features, but unfortunately the downsides outweigh the benefits. Let's go ahead and take a look at the newer-style collections. The first, I said, was the list — more correctly, something called a generic list. Often you'll see it referred to as List of T, like so. That "of T", and the term generic, might require a little explanation. When .NET was first released, the first set of collections allowed you to put anything you wanted into them, like we saw here just a moment ago. That might make sense in some contexts, but typically it doesn't, and it leads to potential errors like the one you just saw. At some point, then, C# introduced the notion of generics, and specifically, for our purposes, a series of generic collections. A generic collection requires that you make it specific by giving it the data type that should be allowed inside of it. We have a generic list, but we're going to make it a list specific to Car, so that we can't even add a book to the collection. Let's attempt this one more time. This time we're going to go List, and notice that I'm using angle brackets, and in between the angle brackets I say which data type I want to use — in this case, the Car data type. List of Car, called myList, equals new List of Car, like so. At this point, we can add car1 just fine, and we can add car2 to myList just fine as well. But what happens when we attempt to add the book to our list? At the point where we attempt to add the book, we get a compiler error; we hover over it and it says it cannot convert a Book to a Car. That makes a little more sense: the list is specific to the Car data type, so we cannot add a book to it. From this point on, we can work with it with some confidence: foreach Car car in myList, and we can use car.Model, like so, and we get what we would expect — a list of our car models.
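Here is the same scenario as a sketch with the generic List of T; the line that tries to add a book is commented out because it will not even compile:

using System;
using System.Collections.Generic;   // List<T> lives here

class Car  { public string Make { get; set; } public string Model { get; set; } }
class Book { public string Title { get; set; } }

class Program
{
    static void Main()
    {
        Car car1 = new Car();
        car1.Make = "Geo";
        car1.Model = "Prizm";

        Car car2 = new Car();
        car2.Make = "Ford";
        car2.Model = "Escape";

        List<Car> myList = new List<Car>();   // a list that only accepts Car instances
        myList.Add(car1);
        myList.Add(car2);
        // myList.Add(new Book());            // compiler error: cannot convert Book to Car

        foreach (Car car in myList)
        {
            Console.WriteLine(car.Model);     // strongly typed: car is always a Car
        }

        Console.ReadLine();
    }
}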
That’s one of the big benefits of working with a generic type, is that it allows us to work with the specific data type and only allow those types into our collection. This is probably the most popular of all of the collections available. But I’m going to show you one additional collection called the dictionary. A dictionary is similar to, think of Webster’s dictionary, where you have a word and you look it up in alphabetical order and find the word that you want definition of. Then once you find the word, you can look to its right and it will have the definition. There is a key, which is the word itself that we want to look up, and then there is the definition next to it. There are two components to each entry in a dictionary; there’s the key and then the value itself. Typically, when you see a generic dictionary mentioned, it’s going to be listed like this Dictionary of TKey, TValue. In this case, what we’ll do is specify the data type of the key. This allows us to find one specific item by the key. Now the key should be something that is unique to every entry in the dictionary. In the case of people, there might be some identifier. It could be a customer ID in your system, it could be a Social Security number if you’re in the United States, but something that uniquely identifies one entity inside of that dictionary. Then the value can be of any data type. In the case of, again, a customer, you might have the customer ID being the key, but the customer object itself is the value that we actually want to get access to. Now, in our case, this seems a little bit weak because our car class only has make and model, and we know that we can have multiple cars that have the exact same make and model. They may have different colors, they might have been created in different years, but you can have multiple cars in the car lot that have the exact same make and model. Neither of these are good candidates for keys, but there is something called a vehicle identification number. Let’s do a prop string and let’s call this VIN. That will differentiate every car in the world that’s been created. What I’ll do is come back up here to the definition in the car1.VIN and I’m just going to use a very short VIN number. I think they are typically like 18 or 24 characters long, something like that; I’m not exactly sure. But this should uniquely identify every car in the world, especially every car in our car lot. Now what I can do is create a dictionary of my cars by starting off and saying something like dictionary, and they were going to give you the two data types. The VIN will be of type string, and then the actual value will be of type car. We’re going to call this myDictionary equals new Dictionary of String Car; notice the InteliSense help me out by essentially giving me a lot of that, and I can just hit the semicolon at the end of the line for it to type out that entire phrase. Now that I have this, what I can do is go myDictionary.Add, and we’ll do car1.VIN and passing car as the actual value. The car1.VIN, again, is our key into the actual car1 itself. Likewise, we’ll go add car2.VIN in car2. At this point, here, if I were to attempt to find a given item, so Console.Writeline, and I need to find a specific car in my car lot, I can allow a user to type in the VIN number and I can look it up in the dictionary quite easily. Then there’s a number of different ways to go about this, I think probably one of the easiest ways actually. Let’s go back and not use the dot. 
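Before the lookup is typed out, here is a compact sketch of the dictionary being described, including the lookup by key that comes next; the VIN values A1 and B2 follow the walkthrough, and everything else is illustrative.

```csharp
using System;
using System.Collections.Generic;

class Car
{
    public string Make  { get; set; }
    public string Model { get; set; }
    public string VIN   { get; set; }
}

class Program
{
    static void Main()
    {
        var car1 = new Car { Make = "BMW", Model = "745Li", VIN = "A1" };
        var car2 = new Car { Make = "Geo", Model = "Metro", VIN = "B2" };

        // The VIN is the unique key; the whole Car object is the value.
        Dictionary<string, Car> myDictionary = new Dictionary<string, Car>();
        myDictionary.Add(car1.VIN, car1);
        myDictionary.Add(car2.VIN, car2);

        // Look a car up directly by its key using the indexer.
        Console.WriteLine(myDictionary["B2"].Make);   // prints "Geo"
        Console.ReadLine();
    }
}
```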
Here, I'm actually going to use the key itself, so we'll pass in "B2". Now we can reference a specific item in the dictionary of type Car, so we can get the Make, for example, and print that out to the screen, like so. We were able to find the Geo that way. Hopefully that makes sense. Let's continue on. If you recall, let me comment all this out, when we were originally looking at arrays, I said there are some interesting things you can do to initialize an array with values, like we see here. We're creating an array of strings called names, and to initialize it, I give it a comma-delimited collection of names. Now I have an array that has four elements in it and it's already been initialized with the values. You can do the same thing with objects, to initialize objects at the point of instantiation; to do that we'll use object initializer syntax. In fact, let's go ahead and prove this all works by commenting out everything we have up here as well and getting rid of the cars and the book. We'll come down here and go car1 equals new Car, and then notice what I do: I use that same syntax, the curly braces, and inside of here, what I can do is actually define all the values, Make equals... Let's just dream large here and go Make equals BMW, the Model will be a 750Li, and then we'll also give it a VIN of C3, like so. Now I've actually done three things in one line of code: I create a new variable called car1, I create a new instance of Car in the computer's memory, and now I'm getting access to that address in memory by using the car1 label, the variable name, and then I go ahead and populate the properties of the Car object at the moment that I create that new instance by using this object initializer syntax. Some people don't like this. It looks like it might be doing too much in one line of code, but I think you'll find that if you ever do need to hard-code examples, like I do frequently, the shortened syntax actually saves you several lines of code, and it's just fine, it's valid code. While we're working here, let's go ahead and create a Toyota. We'll set the Model equal to a 4Runner and we'll give it a VIN of D4, like so. Now we can work with the cars just like we did before. In and of itself, this might not be so interesting, but this is the object initializer syntax. We can take this one step further: when it comes to working with collections, we can use collection initializer syntax. I want to point out one other thing: we didn't have to use a constructor to make this work like we looked at before; regardless of the constructor, we're able to go ahead and set these properties using this syntax. Well, let's now talk about a collection initializer, which can look a little hairy, but it's essentially the same thing, we're just taking it to the next level here. In this particular case, let's go ahead and create a List of Car called myList equals a new List of Car. Now, at this point, notice that I put this on separate lines here; I typically might keep this on the same line just for my own sanity, and now inside of this new empty list of cars, I can create a series of car objects, like so.
In fact, what I can do at this point then is use an object initializer inside of that, so here we're going to set the Make equal to Oldsmobile, then we'll set the Model equal to Cutlass Supreme, and then the VIN we'll set equal to E5; then a comma, and then we'll create another new car to add to this list of cars, and we'll use its object initializer, setting its Make equal to Nissan and its Model equal to an Altima, and then finally, its VIN will equal F6, something like that. Now what I've done, all in one statement essentially, is I've created a collection and I've added two objects, and in each of those objects, I went ahead and already initialized all of the property values. There's a lot going on in just that one statement. Great. At any rate, I just wanted to recap the things that we talked about in this lesson. First of all, we talked about the difference between arrays and collections, and I promised that there would be a more obvious set of features available to collections, which we'll learn about in the next video. We talked about the old-style collections versus the new generic collections. We said generic collections are superior because they allow us to make sure that we're only adding specific types to our collections, so we make a generic collection specific by passing in the data type that should be allowed to be referenced inside of that collection. Then we looked at the object initializer, just a shorthand syntax for initializing the properties of a new instance of an object, and then finally we took that one step further with a collection initializer, where not only are we creating a new collection, but we're also initializing it with new instances of the Car class. In both of those cases we are using object initializers. We can do it all in one statement. Now, honestly, unless you're building a lot of example code like I do, you may not see this as often, unless you are creating some hardcoded objects for use within your application. But I wanted you to be aware of that syntax nonetheless, because we're going to use it again in the next lesson, and we'll see you there. Thank you. Hi, I'm Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at DEVU.com. In this lesson, we're going to look at LINQ, the Language Integrated Query syntax, which was introduced some years ago to provide a way to filter, sort, and perform other aggregate operations on collections of our data types. We'll demonstrate two different styles of LINQ syntax. There's a query syntax that will resemble the Structured Query Language, SQL, used for querying databases. If you're already familiar with SQL, this will at least feel familiar. Then there's also a method syntax, which might feel more familiar to C# developers. However, there is one little strange nomenclature thingy that we've got to figure out, but I think it's pretty easy. I think I have a good way of explaining it to you, and hopefully you'll understand what it's trying to do there. What I'd recommend is you find the code for this lesson; there should be a before and after folder, and you want to copy the code in the before folder into your project's directory, then open it up, and you'll be where I'm at right now. In the UnderstandingLINQ project, you can see that I merely created a Car class, and then I also have here a collection initializer filled with cars, filled with attributes that we'll be able to search and sort on, and that'll give us something to work with here.
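Since the LINQ lesson's data isn't reproduced in this excerpt, here is a hedged sketch of what the collection-initialized Car list might look like. The property names (Make, Model, VIN, Year, StickerPrice) are the ones referenced later in the walkthrough; the actual values are invented, chosen only so the results quoted later (three BMWs at A1, C3, and E5, one of them from 2010, a 55,000 and a 35,000 sticker price, and so on) line up.

```csharp
using System.Collections.Generic;

public class Car
{
    public string  Make         { get; set; }
    public string  Model        { get; set; }
    public string  VIN          { get; set; }
    public int     Year         { get; set; }
    public decimal StickerPrice { get; set; }
}

public class Data
{
    // A collection initializer creating the list, with object initializers
    // populating each car, all in one statement.
    public static List<Car> MyCars = new List<Car>
    {
        new Car { Make = "BMW",        Model = "745Li",           VIN = "A1", Year = 2008, StickerPrice = 55000m },
        new Car { Make = "Toyota",     Model = "4Runner",         VIN = "B2", Year = 2010, StickerPrice = 35000m },
        new Car { Make = "BMW",        Model = "750Li",           VIN = "C3", Year = 2009, StickerPrice = 62000m },
        new Car { Make = "Nissan",     Model = "Altima",          VIN = "D4", Year = 2009, StickerPrice = 27000m },
        new Car { Make = "BMW",        Model = "555i",            VIN = "E5", Year = 2010, StickerPrice = 57000m },
        new Car { Make = "Oldsmobile", Model = "Cutlass Supreme", VIN = "F6", Year = 2008, StickerPrice = 18000m }
    };
}
```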
What I want to do to begin with is to show you a comparison between LINQ query syntax and LINQ method syntax to do the exact same thing. You'll see the obvious difference and we'll talk about the ways in which they're different, but let's begin with a query where we want to find all BMWs in this list of cars called myCars. It's as easy as this, and we'll talk about the var keyword here in just a moment, but var bmws equals from car in myCars, where car.Make is equal to "BMW", select car. Now, let's come down here and print all those guys out. Let's go Console.WriteLine and provide the vehicle identification number... actually, let's do this. Let's go foreach, Tab, var car in bmws, then Console.WriteLine, and then we'll go car.Model and car.VIN. Let's go ahead and add some of our replacement characters in there. Now, let's go ahead and run the application and we see that we get three cars returned. So of all the cars, I think there's five or six, three of them, the ones with those VINs A1, C3, and E5, are BMWs. If you take a look at the data, that would be correct. A very quick, concise way of finding only those cars that match that criteria. What if we wanted to add additional criteria? Say, for example, we wanted to also see where the car's year equals 2010. We could do that and rerun the application, and now we see that it just finds the one BMW that was created in 2010. It is this last one, the 555i. That is the Language Integrated Query syntax, the query syntax of LINQ. Let's go ahead and comment that out and compare that to the method syntax, and so here we'll go var bmws equals myCars.Where. This will give us all the BMWs, so let's go and run that. It gives us the same three that we got before. Now, what if we wanted to also find only those where the year is equal to 2010? We would do that. You can see we found just that E5; that last one. This might take a little explanation here. For the moment, let's just ignore this last part; we'll talk about that in a moment. But what you see here, in fact the whole thing in between the opening and closing parentheses, is called a lambda expression, and you can think of it as a mini method. Essentially, what will happen is, you say given p, so given an instance from the collection, only return back to me those instances of Car where the Make is equal to BMW. See how easy that is? Again, just a mini method. You could think of this as the input parameter, and then this is just some condition. When it's true, then return that instance and add it into this little collection over here, so now we have a subset of all of the available cars in our car lot. Furthermore, I just added the logical AND operator and said, and make sure also that it was from 2010. Well, that filters out two of them. Again, lambda expressions are just mini methods, so any given item in the collection has to match that criteria. If it does, then we can add it to this little subset collection over here that I'm calling bmws. The var keyword has a very different connotation in C# than it does in other programming languages. Here I would ask that you forget what you know about var from maybe JavaScript or Visual Basic or some other programming language; it does not mean the same thing. In this case, the var keyword says that we're going to let the compiler figure out what the correct data type is. I'm not even sure what gets returned from this little query that we do here.
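A side-by-side sketch of the two equivalent filters just described, reusing the Data.MyCars list from the earlier sketch.

```csharp
using System;
using System.Linq;   // required for the Where extension method and query syntax

class Program
{
    static void Main()
    {
        var myCars = Data.MyCars;   // the sample list sketched above

        // Query syntax: reads a lot like SQL.
        var bmws1 = from car in myCars
                    where car.Make == "BMW" && car.Year == 2010
                    select car;

        // Method syntax: the same filter expressed with a lambda expression.
        var bmws2 = myCars.Where(p => p.Make == "BMW" && p.Year == 2010);

        foreach (var car in bmws1)
        {
            Console.WriteLine("{0} {1}", car.Model, car.VIN);
        }
    }
}
```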
If you were to hover your mouse cursor over the where, you can see that it’s going to return back in i numerable car. What’s that? Not entirely sure, doesn’t really matter. I don’t care what it is, I know that it is a collection of cars. To prove this point we’ll talk about this in just a moment a little bit later where the var keyword can really come in handy because we truly don’t know what it is that’s being created by our link queries. Again, the var keyword it’s still strongly typed. We’re just going to let the compiler figure out what the type is at the point when the code is compiled. Let’s move on, and let’s take a look at a few other examples. I may want to find an ordered list of cars, so I might go orderedCars equals from the car in m Cars, order by car.Year descending, select car. That’s how I would take all of my cars and order them in descending order by their year. Let’s just change this from bmw, let’s put this to orderedCars like so, and this might help if we actually saw the year itself so I’ll add in the car.Year as well. Let’s go ahead and run the application. You can see it starts at 2010 and in descending order, works its way back to 2008. Awesome. That same query if we were to do it in using the methods syntax instead of the query syntax, it would look something like this. var orderedCars equals myCars.OrderByDescending. Given each item in the collection only return those or actually order them by the year like so. We should see the same grouping and we do. Starts at 2010 and works its way back to 2008. Again, in my opinion, this is more concise. The only conceptual hurdle you’ve got to jump over is just make sure you understand what a Lambda expression is. In this particular case, it’s not a filter, we’re just saying, given each item in our collection, we want to order by this particular property; the year and then add that ordered item to our new collection of cars over here. Now, there’s a lot of interesting things that we can do, and I’m only going to work with the methods syntax from this point on. The first we might do something like this, for example, if we want to find just the first item, so let’s go ahead and grab this and maybe we want to find the first item where the make is equal to a BMW like so. This will give us the first car; the first BMW car in the list that it finds. Let’s go ahead and console. In fact, let’s change the name of this to firstBMW, so Console.WriteLine. All that right. firstBMW. just a VIN number should be sufficient. Let’s run that. We can find that the first BMW in the list was A1, or we can do the first BMW, and we can actually start by ordering by descending given the year. You can see I’m chaining these together, we’ve talked about method chaining before. This will return a collection of cars and then this will return a single car in that collection. Then we’ll print that single items then out to window, in this case E5. The list is first sorted and then we grab the first one that matches our criteria of BMW. That’s how we can use first. We’re going to comment that out. We can also do something like this, Console.WriteLine., and inside of here, let’s go, myCars.TrueForAll and say the year is greater than 2012. We need one more right there. Is it true that all the cars in my car lot that every one of them is greater than 2012? That would be false. Well, then how about are they all greater than at least 2009? That’s still false because we have at least one that was created in 2008. If we were to change this to 2007. 
Are they all at least greater than 2007? True, so that’s true for all. Very helpful in order to aggregate and look across all of them and see is this true for all the items in my list. We can also even do something interesting like this instead of doing this for each statement where it’s essentially what, at least two if not four lines of code we can create a for each like so. myCars.ForEach and then inside of here for every item, let’s just do a Console.WriteLine and in here, I can do p.VIN and p.StickerPrice. Let’s go ahead and do that. In fact, we’ll just do that as well, one and zero. Hopefully, that all makes sense. Now let’s run that and see what happens. Here we are, we’re able to list them all out and format their values, so you see how much more compact this looks than what we were writing here. We do it all in one line of code. Again, we’re passing in for every single item in our collection just call Console.WriteLine and then use that particular item’s VIN sticker price inside of our formatted string. Here’s another interesting example of this. Maybe in each case we want to go, so myCars.ForEach. In fact, here let’s do this before that line of code. Let’s keep that one and then go myCars.ForEach. In this case, I want to perform an operation on each of the data inside of there, so I might take the sticker price and reduce it by $3,000, actually let’s go minus equal to. This will take the sticker price and subtract $3,000 from the sticker price of every car in my collection. You can see now what was if we could get a comparison going here. See, unfortunately not going to be very easy to do, but you can see that what was $55,000 is now 52,000 what was 35 is now 32,000, and so on. Again, a lot of functionality in a very small space. Let’s continue on with this thought and go and do something like, myCars.Exist. Do we have a car in our car lot where the model is equal to the 745LI, true or false. Here let’s do a Console.WriteLine and let’s see if they turn true or false? Yes, we have at least one item in our inventory where the model is equal to 740LI. Now here’s another good aggregate function. Here, let’s just do Console.WriteLine, and let’s go, myCars.Sum, and here we’ll say, sum up all the sticker prices. Let’s see what the total value of our car lot is right now. You can see that it’s about 247,000 so actually, there should probably be a better way to format that using a format code but hopefully you get the idea. We’re able to sum up a single field across all objects in our collection of cars. There’s so many other things that I can show you, but I don’t want to overwhelm you, but I want us just to go in one ear and out the other for now. This VAR keyword we’ve looked at, and I said that we use it because we want the compiler to figure out what the data type is. Sometimes it’s easy for us to figure out what the data type of something is. Sometimes it’s not so easy. To illustrate this, let’s do a Console.WriteLine line and what I want to do is call on myCars, I’m going to call the get type. In fact, all data types in.Net have this GetType method declared because it’s declared on the grandparent of all objects called System.Object. It defines this method called GetType, which will tell us what the type of a given object is and we can print it to screen. In this first case, what is myCars? Well, myCars is a generic list of the car data types so this understanding link is the name-space and specifically, though, it’s the car type. 
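To consolidate the method-syntax calls covered in this stretch, here is a sketch that ends with the GetType checks being discussed; it again assumes the Data.MyCars sample list, so the values in the comments are only indicative.

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var myCars = Data.MyCars;   // sample data from the earlier sketch

        // Ordering, then taking the first match: method chaining.
        var firstBmw = myCars.OrderByDescending(p => p.Year)
                             .First(p => p.Make == "BMW");
        Console.WriteLine(firstBmw.VIN);                                 // E5 with the sample data

        // Aggregate-style helpers on List<T> and LINQ.
        Console.WriteLine(myCars.TrueForAll(p => p.Year > 2007));        // true
        Console.WriteLine(myCars.Exists(p => p.Model == "745Li"));       // true
        Console.WriteLine(myCars.Sum(p => p.StickerPrice));              // total value of the lot

        myCars.ForEach(p => p.StickerPrice -= 3000m);                    // discount every car
        myCars.ForEach(p => Console.WriteLine("{0} {1}", p.VIN, p.StickerPrice));

        // The types LINQ hands back are not always obvious, which is why var helps.
        var orderedCars = myCars.OrderByDescending(p => p.Year);
        Console.WriteLine(myCars.GetType());        // a List of Car
        Console.WriteLine(orderedCars.GetType());   // an OrderedEnumerable under the hood
    }
}
```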
We’re basically saying this is a list of T, a list of car. That’s what’s being printed out whenever we’re looking at what the cars are, and that’s pretty easy to see because we define it here. But once we perform an operation like one of these here, let’s just see ordered cars and copy that again and stick it down here, and then I’m going to do Console.WriteLine, orderedCars.GetType and it’ll show us what the data type is for ordered cars. Let’s go ahead and compare the two. Now, in this case, you can see that we’re no longer dealing with a list of car, even though under the hood we know we’re working with a list of cars the way that it’s represented in.Net is that it’s actually ordered enumerable so an ordered list of the LINQ.Car, Understanding LINQ.Car. Again, that makes sense. That’s an ordered version of cars. How about just a regular old where statement. Let’s do that. Let’s copy that and see what the data type of that is and then do the same thing here. Let us see what that is. Let us go and run the application again. Here is this third one. The second one was an ordered enumerable, then the one where we just called the where was an enumerable plus the WhereListIterator. So things are starting to get a little funky here. It might be difficult for us to be able to express this ourselves if it were not for this var keyword. The var keyword is essential to help us to be able to create these very complex queries, and not have to worry about what the data type of it is that’s returned. We know that it is a type of list. It is an innumerable list, whether it is ordered or not, and we can follow each our way through it or whatever the case might be. Now the last thing that I want to demonstrate, is I am going to take this first query here, and I am going to pop it all the way the bottom. If this stuff doesn’t make sense, don’t worry about it too much. I wanted to go in one ear and out the other, just to explain again the value of the var keyword. In this case, let’s change something about this. Let’s call this my new cars. Here, I am not going to return cars. I am actually going to do what’s called a projection. I am going to only take certain values, certain properties of a car, and I am going to project them into a new data type. What’s the name of the new data type? Where am I defining that data type? I’m not going to define the data type. It is an anonymous type. What’s the name? I have no idea, it is anonymous. We can define types at runtime and only choose those properties that we need in the type for the moment. Why may not be obvious just yet, but I want you to understand that there is this idea, what we can do here. In this case, let’s pull out just a few things like the car.Make, car.Model. That is all we want. We are going to leave all the other attributes of car alone. We are going to only take these two values, put them into a new anonymous type. Where is the type defined? Nowhere, we just made it up off the top of our heads, and we are going to save each of those anonymous types into a new collection of anonymous types. Let’s go. Console.WriteLine, and then take a look at newCars.GetType, and let’s run the application yet another time. This time you can see that we also here have an innumerable plus where select list iterator, whatever that means, but then notice that the data type involves something called an anonymous type that has two attributes, two string attributes, which would be the make and the model. 
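A short sketch of the projection into an anonymous type, again assuming the Data.MyCars sample list; the BMW filter is included only to keep the output small.

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var myCars = Data.MyCars;   // sample data from the earlier sketch

        // Projection: keep only Make and Model, shaped into an anonymous type.
        var newCars = from car in myCars
                      where car.Make == "BMW"
                      select new { car.Make, car.Model };

        foreach (var c in newCars)
        {
            Console.WriteLine("{0} {1}", c.Make, c.Model);
        }

        // The compiler invented the type's name, which is why var is required here.
        Console.WriteLine(newCars.GetType());
    }
}
```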
I say all that to say this, that whenever you’re working with LINQ, there is a lot going on under the hood to make it all very easy and accessible, but it all depends on defining your types as var, which says, let the compiler figure out what the type is, we are not so worried about it, because the data type might be so crazy that we can’t even comprehend what it is. Hopefully that makes sense. Just to recap the things that we talked about, we talked about the difference between the query syntax and the method syntax. We looked at a number of different examples of the various LINQ extension methods that were available. We saw how we could break apart an individual Lambda expression, to better understand that essentially we are saying each item in the collection, run this little mini method against it, and return back a given item that matches that criteria. Then we looked at how to tell what the types were and what the value of the var keyword is, and then we looked at anonymous types. We covered a lot of ground, I didn’t explain everything in great detail. But the key here is just to look at that little
formula and try to understand what it means. Look for examples online, make yourself a little cheat sheet, and you should be able to utilize these methods inside your own application. That wraps up LINQ. This will be the hardest thing that you have to think about today, I guarantee it. That is it, we'll see you in the next lesson. Thanks. I am Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, we will introduce a new decision statement, the switch statement. The if/else if/else statement and the conditional operator are both great. They work best when there is only a handful of things to evaluate. But if you start needing to evaluate many different potential cases, you might find that the switch statement is a little bit more concise and keeps things a little tidier. That would probably be one of the only reasons why you would use it, and I will show you a second reason why in this video as well. We will come back to the switch in just a little bit. But first, I want to talk about a special data type called an enum, or an enumeration. Typically, we want to limit the possible values of a given variable. Now, admittedly, we're already limiting the possible values that can go into a variable by virtue of the fact that we've given it a specific data type. However, even within that, I may want to limit the number of possible values to just a handful. Typically in software development, you want to limit and constrain your data to ensure its validity inside of your system. An enumeration is a data type that limits and constrains all possible values to only those that are valid and have meaning within our system. For example, we might want to keep track of a series of to-do items. Maybe that is the type of application we're building, and each to-do item is represented by an instance of a to-do class. We may want to keep track of the current status of a given to-do item on our list. We may want to constrain the possible statuses to maybe five: the task has not been started yet, or it is in progress, or it is on hold, it has been completed, or perhaps it has been deleted. There might be some other statuses, but you can see how I may want to limit the number of options that are available for a status field or status property of my to-do class. We could do this in a number of different ways. We could just concoct a numbering scheme, where one always represents not started, and two represents in progress. I refer to those as magic numbers. They may have some meaning in the system, but it's not readily obvious if you're reading the source code. As the developer, you may have to look up some external reference, maybe some code comments, and who knows, maybe they are not even current anymore. Maybe things have changed since whoever wrote those code comments originally wrote them. I may need to look through a number of different code files to figure out what the number one means in the system, what the number two means in the system, and so on. The same thing can be true with strings. I could just use a literal string to indicate the current status, so I could use a literal string, "not started" or "in progress". But the problem with literal strings is that somebody can mistype them, or there could be a space in "not started" one time, and then somebody uses "notstarted" without a space the other time.
If you are looking for all those items that have not been started yet you may have a hard time finding those that have not yet been started or in progress because they are not spelled correctly or whatever the case might be. The great thing about an enumeration is that it gives us a textual equivalent to a numeric value so that it will remove any ambiguity inside of the system. As developers, we know exactly what we are working with, and yet behind the scenes, it is still working with the number. Enumerations are used frequently in the.Net Framework Class library for the very same reasons. For example, you can see here that I have a project, EnumsAndSwitch, and here again, if you look in the code folder for this lesson, there should be a before and after. You want to copy the project from the before folder and copy it into your project’s directory so that you can catch up to me where I am at at this moment. You can always pause the video and type all this in if you like, it can quite a bit of typing though. You can see that in this project, I have already created a to-do class, with a description of each to-do item, the estimated number of hours it should take to complete the to-do item. Then notice that I have a status of type status. Where does this come from? You can see directly below, I have created an enumeration called status, and I have not started in progress, on hold, completed, and deleted. Did you notice as I hover my mouse cursor over it, that each of these values are given a numeric values? Not started equals zero, in progress equals one, on hold equals two and so on. If we were to store these values somehow in a database or a text file, those are the values that might actually get stored. However, they will be translated into this more textual format so that when we are actually working with the data, as you can see here in the static void main as we are creating a new list of to-do’s using the collection, initializer or syntax, here, I am setting the status equal to either completed or in progress or deleted or not started and so on. Visually, it’s much easier for a developer to work with those options in a more textual way. Now, the.NET Framework Class Library will use enumerations extensively. In fact, even in the Console window, if we were to set the foreground color, notice that IntelliSense automatically pops up to the console color enumeration. See, it says enum over there. Let’s see. I don’t think you can see that. Let’s go up a little bit here. All right there. Now, you can see the word enum right here. It’s a console color in enumeration. When I hit the member access operator, the period, it will show me all the colors that we can choose for the foreground color of our console window, so I might choose dark red. Again, enumerations are great because they are descriptive and they limit the number of possible values for our applications, for the properties of our classes. The next thing I want to talk about is the switch statement, and these two are going to marry together here in just a moment. But a simple switch statement is going to look something like this. In fact, I’m just going to go switch Tab Tab, and that will create essentially the outline for it. In this case, we can use an individual to do item and choose it’s, or let’s say estimated hours. If the estimated hours are, for example, case four, then we might perform some operation until we hit the break statement. 
Or we could go case five, so we could perform something and then we would hit the break statement and break out. There's also a default case; that would be the catch-all, just like the else statement in the if/else if/else construct. But the most important aspect of this is the construct of the switch statement. You have the keyword switch, then you have a variable that's under evaluation, and then a series of cases where we would try to match it up with one of these cases, and then we use the colon after that case. We write our code below it and then we use a break statement to break out and continue on our execution of the code. Now we might choose to, for example, do something like this. It makes a little bit more sense to work with the statuses. In this case, we might go case Status.Completed, and we might do something, versus case Status.Deleted, and we might perform some operation there, and so on. Now the beauty of the switch statement and the enumerations is that they can conspire together here. Watch what I do. I'm going to type switch, Tab, Tab, and then I'm going to switch on each of the items in the to-do collection's Status, and then I hit the Enter key on my keyboard twice. The second time I hit Enter, see, the macro will actually blow out each of the individual statuses so that I can write code associated with each status. Isn't that crazy? Now, I can do here anything that my business logic would require. I have an idea though. Let's use the foreground color and change it up for each of the individual to-do items. It can be as simple as this. Let's just copy this, and we'll put it here for each of these statuses. Then here at the very end, we will actually do a Console.WriteLine of the actual item itself, the todo.Description, and here we'll change up the color. If it's not started, we'll leave it as dark red; if it's in progress, we'll use green. If it's on hold, maybe we'll use red; let's use the red here and the dark red there. Completed, let's mark those as blue, and if it's been deleted, we'll mark that with yellow. Now let's go ahead and save and run our application. You can see we have a very colorful list of tasks that are color coded by their current status. See how cool that is. In this lesson, we talked about enumerations and why we would use an enumeration to constrain the possible values for a given property of our classes. We saw how it was used in the .NET Framework Class Library, one little instance of it here, where they've created their own enumerations. Just be aware, as you're trying to work with a given class and its properties, always look at, for example, in this case, what data type it is. This is the data type ConsoleColor, and typically IntelliSense will point you in the right direction. As you hit the equal sign it will pop down to that data type, that enumeration, so that you just hit the period and then you can make your selections there. That's a really good hint. Then we looked at the structure of the switch statement, where we're evaluating something in between the opening and closing parentheses. We looked at the body, the opening and closing curly braces, the entire body of the switch statement. Then inside, the creation of a number of cases, each case equating to one possible value of the item that's being evaluated, and then a colon, and after that, any of the code that we want to write, and finally, a break statement, which will pop us out of that switch statement's body.
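Pulling the pieces of this lesson together, here is a compact sketch of the Status enumeration, the to-do class, and the switch that colors the output; the descriptions and hours are invented, and the color choices approximate the ones mentioned above.

```csharp
using System;
using System.Collections.Generic;

public enum Status { NotStarted, InProgress, OnHold, Completed, Deleted }   // NotStarted = 0, InProgress = 1, ...

public class ToDo
{
    public string Description    { get; set; }
    public double EstimatedHours { get; set; }
    public Status Status         { get; set; }
}

class Program
{
    static void Main()
    {
        var toDos = new List<ToDo>
        {
            new ToDo { Description = "Write report",  EstimatedHours = 4, Status = Status.InProgress },
            new ToDo { Description = "File expenses", EstimatedHours = 1, Status = Status.Completed  },
            new ToDo { Description = "Plan offsite",  EstimatedHours = 8, Status = Status.NotStarted }
        };

        foreach (var todo in toDos)
        {
            switch (todo.Status)   // switching on the enum value
            {
                case Status.NotStarted: Console.ForegroundColor = ConsoleColor.DarkRed; break;
                case Status.InProgress: Console.ForegroundColor = ConsoleColor.Green;   break;
                case Status.OnHold:     Console.ForegroundColor = ConsoleColor.Red;     break;
                case Status.Completed:  Console.ForegroundColor = ConsoleColor.Blue;    break;
                case Status.Deleted:    Console.ForegroundColor = ConsoleColor.Yellow;  break;
                default:                Console.ForegroundColor = ConsoleColor.White;   break;
            }
            Console.WriteLine(todo.Description);
        }
        Console.ReadLine();
    }
}
```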
Then finally, we saw that there is a catch all the default colon, which we can use to write any code for cases that we haven’t accounted for in any of the other previous cases. That wraps up this lesson. Doing great work, getting close to the end now. You feel pretty confident C-Sharp. You’ve got the majority of it under your belt. Just a few more topics we want to cover, and then we’ll wrap this up. We’ll see in the next lesson. Thank you. Hi. I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this lesson, we’re going to talk about handling exceptions that occur within our applications. We’ll discuss what can go wrong, why things go wrong, and how to build resilient applications that are impervious to crashing through the use of C-Sharp’s Try Catch block statement. When the compiler catches a data type mismatch or an unresolved reference to a class or some malformed C-Sharp instructions, it’ll refuse to compile your C-Sharp code into a.NET Assembly until you fix the problem. These type of errors are called compilation errors, and that’s not what we’re talking about in this lesson. However, there are other types of errors that happen during runtime or in other words, they happen when the compile.NET Assembly is actually in the act of executing. Honestly, there are countless number of reasons why you could encounter an exception while the application is running, depending on the kinds of things you’re trying to do in your application. Many times these things are outside of the direct control of the software developer. For example, if your application can’t read or write to disk because a folder or a file is missing on disk where it expected to see it, it could trigger a runtime exception. Or maybe the files corrupt, or maybe the network access to that resource is unavailable, or it attempted perhaps to access a database, and it couldn’t find something in the database that it needed. A table, a column, whatever the case might be, because the structure has changed of the database out from under the application. All of these things and many more could cause the absolute failure of your application, and the user will see a nasty error message at runtime and the frustration, say the developer. In some cases, the developer may have not even foreseen that that could have potentially been an issue. If they didn’t see that it could be an issue, they couldn’t have accounted for it. Maybe the developer for example, allows the users to type in a country, but the user misspells the country name. Now, maybe they did that intentionally or unintentionally. Perhaps they maliciously use numerical characters instead of alpha characters. But as a software developer, your job is to make sure that you account for all of these possibilities. A friend of mine was fond of saying that 80 percent of all code exists to solve 20 percent of all the potential problems that could happen in your application. Generally, software developers should be pessimistic regarding the reliability of everything outside of their control. Whether that be input from an end user, any connection to a network, to the file system, anything that the developer cannot directly control be should be held a great suspicion. Again, if you rely on a file or a network resource, you should treat it with great suspicion. If you rely on the user to type data into your application, definitely treat that with great suspicion. It’s absolutely evil. 
This is the software developer's equivalent to driving defensively. Always code defensively, which means you are always looking for problems all around you, all the time. Now, the way that the C# developer codes defensively, or one of the ways in which they do it, is through the use of a try catch block, and I'll demonstrate that in this lesson. Up to this point, we've been writing files to disk. This time I want to read a file from disk. We'll use the same File class we've used previously. Notice that I already have a using statement for System.IO, the namespace that defines it. This time I want to use ReadAllText instead of WriteAllText. Here, let's just go ahead and set up this example. Notice that I've already got a project created called HandlingExceptions. Please, pause the video and catch up with me if you like. But here we go. string content equals File.ReadAllText. Then let's, just for the sake of argument, hard-code a location. I'll put this at Lesson22\example.txt, and then we'll do a Console.WriteLine of content. Then finally, Console.ReadLine, like so. Great. So far, so good. Now, here I just want to demonstrate that this actually will work. You can see that I created, off my root, a Lesson22 folder with example.txt in it. Let's go and run the application and show that there is actually text in that file. It's just a quote from Mark Twain. Now that we've got it working correctly, let's break the application by giving it a fake name, just by removing the e on the end of example.txt, and now you can see that we get an exception. It says a FileNotFoundException was unhandled. I'll tell you what, let's do this. Let's stop the application. That's what the developer will see while they're debugging their application, if they were to run across this issue while they're building the application. But what if we were to build a release version of the application by changing the solution configuration and then selecting "Build Solution"? Now we're going to go out to our project's directory and I'm going to act like an end-user and actually attempt to use this application outside of Visual Studio, so outside of the debugging environment, just to see what the end-user would see if they ran into this exception at runtime. The name of this is HandlingExceptions, there you go. Let's go into the bin directory, into the release folder, and then I'm going to double-click "HandlingExceptions". Whoops. Notice that I get this ugly little message here, and it's trying to help me figure out what happened, and I get all of this text here with all this ugly information just spewing out, information that an end-user would probably have absolutely no idea what to do with. Although we can read, here near the very top, that the problem is that it couldn't find the file Lesson22\example.txt, the one with no e on the end of example. Now, a very observant user might be able to look at this and figure it out if he stared at it for a while. But most users are going to be scared off by this, and I don't blame them. We ideally would like to make sure that the end-user never sees anything like that whenever they run our application. Again, Windows will then close the application and notify you if there is a solution available. There's just a mess going on there, and we want to protect our end-users from this mess, from ever seeing this. What we can do is actually wrap a try and a catch around this. We'll do this. There's a couple of different ways to go about it.
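For reference, here is a sketch of the little program as described above, before any try/catch is added; the drive letter in the hard-coded path is an assumption, since the walkthrough only says the folder sits off the root.

```csharp
using System;
using System.IO;   // the File class lives here

class Program
{
    static void Main()
    {
        // Hard-coded path, as in the walkthrough; the @ prefix avoids doubling backslashes.
        // The C: drive is assumed for illustration.
        string content = File.ReadAllText(@"C:\Lesson22\example.txt");
        Console.WriteLine(content);
        Console.ReadLine();
    }
}
```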
I’m just going to take the easiest approach to begin with. What I might do is just go ahead and let’s switch back to the debug configuration and run the application. Now, you may have noticed that the application ran briefly and then went away, and the reason it did was because we ran this, we hid an exception here, the catch statement kicked in, there’s nothing defined in the catch statement, and we continued on. What if we were to move this to right there? We would at least see the application now run for a little bit and we would see no output. So still not an ideal situation, but at least we’re not seeing any exceptions. Let’s go one more step with this and let’s actually catch the exception that occurred. Here we’re going to catch an exception that we’re going to call ex. Now, exception is the most general type of exception that can be thrown. What we’re going to look at in a few moments are very specific versions of exceptions. But this is the most general version so that at least we can see what the problem is and we might do something like, there was a problem, something like that, and even we could provide a description of the problem, so the message from the exception. Let’s go and run the application. At least this time we’re giving the user some feedback here. There was a problem, could not find the file Lesson22/example.txt. That’s better. Again, it would require an observant and slightly more technical end-user to be able to resolve this issue on their own to say, “Wait a second. I wonder if that file might be named something different here on my own hard drive.” As they traverse through and look for the file, they’re, “I see what the problem is. There’s no e on the end of example.” That’s asking a lot of your end-user, but that at least is a step in the right direction, at least we’re giving them some clue as to what the issue is. Now, really what I would like to do is account for all the possibilities and be a little bit more specific. If I were to hover my mouse cursor over this read all text where the issue seems to be mostly, you’ll see that we’ve only been looking at the return value and the input parameters for a call to a method. But notice below that there’s a list of possible exceptions that could occur. Also, if we were to go to System.IO.File.ReadAllText. Let me just copy this and let’s go to Bing. We’re basically going to be searching through Bing here for System.IO.File.ReadAllText. That should help us find an article in MSDN that has a full description of this method. You’ll see that there’s two overloaded versions. We’re using this 1st overloaded version of it. Then if we were to scroll down and pass some of the initial information, there’s a list of exceptions, and it would provide us some scenarios why that particular exception might happen. Like a security exception, the caller doesn’t have the required permission. The path is an invalid format. Interesting. File not found, the file in the path was not found. There’s also a directory not found. Interesting. Maybe the path was too long or maybe we provided a no value. There’s a lot of things that could potentially go wrong whenever calling this method. As developers, we really need to, to the extent that we can, account for all those potential situations, at least the ones that make sense. For example, I could rewrite this code example to begin catching some of those specific examples. For example, let’s take a look here. 
Let’s, first of all, make sure that the directory exists, and then if it does exist, then we’ll check to make sure the file exists. Then if the file in the directory exists but we’re still getting exception, then maybe we let it drop off to this most generalized exception here. I’m going to start from the most specific case and then work my way to the most general case. In this case, I think the file not found exception is probably the most specific of the ones that we’re going to work with. Then we’ll catch the directory not found exception. Then if that doesn’t work, we’ll just print out whatever the exception was. Here I might do something like Console.WriteLine and say there was a problem. Then give it a specific. Make sure the name of the file is named correctly, should be example.txt Then we can do something similar here and say there was a problem. Make sure the directory:\Lesson 22 exists. It’s something like that. Remember, we’re getting the red squiggly, why? Well, because we either need to add another backslash here or add the @ there, remember that from earlier? Let’s go ahead and test our application. I’m going to set a breakpoint here whenever we hit this line of code so that we can watch this execute. Let’s go ahead and step over. Looks like it found the file not found exception, and so we will see, there was a problem. Make sure the name of the file is named correctly. Then let’s go out and let’s rename this to Lesson22a. I think we’re still going to get the same actually exceptions. I know we did get the directory not found exception, good. In this case, we’ll see that error message, make sure the directory Lesson22 exists. Then for any other exception, maybe there’s a permissions issue on the computer, maybe the file is corrupt somehow and we can’t read from it, we would get this last catchall, where we catch just the general exception and print it to the user. When you read the key to this is that we check the most specific exceptions first and the most general or generic exceptions last. There’s also one other item I want to add here, and that’s a finally statement. This is where we would write any codes to finalize, which might mean setting objects to null, closing database connections, but this code is going to run no matter what. We’re just going to go console.WriteLine, closing application now that like so that we can see that this code will run regardless. It’s just that one last chance as a developer to clean up our mess before we stop the execution of the application. You can see that now represented here by closing the application now. Great. Now, you might look at this and you might think to yourself, great, I’m going to use this try catch around everything in my application. Every single line of code in my application, I’m going to wrap it with a try catch. I’ll just take every method and I’m going to blindly just copy and paste everything in there. That’s definitely a strategy that some people take. It’s a little bit lazy, quite honestly. Some developers have done that, but they’re often ostracized by their end users for providing very cryptic error messages. If we were to leave all of these off and just wrap everything and just only show the exception ex, we would just be saying, hey, there was a problem here, figure it out for yourself. That’s not really advocating on part of the user. We’re not protecting the user. Furthermore, we might be tempted to provide some type of debugging information for ourselves as developers. 
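To consolidate the layered handling just walked through, here is a sketch ordered from the most specific exception to the most general, with the finally block at the end; the messages and path mirror the walkthrough, and the drive letter is assumed.

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        try
        {
            string content = File.ReadAllText(@"C:\Lesson22\example.txt");
            Console.WriteLine(content);
        }
        catch (FileNotFoundException)        // most specific case first
        {
            Console.WriteLine("There was a problem. Make sure the file is named correctly; it should be example.txt.");
        }
        catch (DirectoryNotFoundException)
        {
            Console.WriteLine(@"There was a problem. Make sure the directory C:\Lesson22 exists.");
        }
        catch (Exception ex)                 // most general case last
        {
            Console.WriteLine("There was a problem: " + ex.Message);
        }
        finally
        {
            // Runs no matter what: the last chance to clean up before shutting down.
            Console.WriteLine("Closing the application now.");
        }
        Console.ReadLine();
    }
}
```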
Sometimes you’ll see some error codes pop up that no human couldn’t be expected to understand, except the guy who originally wrote the application. The reason developers do this because sometimes they take that exact approach where they just say, hey, we’re going to forget about the user. I’m just going to wrap everything in a try catch and it will pop the error message to me, it’ll flip back to me. I’ll fix the problem and everything will be okay. But, again, this catch all is convenient to the developer, but it’s really frustrating to the end user so you shouldn’t do that. You should strive to put the same amount of attention into protecting your user from having these issues and protect them from having to guess at what to do next by simply helping them fix the problem. Tell them specifically if you possibly can. If you the developer can fix the problem, or at least you can point the user in the right direction, then that’s awesome. You should do that. But if you can’t, well, then at least try to identify the exact nature of the problem and then ask the user for the type of input that you would need as the developer to fix the issue. You don’t want to leave your users feeling stupid that they did something wrong. You want them to feel empowered and you want them to feel like your application is well built and it considered them whenever you were building it. That’s what makes your application polished. It’s what users expect, a reliable experience with no surprises. To recap this lesson, we talked about a number of different things related to exceptions that can happen essentially any time that you the developer are not completely in control and you’re accessing things outside of your boundary of control, outside of your domain. You need to wrap those in a try catch and be thoughtful about the types of exceptions that you’ll be handling, listening for specific exceptions that you know a particular method could raise, and it’s easy to find out. All you got to do is hover your mouse cursor over it, or you go to mstn and you find that method and you look for it like we did, all the potential exceptions that could happen. Then be reasonable about it, but then write the code necessary to handle those exceptions and protect your end user. We looked at the try, the catch, we looked at catching the exception and then using certain properties of the exceptions, like the message property to print out to the user what the issue was. We could also use this to log the error, even send it to a centralized logging service like application insights that’s available from Microsoft Azure to report back to the developers what the issues were. Then you can use the finally code block to clean up any connections you have to file systems, databases. You can set objects equal to null and go ahead and remove all of your references and be very explicit about that before you shut down the application. Hopefully that was helpful. This is a great lesson in building resilient applications, giving the users the experience that they would expect whenever things go wrong in your apps. We’ll have one or two more lessons and then we’re done. We’ll see you in the next lesson. Thank you. Hi, I’m Bob Tabor with Developer University. For more of my training videos for beginners, please visit me at devu.com. In this final tutorial video, in this course, we’re going to discuss event-driven programming. Event-driven programming is really at the heart of Microsoft’s presentation APIs, whether it be for web or Windows. 
Really, for that matter, it’s at the heart of just about every other API in the .NET Framework Class Library. It is so essential that we have to spend a little bit of time here near the end talking about it because it’s that next step that will help you graduate on to building real applications with real user interfaces beyond this course. Events allow you, the developer, to respond by handling those key moments in the lifecycle of the application’s execution, allowing you to write code to respond to an event being raised. Up to this point, in our simple console window applications that we’ve been building, there’s really only been one event that ever gets fired off, and that is the application startup. On application startup, the static void Main is executed, so it’s handling that event, I guess you could say, and this is where we write the majority of our code, and that’s why it actually executes whenever we run the application. Now in a modern user interface, whether it be for Windows or for web, users can interact with the various elements that they see on their screen. They can hover their mouse cursor over given things, like buttons or graphics or text boxes, and they can see maybe a change in the visual presentation, maybe they see a pop up that explains the usage of that given item, perhaps they can click on an item to enact some business functionality inside the application, they can press keys on the keyboard to make things happen, they can type inside the text fields, or they can drag and drop items around the user interface, and each of those will raise a number of events. As a software developer, you can decide to write code that responds to those interactions between the end user and those various user interface elements on screen, and you can also choose to ignore those that really don’t make sense that you really don’t care about, you don’t implement for your application. A given component, let’s say a button, for example, in its development by Microsoft, they included or defined an event, let’s say it’s the click event for that button. Now the developer, you and I, we say, “Hey, I’m going to write a code that performs this business logic that I’m writing here in C# whenever that event, the buttons click event is raised, that I want this code that I write to be executed.” So the developer creates a method and attaches that method to the event, and I’ll show you how we do that in just a little bit here. As the application is running, the user is interacting with the application. Eventually, they click that button. The .NET Framework runtime says, “Okay, if you were listening for the button click event, here it is.” It just happened, and it will notify every one of the methods that you and I, as developers, have attached to that specific event notification, that we’ve registered to that event. Now I’m going to show how events are used in a simple Windows application near the end of this video in a more realistic scenario, but first, I want to start with the absolute basics and keep things as clean as possible, so we’re going to work purely in a console window application. We’re going to work with a timer class, a timer object, and it has one event which is Elapsed. We can say after a certain amount of time, we want you, timer object, to execute or to raise an Elapsed event, and then we’re going to attach our event handler code to that event so that it gets executed every time that event is raised. Maybe it’ll be easier to see this in action than explain it. 
There are a number of different timer classes inside the .NET Framework Class Library. We want to make sure we get the right one. I want to work with System.Timers.Timer, and I don’t want to use that long name every single time, so I’m just going to add that to my using statements up here at the top, like so, and I’m going to say “Timer myTimer equals new Timer”. One of the overloaded versions of the constructor for this timer class allows us to pass in the interval in milliseconds. So every, let’s say, 2000 milliseconds we want the Elapsed event to fire, to be raised. 2000 milliseconds would be simply two seconds, which is an eternity as far as a computer is concerned. Next up, what we want to do is say: myTimer, we know, will raise an event called Elapsed, and so we want to create an event handler, a method that will be executed whenever the Elapsed event is raised by the .NET Framework runtime. So I type myTimer.Elapsed plus equals, and here you can see that we get this little message on screen that says, “Press TAB to insert.” I press “TAB”, and it automatically creates the method stub called MyTimer_Elapsed. It creates it in a very specific way, with a very specific method signature, and it also gives us this little stubbed-out reminder, “Hey, don’t forget, you did not implement me,” a throw new NotImplementedException. Let’s go ahead and remove that for the moment, but notice what happened here as well. We are attaching, or registering, an event handler called MyTimer_Elapsed to the Elapsed event, so this line references this code block right here. Inside here, we can write the code that we want to execute each and every time the Elapsed event is triggered inside of our application. This is where I might write something like Console.WriteLine, and the Elapsed event will send along some event arguments. One of the interesting event arguments is actually the SignalTime, and that will tell me, down to the millisecond, exactly when that particular event was raised. Here we go. Actually, let’s do it this way so that we can format it nicely. We’ll write out “Elapsed”, and then a format of hours, minutes, seconds, and .fff. That should give us down to milliseconds. Now what we’ll do is actually tell the timer to start ticking by calling the Start method, and then we’re going to go Console.ReadLine, which effectively says, “Continue running until somebody hits the Enter key on the keyboard.” Hopefully that makes sense, and let’s run the application. Now we see that every two seconds we get this message. You can see it at 32 seconds, 34 seconds, 36 seconds, 38 seconds, plus some thousandths of a second there. We get that MyTimer_Elapsed method executing. Now let’s do this. Let’s say that we want more than one event handler to execute whenever the event is raised. I can do this all day long. I can say, “Hey, well, let’s go ahead and add another method.” This time it will be called MyTimer_Elapsed1. Notice the 1. I’m going to get rid of this little box again. Inside of this new method, I can do essentially the same thing, and I’ll just use “Elapsed1” versus “Elapsed”. In fact, just to make this obvious, I’m going to use that little trick we learned just a lesson or two ago, where we set Console.ForegroundColor equal to ConsoleColor.Red. That would be for the second one, and then we’ll set Console.ForegroundColor back to ConsoleColor.White. Now we can see clearly the two different event handlers that are both executing whenever the Elapsed event is raised by our timer. Let’s run the application, and now we see the pair running every two seconds.
We could continue adding additional event handlers to the event. That’s what this little plus-equals operator is doing. It’s saying, “However many items are currently subscribed to, or have been attached to, this event, I want you to attach this other one too.” Now we can do the opposite as well. In fact, let’s go Console.WriteLine, and we’ll say “Press enter to remove the red event.” There’s probably a better way to say that, but hopefully you get the idea. Then after this ReadLine, we’ll add another ReadLine for the very end of the application, and in between, what we’ll do is actually unregister, or detach, this second event handler from the event, so we’ll just do the reverse: myTimer.Elapsed minus equals MyTimer_Elapsed1. So now we have removed it, and it should no longer execute whenever the Elapsed event is raised. Let’s run the application. You can see, here we go, and now I’m going to hit the Enter key on the keyboard, and we should only see the white one, the first version of our event handler, firing every two seconds. Hopefully, that all makes sense. This is the simplest scenario that I could think up without having to actually create a real application, and by real application, I mean one with a graphical user interface.
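For reference, here is roughly what the finished console demo looks like at this point, reconstructed from the steps described above. The handler names, the two-second interval, the format string, and the red and white colors all come from the walkthrough; the exact prompt text is approximate.

```csharp
using System;
using System.Timers;   // lets us write Timer instead of System.Timers.Timer

class Program
{
    static void Main()
    {
        // Raise the Elapsed event every 2000 milliseconds (two seconds).
        Timer myTimer = new Timer(2000);

        // Attach both event handlers to the Elapsed event.
        myTimer.Elapsed += MyTimer_Elapsed;
        myTimer.Elapsed += MyTimer_Elapsed1;

        myTimer.Start();

        Console.WriteLine("Press Enter to remove the red event handler.");
        Console.ReadLine();

        // Unregister (detach) the second handler; only the first keeps firing.
        myTimer.Elapsed -= MyTimer_Elapsed1;

        // Keep running until Enter is pressed a second time.
        Console.ReadLine();
    }

    private static void MyTimer_Elapsed(object sender, ElapsedEventArgs e)
    {
        // SignalTime is the exact moment the event was raised.
        Console.WriteLine("Elapsed: {0:HH:mm:ss.fff}", e.SignalTime);
    }

    private static void MyTimer_Elapsed1(object sender, ElapsedEventArgs e)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("Elapsed1: {0:HH:mm:ss.fff}", e.SignalTime);
        Console.ForegroundColor = ConsoleColor.White;
    }
}
```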
But now that we’ve broached the topic, let’s go ahead and build an example WPF application. WPF stands for Windows Presentation Foundation. It’s one of the APIs inside of the .NET Framework Class Library that you use to build Windows applications. In other words, applications that are executed on the Windows desktop, not web pages that are executed on a server with their markup delivered into a browser, but a true application that’s running on the end user’s desktop. I’m going to File, New Project, and here I want to make sure to choose WPF Application. It should be one of the templates that are installed in the New Project dialog. What I’m going to call this is “WPFEvents”, like so, and then click “Okay”. This is not going to be a tutorial on how to create Windows Presentation Foundation application interfaces or how to work with them, but I just want to show you that the basics are generally the same. Here we have a basic application; we can actually run the application, and it will do nothing at all. We just get a white form on screen. But I want to go to my toolbox over here on the left-hand side, and I’ll even pin it down briefly. Then inside of here, where I’ll go is this rolled-up area called Common WPF Controls. Now, what you see on screen might be a little bit different than what I see on my screen. Just make sure you’re working inside of this MainWindow.xaml, and that you see some visual representation of your form here in the main area. You can ignore everything below. That’s the actual markup that will generate what you see visually here. We’re only going to work with the visual editor, but know that there’s some markup going on to produce this. But again, that’s a topic for another day. I’m going to drag and drop a button onto the design surface, like so. I’m going to go over here to the Properties window on the right-hand side. This will allow me to set various attributes of that object, that visual object. For example, I can change the content to “Click me”, like so. I’m also going to add a Label control. I’m going to drag and drop it anywhere on this design surface. I’m going to remove the content completely, but I am going to change the name to “myLabel”, like so. Now, what I want to do is print out the phrase Hello world whenever somebody clicks the “Click me” button. I’m going to choose the “Click me” button again by selecting it here in the visual editor, and then I’m going to look for this little lightning bolt over here in the Properties window and click the lightning bolt. This will show me a list of all of the events that this single control, this button, can raise. Now, a lot of these are going to be for very specific situations, and we can ignore the vast majority of them. But the most important one here at the top is the Click event. Now, I can write C# code that will execute as a result of this Click event being raised by the .NET Framework because somebody, the user, clicked on that “Click me” button. I’m just going to double-click here in this white area, and when I double-click in the white area, it creates this button click method stub. This is going to be my event handler code. Let me use the auto-hide pin to get rid of that, so I can see this. Here what I’m going to do is type in myLabel.Content equals “Hello world”, like so. I’m going to save my work, and then I’m going to start the application by running it. I’m going to click the “Click me” button and it displays the words “Hello world” inside of the little label. Now, what you might not have realized is that whenever we double-clicked inside of Visual Studio in this little white area right here, it created an event handler for us, and it wired up, or attached, or registered that event handler to this button’s Click event. You might wonder, well, where is the code that looks something like button.Click plus equals? Where’s that code at? Well, that’s a little bit difficult to describe. If you take a look at this markup code here at the bottom and scroll all the way to the right, that is essentially what happens right here. This code will get converted into C# at the point of compilation, and it will create that little snippet that we were used to looking at in the previous code example. However, I can create a second event handler in C#. To do this, let’s go to the toolbox; I’m going to actually grab another label and just plop it down anywhere, and I’m going to select that label, then go to the Properties window and change this to be named myOtherLabel, and then I want to change the actual content of that label to be blank as well. Now what I want to do is go to the MainWindow.xaml.cs, and here I’m going to go button.Click plus equals, and I could press “TAB to insert”, but I’m not going to do that. I’m going to name this manually myself, Button_MyOtherClick, like so, and then I hit Enter on the keyboard. Now you can see that I get a red squiggly line. I’m going to put my mouse cursor on that line, hit Control+period, and then choose to generate the method Button_MyOtherClick, and it does it for me, and you’ll see something that looks very familiar here, a method stub with the NotImplementedException. Here I’m going to say myOtherLabel.Content equals “Hello again”, like so. Now, if I did this correctly, whenever my user clicks the button, it will not only fire off this first event handler, but also this second event handler, the one we wired up manually using the technique that we learned in the previous code example. Let’s just run it and make sure this actually works, and it does. The same principles are at play here. The difference is the vast number of events that are accessible to every single visual control in your toolbox whenever you’re working with the Windows Presentation Foundation API.
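Here is a sketch of roughly what the code-behind in MainWindow.xaml.cs looks like after these steps. The control and handler names follow the walkthrough as closely as the transcript allows (button, myLabel, myOtherLabel), but treat them as assumptions; also note that the first handler’s hookup actually lives in the XAML markup that Visual Studio generated when we double-clicked, not in this file.

```csharp
// MainWindow.xaml.cs (sketch of the code-behind described above)
using System.Windows;

namespace WPFEvents
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();

            // Wire up a second handler manually in C#; the first one (Button_Click)
            // is attached in the XAML markup by the designer.
            button.Click += Button_MyOtherClick;
        }

        // Created by double-clicking the button in the visual designer.
        private void Button_Click(object sender, RoutedEventArgs e)
        {
            myLabel.Content = "Hello world";
        }

        // Wired up manually with += in the constructor.
        private void Button_MyOtherClick(object sender, RoutedEventArgs e)
        {
            myOtherLabel.Content = "Hello again";
        }
    }
}
```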
The main takeaway of this lesson is that events are all around us, especially whenever we’re working with Windows and web applications. We’re going to write our code in methods that respond to specific events raised by the .NET Framework runtime, in response to events published by the various objects inside of our Windows apps, our web apps, and so on. We can either rely on Visual Studio to wire things up for us in a very quick and elegant way, like we saw here in our XAML code, where we let it essentially do the wiring for us, or we can take control of that process of wiring, of attaching, of registering an event handler to a specific event, and then write the code ourselves to actually respond to that event being raised. Again, extremely important. Hopefully, the next logical step for you is to move on to other APIs, whether it be something like ASP.NET, or WPF like we’ve worked with a little bit right here, or the Universal Windows Platform to build Windows Store applications. You’ll need to know these concepts for all of those. That pretty much wraps up this lesson and this entire course. We’ll have a couple of closing comments in the next video, and then that’s it. We’ll see you there. Thanks.

Congratulations. You did it. You made it all the way to the end. That’s a huge accomplishment, so don’t sell yourself short. When I look at the view counts for our course, I typically see that the first two or three videos have an astronomical number of views, and then it begins to tail off over time, until by the very end you see a rather small percentage of those who started actually finishing the course. It used to concern me; I’d think maybe I could be a little bit more engaging and keep people’s interest longer, but the good folks at Microsoft Virtual Academy assured me that that happens across the board with every course. I think what’s really going on is that everybody has the best of intentions to follow through and watch an entire course. But then life happens, distractions pop up, maybe changes in priority present themselves, and they interrupt or completely halt progress. But the good news is that that is not you. You were able to make it all the way through to the end, and now you’re well on your way to mastering C#, or at least learning more about C#. Then from here: learning more about .NET, picking a user interface technology, whether for web, Windows, or mobile, maybe learning more about databases, maybe using C# to access APIs around the Internet. We’ll talk about some of those things that you could learn as you move on from here a little bit later in this video. But soon you’re going to be building your own applications, whether for yourself, for your employer, or maybe your future employer. Whatever the case might be, congratulations, and I really encourage you to continue your momentum. Don’t stop here; keep pushing forward, keep taking baby steps on a daily basis. As you know, daily progress, no matter how seemingly insignificant, is how you make real improvements in your life, and it’s how you add skills to your skill set. You’ve taken this great first step in the right direction, and I’m proud of you. I can’t say that enough. Great job, and I’ll continue to encourage you throughout this lesson.
But I want to wrap up this series in this lesson and provide a few suggestions about where to look for answers whenever questions pop up from here on out, as they inevitably will as you continue building applications and learning more about the various APIs that are available to C# developers. We’re also going to talk about the right way and the wrong way to ask for help out on the Internet. Then I’ll make a few suggestions on topics that you might want to investigate from here on out as you continue your self-directed training. But before I get started in earnest, let me say that some of the ideas we talked about here, especially some of the more advanced concepts that I hinted at or that I said you could let go in one ear and out the other, I just wanted to show to you briefly. Some of these things could require weeks or months or even years of thought work before your mind is really able to accept them and digest them. I know I’ve personally spent hours just staring at a wall, thinking about some programming concept, trying to wrap my head around it. The mind needs quiet time to reflect. You need to put yourself into a position to succeed by giving your mind the time to discover, the time to ask the crucial questions, the time to allow those neurons to make those vital connections. Honestly, there are things that I learned about 10 years ago that I’m still trying to really wrap my head around, figuring out what they mean overall, in what contexts they apply, things like that. Many times I might need to read different books and articles, watch videos, and hear what different authors have to say about a given topic before it finally resonates with me and I really understand what the topic is, how it pertains to me, what I need to do with it, things like that. Each author who talks or writes about a given topic says things just a little bit differently, and sometimes that can finally unlock something for me. Don’t forget to keep pushing forward and keep learning, because there are always answers out there. But ultimately, I hope you realize that you really don’t need to know everything right away to get started and to be productive right now. You don’t have to be an expert first before you can begin to write software. In fact, some concepts really only make sense after you have more experience, once you’ve made some mistakes and can see where a given idea applies: oh, I see where that applies, I could use that here in this scenario, and now it finally makes sense to me why I would want to do that. Some of the ideas hinted at in this course are good examples of that, and there are others too. I titled this lesson, where do you go from here? My intent was to answer that in two different ways. First of all, where do you go from here whenever you have a problem and you’re having a hard time resolving it and getting back on your feet again? There’s a good chance at some point, as you’re learning and as you’re building applications, you’re going to run into error messages or something that just doesn’t make any sense. It happens to everybody. But I would encourage you not to fret. I think, in fact, what makes programming such a vital skill is learning how to solve problems: combining your existing knowledge with your ability to reason through what could be the problem, and then your ability to research a given problem until you come up with the solution to it.
The good news is that there is this large community of other developers, inside and outside of Microsoft, that can help nudge you and get you past these problems. These people write blog posts, they answer questions in the various forums, they write books, they record screencast video tutorials like this one. You can tap into that community of knowledge at any time. But let me give you a few tips on how to utilize that community in the most effective way. Let’s suppose that you do hit a wall. You’re experiencing some issue with your application, it’s not behaving the way that you expected, or maybe you’re getting some strange error message popping up every time you try to debug your application. Where do you start to pick apart the problem and get to the root issue? First of all, I research using key phrases directly from the error message itself, and I can’t emphasize enough how vitally important that is. If there’s an error number or a specific phrase that I can latch onto, and I can surround it in quotation marks as I type in my query to Bing.com, that always helps me get closer to a resolution. I might spend 10 to 15 minutes scanning through various blogs and forum posts, or MSDN, as search results pop up from these sources, in order to find a potential solution. If I’m mindful about my search terms, I almost always find a solution. I think the reason a lot of beginners fail at finding solutions to their problems is that they become impatient, they don’t use the exact error messages in their searches, they don’t know how to search correctly, and they’re not willing to put in the time to actually read through pages and pages of content to find a solution to the problem. I can’t emphasize enough: using the exact phrase inside the error message that you see on screen, surrounded in quotation marks, will get you closer to finding a resolution to the issue that you’re having. There are usually other people with similar issues who have posted, and then explained what they did to actually solve that particular issue, so research is vital. I think one vital skill of a modern software developer is becoming great at search. Searching on the Internet to help solve issues that you run into is such a vital skill. Now, it might seem easier to go directly to the forums immediately and to post a question in hopes that somebody else can solve your problem for you. But I assure you that it will actually take longer to ask the question and get an answer than it would if you spent the time searching, refining your searches, and so on, until you find a solution to your problem. Frankly, I almost never have to ask a question in the forums, because a simple search will almost always yield a clue to what I did wrong or what the core issue is. In fact, I get embarrassed when I have to ask questions. Maybe that’s a bad attitude to have, but I don’t want to burden other people when they could be answering other legitimate questions, so I go overboard and try to figure out the issue on my own. Now, for virtually any issue that you run into, I’m almost positive that somebody else has at some point run into that issue before you have, and they’ve already posted the solution to that problem online. You just need to get out there and find it. If you get good at doing research, doing searches on the Internet to help find solutions to your problems, then it’ll get you back on your feet faster than, again, posting in a forum and asking other people for help.
Now, let’s suppose that you’re at your wit’s end and you’ve done searches and there’s nothing out there that really seems to apply to your situation. Nothing you’ve tried actually works to resolve your problem, so maybe at that point you need to ask for help. That’s fine. Here’s what you really need to do whenever you ask for help. You need to ask your question in such a way that you’re going to get a resolution, and how you ask your question is important. You need to be an empathetic requester. In other words, you need to give the people who are willing to help all the information they need to get you back on your feet again, to pinpoint the issue, isolate it, and prescribe a solution. This means that you need to, first of all, clearly state your request. There’s a checklist that I have in my mind of the things that you need to do. Some of these will be obvious and some of them you may not have considered before, so let me just go through them real quick here. First of all, you should start by posting your question in the right place. Find the right category in the given forum or use the correct tags for your post so that the right people are looking at your questions. Posting a C# question in a Visual Basic forum is not going to be all that productive; in fact, you’ll probably get chided for it. Secondly, you also need to choose a simple, clear title for your post so that it attracts the attention of the people who can help, and so that it saves everybody a lot of time. If I see a forum post that just says, “please help,” I usually just skip it. If it says “LINQ to Objects queries yielding unexpected results,” well, okay, that’s oddly specific. That might be something I can help with. It looks like the person put some effort into concisely stating what the issue is. I’ll read the question and see if I can help. Third, include a short synopsis of the issue that you’re having, including the exact error message, the exact behavior you are expecting, and what you’re actually experiencing. Describe what you expected to happen and what happened instead, and keep it concise. Fourth, if at all possible, include screenshots, and ideally go one step further and use some screen editing or image editing software to draw circles that draw the eye’s attention to those parts of the screenshot that are pertinent to your question. Fifth, if possible, include a code example. Make sure to change any super-secret information, passwords, things of that nature, before you post it, but without a code snippet, many problems are unsolvable. I can’t tell you how many times I get people writing e-mails who say, “I’m having a problem with this, what do you think the solution is?” I’m like, show me some code; I need to see what you did to get to that point, and then maybe I can help you figure out what your issue is. Always include, if you can, a snippet of the code that you think is causing the problem. Then be choosy about which code you choose to post. There’s nothing more frustrating than somebody who posts 200 lines of code and expects me to go through it all when a lot of it doesn’t even pertain to the question at hand. I mean, you should spend a little bit of time narrowing it down to a few things. You need to be empathetic to me, the person who’s willing to help you, and identify those lines of code that might be involved in the issue.
Number 6, if a given forum has special HTML tags or shortcodes that you can use to format the code, or some other aspect of your question, to help it stand out in the post, then you definitely should use them. Number 7, tell me what you have done so far to try to resolve the issue. Did it change anything at all? Did it help? Did it lead you to rule out certain possibilities? Again, empathize with me, the person who’s reading your question and trying to help you. This will result in a faster resolution. Otherwise, people will start with the obvious issues and then move forward. There’s that old joke: “Hey, I’m having a problem with my computer,” and the technician asks, “Well, do you have it plugged in?” Everybody laughs at the technician, but there’s a reason why they ask that. It’s because the most obvious answer is the one that most of the time works for people. Don’t be that guy; make sure you list out what you have already tried and eliminated as a possibility. Number 8, tell me which operating system you’re using, which version of Visual Studio and the .NET Framework, which programming language you’re using, and which updates and service packs have been applied, anything that you feel is pertinent to help me help you diagnose the issue. That matters more than you might realize. Number 9, suppose there’s a resolution to the issue and you figured it out. Awesome. Very cool. Maybe somebody made some suggestions that led you to investigate some things and you finally figured it out. That’s great. Take a moment, go back to wherever you asked the question, and describe exactly what you did to resolve the issue, step by step. Use that as a means of better understanding it yourself; articulating it will help you better understand the issue and what the solutions are. It also makes you part of the community, part of the wealth of information that’s out there, so that others in the future who have the same issue can look at your post. You are feeding back into the community just like you’re taking out of the community. Chances are, honestly, that the person you help in the future is you. I can’t tell you how many times I’ve found a resolution to something, and then months later hit up against the same thing, thinking to myself, I know I’ve solved this once, and I’ll go searching for a solution and I’ll find the exact solution and I’ll read it: oh, that was me, that was me who answered the question. It would be nice to be able to search for your own solution online if you knew where it was, so at the very least be courteous to everyone else, and to your future self, and post the answers to the questions that you have. Finally, absolutely, one hundred percent, be polite. People don’t owe you an answer; they don’t owe you anything. If they’re going to help you, they’re going to do it out of the kindness of their hearts. They’re going to be doing it in their spare time as a means of, maybe, furthering their own understanding and helping themselves grow, but also to help you grow as a software developer. Say please and thank you, be nice, and then help other developers as you have the opportunity. I do sell training content, but I give a lot of it away for free. I do ask questions in the forums, but I answer a bunch too. Make sure that you become part of that community and that you are feeding back into it; you help and you support, just like you’ve been supported by others. You might be wondering where you go to find this level of support, where you can ask questions.
That depends. Typically, I’d recommend that you go to msdn.microsoft.com. Here, let’s go out real quick: msdn.microsoft.com/forums, and it might redirect you based on where you are in the world. But typically you can choose from a number of different forums, so you definitely want to find the specific technology or language, whatever the case might be, or do a quick search for those keywords right here inside of the MSDN forums. It is monitored by Microsoft employees, as well as people called Microsoft Most Valuable Professionals, or MVPs. MVPs are usually knowledgeable people who’ve demonstrated their willingness to help, and they’ve been identified by Microsoft as people who are willing to help, so they qualify for that based on some criteria, not the least of which is participation in these forums. Then there’s also another, more comprehensive place you can take a look at: Stack Exchange, at programmers.stackexchange.com, and there might be one other place you could go, also by the same company, that has similar forums, depending on the type of information you’re looking for. Now, in my experience, Stack Exchange is a little bit iffier. It’s a little less beginner-friendly. Maybe that will have changed by the time you visit, and I only say that it’s a little less friendly because not only will you be critiqued for how you ask your question, but very often, if people do a search to help you and they find that there’s already an existing question that’s similar enough to yours, they’ll shoot your question down. Just follow the rules, do an extensive search before you ask a duplicate question, and don’t take offense at criticism about your question. Again, I’d recommend that you search long and hard before actually posting the question, because I’m convinced that virtually everything you could run into has been asked by somebody already. You just need to spend the time to find the answer that you’re looking for. I said that I would answer the question of where to go from here in two different ways, and I’ve answered the question of where to go whenever you have problems. But now I want to answer the question of where to go to learn more about application development, where to go to learn more about software development. At this point, you’ve got a pretty good foundation: basic knowledge of the C# programming language, of .NET, and a little bit about Visual Studio, but there are still a lot of opportunities to practice what you know and to grow beyond that. No matter what type of application you want to build, there are a few fundamental ideas that you need to be acquainted with before you move on. First of all, I would recommend that you learn about relational databases like SQL Server, and that you learn how to access data stored in a database using the Entity Framework, part of the .NET API for accessing data in your applications. Both SQL Server and the Entity Framework have visual tools that you can use inside of Visual Studio to drag and drop and configure your settings and selections, so spending some time learning not only about the tools and the APIs themselves, but also about these visual tools inside of Visual Studio, can pay big dividends. You’ll want to quickly grow past that and learn how to write the code yourself and rely less on the visual designers in Visual Studio. But still, it’s a great tool to help you get to the point where you can be productive quickly.
Next, you’re going to need to choose a presentation technology that you want to master, and this is really more about platform, honestly, than just simple UI. You have no lack of options, whether you want to build web or Windows applications or mobile applications or games or backend processes, whatever the case might be. Let’s say, for example, you want to build web applications. There are a couple of different platforms. The older API is called ASP.NET Web Forms, and there’s a lot of code that was written on the Web Forms platform. But there’s also a newer API called ASP.NET Core MVC, and there are some huge differences between the two, but we don’t have enough time to talk about those. I have content on both of those topics on my own website, devu.com. There’s Windows Forms, which is the older desktop API. Then there’s Windows Presentation Foundation, which is a newer API that companies use for building applications internally, and then there’s the Universal Windows Platform, which you use to build more consumer-oriented applications, typically for sale in the Windows Store. There’s also the Xamarin platform, which Microsoft recently purchased at the time I’m recording this, for building true cross-platform apps for iOS and Android and even Windows Phone. Then there’s a third-party option called Unity, for 3D or 2D depending on the type of game that you want to build, and so you might want to check out Unity for building games. Now, if you’re not really sure about where you should go next and what you should learn next, I really would recommend that if you don’t already know HTML5, CSS3, and JavaScript, that’s a great place to start, and I’ve created several fundamentals series on Channel 9 that are aimed at each of those topics. They’re also available here on Microsoft Virtual Academy, again, at the time when I recorded this. Then beyond that, I recommend learning about the basic tenets of application architecture, particularly how to structure your code into layers of responsibility and what that even means. Splitting your code into layers of responsibility will help you build applications that can withstand the impact of change. And like I said earlier in the series, change comes from a number of different places. It could be changing business rules, it could be changing requirements, or changes in the technology that’s available. It also comes from defects in your software, bug reports that come in where you need to make changes to fix them. But in each case, you can mitigate the negative impact of making changes in your code by encapsulating the responsibilities behind well-established APIs between each of the layers of responsibility. I spend a lot of time talking about this, about application architecture, on devu.com, so if that’s something that interests you, you definitely want to check that out. From there, you want to learn more about basic software design patterns, tactics, and techniques. There are a few keywords that you’re going to want to learn about, and each of these could spawn an entire book or an entire video series. I’ve already alluded to the topic of object-oriented programming. That’s a huge topic that you definitely want to learn about first. If you can just get your mind wrapped around object-oriented programming and how it changes the way you create solutions to programming problems, that’s a huge step in the right direction.
But beyond that, you’re going to want to learn about the principles of software development, principles that guide you to write your code in a very object-oriented way. There are some more generalized principles, like the DRY principle. I don’t know that I’ve ever called it by name, but it’s essentially “don’t repeat yourself.” I said to be leery of copying and pasting. When you do find yourself wanting to copy and paste code into multiple places in your application, you should be stopping yourself and thinking, how can I create this in such a way that I can reuse it? So don’t repeat yourself: extract the code that will be reused into its own method or class, and then reuse it from there. There’s also another principle called YAGNI, Y-A-G-N-I, which is “you ain’t gonna need it.” Which means, yeah, you could probably set yourself up and architect your application in such a way that in the future you could expand, but you’re probably not going to need to do that. You ain’t going to need it. All right. Then there’s another principle, or idea, called dependency injection, which is super important. It’s a design pattern that guides you towards building loosely coupled objects that can then be swapped in and out of the solution, and you’ll want to learn about dependency injection. It’s really crucial to building some of the new-style applications using something like ASP.NET Core MVC, which relies heavily on dependency injection. There’s also a set of principles called SOLID, S-O-L-I-D. Each letter stands for a different sub-principle. They help you realize the promise of object-oriented programming inside of your applications. So again, a lot of ideas that are more conceptual in nature and less code-syntax or tool oriented. All right.
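Since dependency injection is one of those keywords that can sound abstract, here is a minimal sketch of the idea, purely illustrative and not from the course: a class depends on an interface rather than a concrete type, and the concrete implementation is handed in ("injected") through the constructor, so it can be swapped out without changing the class that uses it. The ILogger, ConsoleLogger, and OrderProcessor names are made up for this example.

```csharp
using System;

// The abstraction the consuming class depends on.
interface ILogger
{
    void Log(string message);
}

// One concrete implementation; another (file logger, test fake, etc.) could be swapped in.
class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}

class OrderProcessor
{
    private readonly ILogger _logger;

    // The dependency is "injected" through the constructor.
    public OrderProcessor(ILogger logger)
    {
        _logger = logger;
    }

    public void Process()
    {
        _logger.Log("Order processed.");
    }
}

class Program
{
    static void Main()
    {
        // At the composition root we decide which implementation to inject.
        var processor = new OrderProcessor(new ConsoleLogger());
        processor.Process();
    }
}
```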
You’re also going to want to learn about the process of software development, the workflow surrounding software development and managing software projects. Specifically, you’re going to want to learn about the tools and techniques that you use whenever you work inside of a team, sharing source code between team members using a source code repository like Git, or like Visual Studio Online’s implementation of Git and its own internal source code repository tool. You’re definitely going to want to learn about building unit tests, which are tiny little code-based tests that continually test your code as you write it. Some people have even gone so far as to say that you should write those little unit tests first and then write the actual production code that satisfies them. Then, whenever there’s a change made in the system, you can see what the impact of that change is, because you’ll begin to see these little tests start failing. That is a process called test-driven development, and some people swear by it. Other people swear at it. You’re also going to want to learn about agile project management and agile software development techniques: defining requirements in user stories, playing a game called planning poker to determine what features we can include in a given iteration of our software building process, and using agile boards to manage assigned tasks between the various software developers on the team. You’re going to want to learn about the nature of iterative development and how the term iteration is used, so what iterations are, what the goals of iterations are, and why they’re useful. You’ll also want to learn about developing a spike of functionality all the way through all the layers of responsibility in your system. So I’ve given you probably, what, several dozen different key terms that you could use as a launching point to search on. Honestly, if you were to look at all the terms that I just used, it would take several years to learn about all those things, even in a general way. But fortunately, again, you don’t have to know it all to get started and to be productive today. So, yeah, there’s so much to learn in so little time. But it’s what makes software development fun and exciting, because there’s always something new to learn and some new technique to try. I’ve had friends at Microsoft actually confide in me that it’s a challenge for them to keep up with it all. Nobody just knows all this stuff automatically. It’s a challenge for everybody to keep up with; nobody just knows it all. It just keeps evolving. You just have to really commit yourself to learning. I realized some time ago that my full-time job is not creating video content or training content for developers; my full-time job is really learning. If I create training content, that’s really a byproduct of all the learning that I’m doing. The value that I have to somebody else is my knowledge, and that is the core piece of what I do. Whether it’s building applications for somebody or creating training content, they’re only interested in me because of my knowledge, and how I apply that knowledge is a byproduct of actually gaining the knowledge. So you have to really commit to learning, and I know, since you’re here on Microsoft Virtual Academy, that you’ve already done that to some degree. There’s a whole bunch of great resources available on the Internet, not the least of which are Channel 9 and Microsoft Virtual Academy. Obviously, there’s MSDN, as we looked at earlier in the series. However, before I close this out, let me make one final plug for you to visit my website, if you haven’t already: Developer University, at devu.com, there on screen. I’ve designed the courses there specifically for someone who’s a beginner, to help them get up and running as quickly as possible, pointing out what I feel they really need to know in order to master the key ideas that will lead them to get jobs in the software development industry, providing homework exercises and quizzes, but more importantly, coding challenges that force you to write code and to develop the muscles of your mind that allow you to pick apart a problem and create a solution for it. All right. So please check out devu.com whenever you get a chance. All right. So as I close here, I hope you found this course, and this lesson, to be valuable. If there’s anything that I can ever do to help you, please let me know. You can find me out on Twitter; I sometimes go out there. Hit me up on Facebook, or you can write me an e-mail. But finally, as we close out, I sincerely wish you the best of luck in your career. C# and software development is such an exciting field to be a part of, and I’m really excited for you. So good luck. Thank you for watching this series.
Affiliate Disclosure: This blog may contain affiliate links, which means I may earn a small commission if you click on the link and make a purchase. This comes at no additional cost to you. I only recommend products or services that I believe will add value to my readers. Your support helps keep this blog running and allows me to continue providing you with quality content. Thank you for your support!
The source provides instruction for learning JavaScript. It covers creating files and directories for code and explains the structure of JavaScript statements, including operators, operands, and expressions. Various JavaScript concepts are explained, such as variables, data types, coercion, scope, functions, objects, arrays, loops, the ‘this’ keyword, classes, inheritance, template literals, regular expressions, and built-in native objects. The material also touches on design patterns like the module pattern and new JavaScript features like arrow functions. The explanation includes examples in both Node.js and web browser contexts.
JavaScript Fundamentals: A Comprehensive Study Guide
Quiz
What is JavaScript and who is it designed for?
What problem is Babel.js designed to solve?
Explain the difference between declaring and initializing a variable.
What are the rules for naming variables in JavaScript?
What is coercion in JavaScript, and why can it be problematic?
Explain the difference between the equality operator (==) and the strict equality operator (===).
Explain the purpose of the break statement within a switch statement.
What is variable scope?
Explain the module pattern in JavaScript and what problem it aims to solve.
What are template literals in JavaScript?
Answer Key
JavaScript is a programming language designed primarily for beginners, especially those familiar with HTML and CSS, and even individuals with no prior programming experience.
Babel.js addresses the issue of browser compatibility by transpiling newer JavaScript code into older, more widely supported versions, ensuring functionality across various browsers, including outdated ones.
Declaring a variable involves creating a named storage location in memory using the let keyword, while initializing assigns an initial value to that variable at the time of its declaration.
Variable names must begin with a letter, dollar sign ($), or underscore (_), can contain letters, numbers, dollar signs, or underscores, and cannot be keywords or contain spaces. Additionally, variable names are case-sensitive.
Coercion is the automatic conversion of one data type to another. This can lead to unexpected behavior, especially when performing operations on values of different types (e.g., adding a number and a string).
The equality operator (==) checks for value equality after performing type coercion if necessary, while the strict equality operator (===) checks for both value and type equality without coercion.
The break statement is used to exit a switch statement after a case match has been found and its corresponding code has been executed, preventing the execution of subsequent cases.
Variable scope defines the accessibility and lifetime of a variable within a program, determining where it can be accessed and when it is removed from memory.
The module pattern is a design pattern used to encapsulate code within a module, creating private and public scopes. It primarily solves the issue of polluting the global namespace.
Template literals are string literals that allow embedded expressions. They use backticks (`) instead of single or double quotes and can contain placeholders (${expression}) that are replaced with the values of the expressions.
Essay Questions
Discuss the importance of browser compatibility in web development. Explain how tools like Babel.js contribute to addressing this challenge and enabling developers to use modern JavaScript features while maintaining support for older browsers.
Explain the concept of variable scope in JavaScript, detailing the differences between global, function, and block scope. Provide examples to illustrate how variable scope affects the accessibility and lifetime of variables within a program.
Discuss the advantages and disadvantages of using the module pattern in JavaScript for code organization and encapsulation. Compare and contrast the module pattern with the revealing module pattern, highlighting their differences and potential use cases.
Explain the concept of closures in JavaScript. How do closures enable the association of data with functions? Provide an example to demonstrate how closures can be used to create functions with persistent state.
Explain the behavior of the “this” keyword in JavaScript and its implications for object-oriented programming.
Glossary of Key Terms
Transpilation: Converting source code from one programming language (or version) to another.
Variable Declaration: Creating a named storage location in memory to hold a value.
Variable Initialization: Assigning an initial value to a variable at the time of its declaration.
Identifier: The name given to a variable, function, or other programming element.
Keyword: A reserved word in a programming language with a specific meaning and purpose.
Coercion: Automatic conversion of one data type to another.
Data Type: The classification of a value, determining the kind of data that can be stored and the operations that can be performed on it (e.g., number, string, boolean).
Assignment Operator: A symbol (=) used to assign a value to a variable.
Operator Precedence: The order in which operations are performed in an expression.
Operand: A value or variable on which an operator acts.
Expression: A combination of values, variables, and operators that evaluates to a single value.
Statement: A complete instruction in a programming language.
Function Declaration: Defining a function with a specified name, parameters, and body.
Function Expression: Creating a function as part of an expression.
Function Invocation: Calling or executing a function.
Argument (Parameter): A value passed into a function when it is called.
Return Value: The value returned by a function after it has completed execution.
Code Block: A group of statements enclosed in curly braces ({}).
Decision Statement: A statement that allows the execution of different code blocks based on a condition (e.g., if, switch).
Iteration Statement: A statement that allows a block of code to be executed repeatedly (e.g., for, while).
Variable Scope: The region of a program where a variable is accessible.
Lexical Scope: A variable’s scope is determined by its location within the source code.
Global Scope: Variables declared outside of any function or block have global scope.
Function Scope: Variables declared inside a function have function scope.
Block Scope: Variables declared inside a block (e.g., if statement, loop) with let or const have block scope.
Module Pattern: A design pattern used to encapsulate code within a module, creating private and public scopes.
Immediately Invoked Function Expression (IIFE): A function expression that is executed immediately after it is created.
Closure: A function that retains access to variables from its surrounding scope even after the outer function has finished executing.
Truthy/Falsy: Values that are implicitly converted to true or false when evaluated in a boolean context.
Template Literals: String literals that allow embedded expressions, enclosed in backticks (`).
Regular Expression: A pattern used to match and manipulate strings.
Arrow Function: A concise syntax for writing function expressions.
DOM (Document Object Model): A programming interface for HTML and XML documents. It represents the page so that programs can change the document structure, style, and content.
Event Listener: A function that is called when a specific event occurs (e.g., click, mouseover).
JavaScript Fundamentals: A Beginner’s Guide
Here’s a detailed briefing document summarizing the main themes and ideas from the provided source, with direct quotes to illustrate key points.
Briefing Document: JavaScript Fundamentals
Overview:
This document summarizes a JavaScript course intended for absolute beginners to programming. The course covers basic JavaScript syntax, data types, operators, control flow (decision statements, iteration), functions, object-oriented programming principles, DOM manipulation, and more advanced concepts like closures and the module pattern. It emphasizes the core language rather than web development aspects, initially focusing on console-based applications before discussing browser implementation.
Main Themes and Key Ideas:
Target Audience and Scope:
The course is designed for individuals with HTML and CSS knowledge who want to learn JavaScript, targeting absolute beginners to programming in general. “This course is aimed at those who are absolute beginners so beginners to javascript and frankly given that we’re going to discuss some very basic things like if statements and for loops it’s really designed for those who are beginners to programming in general.”
It focuses on the JavaScript language itself before delving into web browser implementations. “My focus is the javascript language the pure language not web development necessarily although we will discuss javascript in the context of the web browser at the very end of this course.”
The course covers the Javascript language itself, not necessarily web development.
JavaScript Versions and Compatibility:
The importance of browser compatibility is discussed, highlighting the issue of older browsers not supporting newer JavaScript features: “…that are viewing web pages with browsers that were created 10 years ago so clearly in these cases the newer features of javascript many of which we’ll discuss in this course will not be available in those browsers and your javascript won’t even work in those web browsers.”
Two approaches to address compatibility: writing code friendly to older browsers or using transpilers like Babel to convert modern JavaScript to older versions. “You can either attempt to write your code in such a way that it is as friendly as possible to those older web browsers or you can use a tool which will transpile your javascript code.”
The website “can i use” is suggested to check browser support for specific JavaScript features.
Importance of Precision and Syntax:
Emphasis on the need for precise syntax in programming languages. “You can’t just write a text message full of lowercase letters and things of that nature that would make it a well-formed english sentence and rely on the person receiving that text message to understand what you’re trying to say. The computer doesn’t work that way; it needs to know exactly what you’re saying, and so you have to be precise. Precision is the key as a software developer.”
Use of code comments ( // ) to exclude lines from compilation.
Variables and Data Types:
A variable is defined as “basically just a a an area in the computer’s memory where we’re storing a value.”
Explanation of variable declaration using the let keyword. “A keyword is something like let… essentially think of it like a verb in the english language. It’s an instruction to the javascript compiler that we want to do something…”
Rules and conventions for naming variables (identifiers). Identifiers must begin with a letter, dollar sign, or underscore and can contain letters, numbers, dollar signs, or underscores, but no other special characters. Identifiers cannot contain spaces and cannot be keywords.
Discussion of data types: number, boolean, string, and undefined. “The variable itself does not have a data type only the values that we store inside the variables have the data type.”
The typeof operator is introduced to determine a variable’s data type.
Coercion is introduced with the example let a = 7; let b = “6”; let c = a + b;, which yields the string “76”; parsing b to an integer solves it.
NaN is discussed as representing “not a number” if an illegal parsing is attempted.
Operators and Expressions:
Definitions of operators (keywords, symbols) and operands (identifiers, variables, functions).
“By combining operators and operands we create expressions that are then used to compose statements”.
Examples of different types of operators: assignment (=), arithmetic (+, -, *, /), increment/decrement (++, --), comparison (==, ===, !=, !==), logical (&&, ||).
Discussion of the order of operations and using parentheses to control evaluation.
Introduction of the member accessor operator (.) for accessing object properties.
Control Flow: Decision Statements and Iteration:
Explanation of decision statements: if, else if, else, switch, and the ternary operator.
The structure of the if statement is outlined: if (expression), where the expression should evaluate to true or false.
“The ternary operator has kind of got several parts here: there’s an expression, and there’s a question mark that has true or false ramifications.”
The structure of the switch statement is outlined, and the need for break statements to prevent fallthrough.
Explanation of the ternary operator for inline conditional evaluation.
Iteration statements: for and while loops.
“Iterations allow us to loop through a body of code a block of code a number of times until a certain condition is met.”
Example showing how to iterate through an array using a for loop.
The “break” keyword allows premature escape from iterations.
Variable Scope:
“When I use the term scope I mean variables are a little bit like people in so much that variables have a lifespan they’re born they do work and then they die and they’re removed from computers memory when they go out of scope”.
Scope is defined as the region of a program where a variable can be accessed.
Variables declared outside a function have global scope and can be accessed within the function. Variables declared inside a function have local scope and cannot be accessed outside the function.
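A small sketch of global versus local (function) scope; the names and values are illustrative:

```javascript
let greeting = 'hello';            // global scope: visible inside functions too

function greet() {
  let name = 'Ann';                // local scope: only visible inside greet()
  console.log(greeting + ', ' + name);
}

greet();                           // hello, Ann
// console.log(name);              // would throw ReferenceError: name is not defined
```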
Functions:
A “method” is defined as a function that belongs to, or rather is defined inside of, an object.
The declaration and invocation of functions: “To actually invoke a function we have to use the function invocation operator; in this case it’s the opening and closing parentheses.”
Passing arguments to functions. “We’re able to reuse that code but change it up by passing in the name that I wanted to say hello to.”
Returning values from functions using the return keyword.
Discussion of avoiding creating variables in the global scope.
Returning a function from a function.
Example of a function that returns a value (a string) using the return keyword, as illustrated in the sketch below.
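A sketch of declaring, invoking, and passing arguments to functions, returning a value, and returning a function from a function (names are illustrative):

```javascript
// declaration, argument, and a returned string
function sayHello(name) {
  return 'Hello, ' + name;
}
console.log(sayHello('Ann'));          // Hello, Ann

// returning a function from a function
function makeGreeter(greeting) {
  return function (name) {
    return greeting + ', ' + name;
  };
}

let greetFormally = makeGreeter('Good evening');
console.log(greetFormally('Bob'));     // Good evening, Bob
```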
Objects and the this Keyword:
Definition of objects as collections of properties (key-value pairs).
“Simple objects have a single property; other objects can have more properties, including other objects.”
Creating object literals.
Accessing object properties using dot notation (.) and bracket notation ([]).
Introduction of the this keyword, emphasizing that its value depends on how a function is called.
this depends on how a given function is called, not necessarily how/where it was created.
Examples showing how this refers to the global object in a regular function call (in non-strict mode).
Using the call() and apply() methods to explicitly set the value of this.
The value of this works differently in the context of a web page (for example, inside a DOM event handler); a sketch of object access, this, and call() follows this list.
The id attribute on a button element specifies a unique identifier for that DOM element so it can be selected from JavaScript.
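A sketch of an object literal, dot and bracket notation, and using call() to set this explicitly (the person object is illustrative):

```javascript
let person = { firstName: 'Ann', lastName: 'Lee' };

console.log(person.firstName);        // dot notation
console.log(person['lastName']);      // bracket notation

function fullName() {
  // the value of this depends on how the function is called
  return this.firstName + ' ' + this.lastName;
}

console.log(fullName.call(person));   // this is explicitly set to person: Ann Lee
```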
Template Literals:
Template literals are delimited with backticks (`) and allow string interpolation.
They’re “a nice addition to the javascript language here again they can make your code more compact and readable allowing you to do some interesting things in line that would require a lot of appending of strings previously.”
Example demonstrating how to embed expressions within template literals using ${expression}.
Ability to create multi-line strings without concatenation.
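A sketch of interpolation and a multi-line string with template literals (the values are illustrative):

```javascript
let name = 'Ann';
let total = 3 + 4;

console.log(`Hello, ${name}! The total is ${total}.`);

let multiLine = `line one
line two`;                     // the newline is preserved without concatenation
console.log(multiLine);
```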
Regular Expressions (Regex):
Regular expressions are patterns used to match character combinations in strings.
Use the test() method to check if a string matches a pattern.
Use the replace() method to replace matched patterns with a new string.
There’s a lot to regular expressions; for simple checks, methods such as test() provide a straightforward way to see whether a string matches a pattern (see the sketch below).
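A sketch of test() and replace() with a simple digit pattern (the pattern and strings are illustrative):

```javascript
let digits = /\d+/;                           // one or more digits

console.log(digits.test('Order 42'));         // true
console.log(digits.test('no numbers here'));  // false

console.log('Order 42'.replace(/\d+/, 'NN')); // Order NN
```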
Arrow Functions:
Arrow functions provide a more concise syntax for writing function expressions.
Arrow functions drop the function keyword: the input parameter is written before the fat arrow (=>), and the body, for example a console.log call, follows it (sketched below).
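A sketch of the same logic written as a function expression and as an arrow function:

```javascript
// function expression
let square = function (n) {
  return n * n;
};

// equivalent arrow function
let squareArrow = (n) => n * n;

console.log(square(4), squareArrow(4));   // 16 16

// arrow function with a block body
let logName = (name) => {
  console.log(name);
};
logName('Ann');
```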
Date Objects:
Creating Date objects using the new Date() constructor.
Using methods like getDate(), getDay(), getMonth(), getFullYear(), and getTime() to extract date and time components.
Calculating elapsed time between two dates.
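A sketch of creating Date objects, reading components, and computing elapsed time (the dates are illustrative):

```javascript
let now = new Date();
console.log(now.getFullYear(), now.getMonth(), now.getDate(), now.getDay());

let start = new Date(2024, 0, 1);                // months are zero-based: 0 = January
let elapsedMs = now.getTime() - start.getTime(); // milliseconds between the two dates
console.log(Math.floor(elapsedMs / 86400000) + ' days elapsed');
```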
String Methods:
split(): Splits a string into an array of substrings based on a separator.
slice() and substring(): Extracts a portion of a string.
endsWith() and startsWith(): Checks if a string ends or starts with a specified string.
includes(): Checks if a string contains a specified substring.
repeat(): Repeats a string a specified number of times.
trim(): Removes whitespace from both ends of a string.
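A sketch exercising several of the string methods listed above (the strings are illustrative):

```javascript
let raw = '  JavaScript is fun  ';
let text = raw.trim();                 // 'JavaScript is fun'

console.log(text.split(' '));          // [ 'JavaScript', 'is', 'fun' ]
console.log(text.slice(0, 10));        // 'JavaScript'
console.log(text.substring(11, 13));   // 'is'
console.log(text.startsWith('Java'));  // true
console.log(text.endsWith('fun'));     // true
console.log(text.includes('is'));      // true
console.log('ha'.repeat(3));           // 'hahaha'
```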
Array Methods:
push() and pop(): Adds/removes elements from the end of an array.
shift() and unshift(): Adds/removes elements from the beginning of an array.
splice(): Adds or removes elements from any position in an array.
concat(): Concatenates two or more arrays.
slice(): Creates a new array containing a portion of an existing array.
join(): Joins all elements of an array into a string.
sort(): Sorts the elements of an array.
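A sketch exercising the array methods listed above (the array contents are illustrative):

```javascript
let letters = ['b', 'c'];

letters.push('d');                // [ 'b', 'c', 'd' ]
letters.unshift('a');             // [ 'a', 'b', 'c', 'd' ]
letters.pop();                    // removes 'd' from the end
letters.shift();                  // removes 'a' from the beginning
letters.splice(1, 0, 'x');        // insert 'x' at index 1: [ 'b', 'x', 'c' ]

let more = letters.concat(['y', 'z']);    // [ 'b', 'x', 'c', 'y', 'z' ]
console.log(more.slice(1, 3));            // [ 'x', 'c' ]
console.log(more.join('-'));              // b-x-c-y-z
console.log(['pear', 'apple'].sort());    // [ 'apple', 'pear' ]
```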
Error Handling (Try/Catch):
Using try, catch, and finally blocks to handle exceptions.
Code in the try block runs normally; if an exception occurs, the catch block is invoked; and the finally block runs at the end regardless.
Throwing custom exceptions using throw new Error().
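A sketch of try, catch, finally, and throwing a custom error (the divide function is illustrative):

```javascript
function divide(a, b) {
  if (b === 0) {
    throw new Error('Cannot divide by zero');   // custom exception
  }
  return a / b;
}

try {
  console.log(divide(10, 2));    // 5
  console.log(divide(10, 0));    // throws before logging
} catch (err) {
  console.log('Caught: ' + err.message);
} finally {
  console.log('This runs regardless');
}
```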
DOM Manipulation:
Selecting DOM elements using document.getElementById().
Adding event listeners to DOM elements using addEventListener().
Creating new DOM elements using document.createElement().
Manipulating element attributes and styles.
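A browser-only sketch of these DOM techniques; the element id myButton is hypothetical and assumes the page contains such a button:

```javascript
// assumes the page contains: <button id="myButton">Click me</button>
let button = document.getElementById('myButton');

button.addEventListener('click', function () {
  let note = document.createElement('p');
  note.textContent = 'Button was clicked';
  note.style.color = 'green';              // manipulate a style
  note.setAttribute('class', 'clicked');   // manipulate an attribute
  document.body.appendChild(note);
});
```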
Module Pattern and Closures:
The module pattern uses an immediately invoked function expression (IIFE) to create a private scope and return an object with public methods.
“These are topics that could have easily been covered much earlier in the course, but because I was trying to get somewhere I left those details off till now, so hopefully you don’t mind that we’re going to circle back and backfill some of the topics that we just didn’t cover in a lot of depth.”
Each closure carries its own lexical environment, which defines the variables that closure can see.
Closures allow a function to access variables from its surrounding scope even after the outer function has finished executing.
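A sketch of the module pattern: an IIFE creates a private scope, and the returned object’s methods are closures over that scope (the counter is illustrative):

```javascript
let counterModule = (function () {
  let count = 0;                   // private: not reachable from outside the IIFE

  return {
    increment: function () {       // public method, a closure over count
      count++;
      return count;
    },
    current: function () {
      return count;
    }
  };
})();

console.log(counterModule.increment());  // 1
console.log(counterModule.increment());  // 2
console.log(counterModule.count);        // undefined (count stays private)
```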
Conclusion:
This course provides a comprehensive introduction to JavaScript for absolute beginners. It systematically covers fundamental concepts, equipping learners with the knowledge to write basic JavaScript programs. The emphasis on core language features and progressive exploration of concepts helps build a strong foundation for further exploration of web development and other JavaScript applications.
JavaScript Fundamentals: A Concise Guide
JavaScript FAQ
What is the primary goal of this JavaScript course?
The course aims to teach JavaScript to absolute beginners, even those new to programming in general. It focuses on the core JavaScript language itself, rather than web development aspects, although it does touch on JavaScript’s use in web browsers at the end. The course emphasizes precision and understanding how to write code that a computer can interpret accurately.
What are some important considerations when choosing which JavaScript features to use, given different browser compatibility levels?
You have two main choices: write code that is compatible with older browsers, or use a tool like Babel to transpile your modern JavaScript code into an older version that is compatible with most browsers. The website “Can I Use” helps determine browser support for specific JavaScript features.
What are variables in JavaScript, and what are the rules for naming them?
A variable is a named storage location in the computer’s memory that can hold a value. Variable names (identifiers) must start with a letter, dollar sign, or underscore. They can then contain letters, numbers, dollar signs, or underscores, but no other special characters (including spaces). Variable names are case-sensitive, and you cannot use reserved JavaScript keywords as variable names.
What are data types in JavaScript, and how does JavaScript handle them?
Data types define the kind of data that a variable holds. JavaScript is dynamically typed, meaning the variable itself doesn’t have a fixed data type; only the value stored in the variable has a data type. Common data types include number, string, boolean, and undefined. JavaScript can perform type coercion, automatically converting data types in certain situations (like concatenating a number and a string), although this can sometimes lead to unexpected results.
What are operators and operands, and how do they relate to expressions and statements?
Operators are symbols or keywords that perform actions (like +, =, let), while operands are the values that operators act upon (variables, literals, function calls). Operators and operands form expressions, which evaluate to a single value. Expressions are used to compose statements, which are instructions that the JavaScript interpreter can execute.
What are functions in JavaScript, and how can we define and use them?
Functions are reusable blocks of code that perform specific tasks. You can define functions using function declarations, which involve giving a name to a code block. You can also create function expressions, assigning a function to a variable. To execute a function, you call it by its name followed by parentheses (the function invocation operator). Functions can accept input parameters (arguments) and return values.
What are the module pattern and revealing module pattern, and what problems do they solve?
The module pattern and revealing module pattern are design patterns used to encapsulate JavaScript code, reducing the impact on the global namespace and promoting code organization. They use immediately invoked function expressions (IIFEs) to create a private scope and return an object that exposes only specific variables and functions (the module’s “public” interface). The revealing module pattern makes it clearer what the “public” methods will be by declaring them at the end.
How does the this keyword work in JavaScript, and how can we control its value?
The this keyword refers to the context in which a function is called. Its value depends on how the function is invoked. When a function is called as a method of an object, this refers to that object. You can explicitly control the value of this using the call() and apply() methods. In arrow functions, this is lexically bound, meaning it inherits the this value from the surrounding scope.
JavaScript Syntax Elements
The sources discuss JavaScript syntax and some of its elements.
Key aspects of JavaScript syntax:
Statements: JavaScript files contain one or more statements that execute sequentially from top to bottom. A statement is a complete instruction, similar to a sentence in English.
Expressions: Statements are made up of one or more expressions, which consist of operators and operands. By combining operators and operands, expressions are created that are then used to compose statements.
Operators: Operators are keywords or symbols that perform actions, such as the addition operator (+), the string concatenation operator (+), and the assignment operator (=).
Operands: Operands are identifiers, such as variable names and function names. Programmers give operands their names.
Keywords: Keywords are like verbs that instruct the JavaScript compiler to perform actions. Examples include let, var, and const.
End-of-line character: A semicolon (;) typically indicates the end of a statement.
Code comments: Double forward slashes (//) comment out a single line of code, instructing the compiler to ignore it. Multi-line comments begin with /* and end with */.
Precision: Being precise is key when writing code; the computer needs to know exactly what to do.
Data types: Values, not variables, have a data type, which describes what you intend to do with the data. Examples include number, boolean, and string.
There are naming rules and conventions that developers need to follow. For example, code conventions include using camel casing, choosing descriptive and clear names, and being consistent in the style and naming conventions used.
JavaScript Variable Declaration Guide
Variable declaration in JavaScript involves reserving a space in the computer’s memory to store and retrieve data during an application’s lifespan. There are several parts to a variable declaration.
Key aspects of variable declaration:
Keywords: Keywords are the way to declare a variable.
let: The let keyword is an instruction to the JavaScript compiler to create a variable. It declares a block-scoped local variable, optionally initializing it to a value. The recommendation is to abandon var unless it is required and to use let instead.
const: The const keyword is used when the variable is never intended to change its value. If a new value is assigned to a const variable, the JavaScript compiler will throw an error.
var: The var keyword was the original way to declare a variable in JavaScript, but its usage is nuanced and can be problematic for new developers.
Identifier: An identifier is the name assigned to a variable so it can be referenced later.
Assignment operator: The assignment operator (=) assigns a value to a variable.
Initialization: Initialization refers to assigning a value to a variable at the same time it is declared. When a variable is declared but not initialized, its value is undefined. It is preferable to initialize variables at the moment of declaration.
Scope: Scope refers to the accessibility of variables in different parts of the code. Variables declared outside of a function have global scope and can be accessed from anywhere in the code. Variables declared inside a function have local scope and can only be accessed within that function.
There are several rules for naming variables:
All variable names must begin with a letter, a dollar sign ($), or an underscore (_).
Variable names can contain letters, numbers, dollar signs, or underscores, but no other special characters or spaces.
Keywords cannot be used as variable names.
Variable names are case-sensitive.
There are also code conventions that are good practices to follow:
Variable names should be descriptive.
Camel casing should be used for multiple words, where the first word is lowercase and subsequent words have a capital letter.
Be consistent by following the same naming convention throughout the application.
Do not rely on case; avoid using the same name with different casing for different variables.
JavaScript Function Execution: Definition, Syntax, and Scope
Here’s a discussion of function execution based on the provided sources:
Definition: A function is a block of code with a name that can be called to execute the code within the block. Functions are a primary construct in JavaScript for getting things done.
Parts of a function: A function includes a name/identifier, parentheses for arguments/input parameters, and curly braces that define the body of the function.
Function declaration: A function declaration begins with the keyword function, followed by an identifier (the function name), then parentheses (), and finally curly braces {} enclosing the code to be executed.
Calling a function: To execute a function, it needs to be called or invoked by its name followed by parentheses (), the function invocation operator.
Arguments: Arguments are values passed into the function when it is invoked, which the function can then use.
Function expressions: A function expression is similar to a function declaration but does not require a name. Function expressions are useful when a function is needed temporarily and will not be called again.
Return values: Functions can return values using the return keyword, passing data back to the caller.
Variable assignment: Functions can be assigned to variables, allowing the function to be invoked using the variable name and the function invocation operator.
Scope: The location where a variable is defined determines its accessibility. Variables defined outside of a function are accessible inside the function, but variables defined inside a function are not accessible outside the function.
this keyword: The this keyword refers to the object that a function is associated with. The value of this depends on how the function is called.
call and apply: The call and apply methods can be used to explicitly set the value of this inside a function.
Hoisting: Function declarations are “hoisted” to the top of the execution environment, so they can be called before they are defined in the code.
Immediately Invoked Function Expressions (IIFE): An IIFE is a function expression that is defined and then immediately executed. This pattern is often used to create a private scope for variables and functions.
Arrow functions: Arrow functions provide a shorthand syntax for defining functions.
JavaScript Object Creation Methods
Object creation in JavaScript can be achieved through several methods, each with its own nuances.
Object Literal Notation
Objects can be created using object literal notation, defining the object and its properties directly using curly braces {}.
Properties within the object are defined as name-value pairs, separated by colons.
The property names are identifiers, similar to variable names, and the values can be of any data type.
Each property definition is separated by a comma, except for the last one.
After the object is defined, it can be assigned to a variable.
For example: let car = { make: 'BMW', model: '745li', year: 2010 };
Constructor Functions
Objects can also be created using constructor functions.
A constructor function is a regular JavaScript function that is called with the new keyword.
By convention, constructor functions are named with a capital letter.
The new keyword performs the following actions:
It creates a new empty object.
It sets the this value of the function to the new object.
It executes the function, adding properties and methods to the new object.
It returns the new object.
Inside the constructor function, the this keyword is used to refer to the object being created.
For example:
```javascript
function Car(make, model, year) {
  this.make = make;
  this.model = model;
  this.year = year;
}

let myCar = new Car('BMW', '745li', 2010);
```
Classes (Syntactic Sugar)
JavaScript also has a class syntax, introduced in later versions, which provides a more structured way to create objects and deal with inheritance, but it is essentially syntactic sugar over the existing prototype-based system.
Classes are declared using the class keyword, followed by the class name.
The constructor method is a special method within the class that is automatically called when a new object is created using the new keyword.
Methods can be defined within the class, outside the constructor.
Classes support inheritance using the extends keyword.
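The output line below refers to a class example that is not reproduced in these notes; the following is a hedged reconstruction, assuming a SportsCar subclass whose revEngine() method prints that message:

```javascript
class Car {
  constructor(make, model) {
    this.make = make;
    this.model = model;
  }
}

class SportsCar extends Car {      // inheritance via extends
  revEngine() {
    console.log(`Vroom goes the ${this.model}`);
  }
}

let mySportsCar = new SportsCar('Dodge', 'Viper');  // make/model values are assumed
```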
mySportsCar.revEngine(); // Output: Vroom goes the Viper
Object.create()
The Object.create() method can create a new object, using an existing object as the prototype.
This allows for prototypal inheritance, where the new object inherits properties and methods from the prototype object.
Changes to the prototype object are reflected in the new object through the prototype chain; properties assigned directly on the new object shadow the prototype’s properties rather than changing them.
```javascript
let originalCar = {
  make: 'BMW',
  model: '745li',
  year: 2010
};

let newCar = Object.create(originalCar);
console.log(newCar.make); // Output: BMW
```
JavaScript Fundamentals for Absolute Beginners
The Original Text
hi my name is bob tabor and in this course you’ll learn about javascript the language this course is aimed at those who are absolute beginners so beginners to javascript and frankly given that we’re going to discuss some very basic things like if statements and for loops it’s really designed for those who are beginners to programming in general so if you know some html and some css and you want to learn javascript awesome you’re in the right place also there’s nothing specific to windows in this course the tools that i use will be free and available in mac and linux as well so you should be able to follow along no matter which operating system you’re comfortable with using now my background is really not all that important but in case you’re curious i am a software developer by day and by night i run a website called developeruniversity or devview you can visit me at http://www.devview.com occasionally microsoft invites me to create courses and what you see here is a collaboration between myself and the good folks at microsoft virtual academy i’ve been creating courses like this since 2004 and i created a very successful version of a javascript course way back in 2011 it’s been viewed millions of times and i’ve got a lot of very positive feedback about it this is a rewrite a complete rewrite of that course because uh frankly javascript has changed dramatically in the what six or seven years since i originally recorded that course uh and so if you’re already a software developer coming from a different programming language just kind of pick back up what i said earlier this might move a little bit slow for you it just wasn’t designed with you in mind there might be some other courses that can move you through the the introductory material a little bit more quickly than what i plan on than than the pace that i plan to take with this course and my focus is the javascript language the pure language not web development necessarily although we will discuss javascript in the context of the web browser at the very end of this course but i felt like teaching javascript and how it’s implemented in the web browser clouded the discussion of javascript the language itself so we’re going to be writing what amounts to console or rather command line style applications to isolate javascript the language as purely and simply as possible without clouding it with a bunch of html and css and things like that we’re going to discuss the language we’ll discuss popular patterns that have emerged from the javascript development community to help overcome some of the challenges associated with working with such a highly dynamic language such a unique language and sometimes kind of a quirky language the last time that i recorded the course uh about javascript way back in 2011 the the course actually had a fairly long shelf life and so much has changed with javascript since then that uh i necessitated that i actually play catch-up and kind of learn some of the new features that were added because i wasn’t keeping my skill set uh up that’s how quickly things change out from under you if you’re not careful if you know anything about javascript you know that the community around javascript is moving extremely quickly it’s the most popular programming language not just in the web browser where there are hundreds of javascript frameworks and libraries that you can leverage in your own applications but it’s also becoming one of the most popular languages for server-side web development meaning the code that actually runs on 
a web server that can perform business logic that can interact with data storage uh databases and and other uh styles of data storage and we’re not going to talk about any of those topics in depth per se but it is important to know that it all starts with a basic understanding of the things that we will discuss in this course the absolute basics of javascript so since this course may have a long shelf life it’s important to know that some of the features of the latest version of javascript which i will be covering in this course may not yet be implemented in all web browsers depending on when you view this course and then uh you have to take into account that some of the people viewing your website for example might be using very old web browsers and so you have to keep that in account as well so i’m going to make two general suggestions and i’m going to try to remind you about these at the very end there are still people on the internet that are viewing web pages with browsers that were created 10 years ago so clearly in these cases the newer features of javascript many of which we’ll discuss in this course will not be available in those browsers and your javascript won’t even work in those web browsers so you have a choice at that point you can you can go one of two routes you can either attempt to write your code in such a way that it is is as friendly as possible to those older web browsers or you can use a tool which will transpile your javascript code that you write using the latest features of javascript it’ll transpile it back into a version of javascript that is compatible with all web browsers even those that were built 10 years ago and it uses a combination of techniques to accomplish this we’re not going to get into any of those but if you want to take the first tact if you want to be careful with the javascript that you write and only use those those original features i guess you can say of javascript or the early features of javascript there’s a website for you you want to take a look at this website called can i use and so we can take a look at maybe one of the newer features of javascript the let keyword i’ll type it in here in the search box can i use and it will show us the let keyword gives a quick description of what it is and then it will show for the current versions of each of the web browsers whether it’s supported or not you can see that the let keyword does have wide adoption across all modern web browsers with a couple of exceptions now if you want to go ahead and use the absolute latest version of javascript and then take that second text where you transpile your code so that it’s come backwards compatible with as many versions of the various web browsers as possible then you want to check out a website like anatool actually called babel js so you can find it at babeljs.io and it will again use a combination of techniques to uh to take your code you can see some of this little animation that’s on the page right now you can actually use this to type in some code here and see how it converts it into the older style javascript and i’m not going to cover how to use babel in this course but you should know up front that writing javascript for web browsers requires that you give some careful thought to how your javascript will ultimately be consumed and who your targets are and that definitely means that you’re going to have to take into consideration uh the fact that some people will be using older web browsers writing javascript that will run on a web server using a 
framework like node.js is a bit easier because well you’re going to have some some upfront knowledge about where that code will ultimately be executed but this is not a course specifically about node.js either even though we’re going to use node.js uh as a as a lightweight means of executing these little tiny javascript examples that we’re going to create throughout this course so you’re going to learn enough about node.js to be able to write a simple tiny application but it won’t do anything cool like serve up a web page however i’m sure there are other courses here on microsoft virtual academy and elsewhere that will help you kind of take that next step so the game plan for this course is to start in the very next video by installing the tools that you’ll need to get started and then we’re going to start with the absolute abcs of javascript and programming in general and i encourage you to follow along by typing in the code that i type in the video and that’s you know typing it yourself is the absolute best way to learn it starts to develop muscle memory you’ll have many of those aha moments where you realize oh i see how these two things are related you can hear it and that’s one thing but to type it in and to see it on your own computer working is something entirely different i highly recommend that you you become an active learner by typing in the code yourself but i encourage you also to pause and rewind the video as many times as you need this isn’t a race you don’t consume these kinds of videos the same way you would watch a tv show a movie or a youtube video if something’s not clear to you don’t just let it go in one ear and out the other and worry and say i’ll figure it out later no stop down and figure it out now because you never know it might be something foundational you’ll need to know uh in the next lesson and the next lesson but by the end of this course you’re going to be well positioned to move on to a more advanced javascript course to learn how to use modern client frameworks like react and view and angular or you’ll be well positioned to learn more about server-side framework libraries frameworks and libraries like node.js and express.js and others but no matter what you’re going to have a great foundation to build on if something i say doesn’t make sense again i can’t stress this enough seek out other sources online and you’re going to ultimately want to know something from me i’ve recorded enough of these courses i know the questions that are already coming you’re going to ask me if there’s a book that goes along with this course that i could recommend and i’m sorry i don’t really have a specific recommendation honestly my recommendation is that you exhaust the dozens if not hundreds of javascript online resources uh where you can simply use them for free and find them in an instant uh if you want to get more explanation about any given idea that are that’s covered in this course okay so let’s go and get started i want to encourage you to take your time don’t feel overwhelmed stick with javascript stick with this course and you’re going to be well rewarded i promise it’ll be more difficult than playing a video game then watching a movie or reading a book but i promise you you’re gonna wind up enjoying it even more than any of those things even if i wasn’t paid to write code i would do it because it’s fun it’s mentally challenging and you get this rush whenever you you write code and you see it working and you’re like wow this is awesome so i’m glad you’re going to 
get an opportunity to do that it’s the most fun you’re going to have on a computer i promise and you’ll you’ll wind up enjoying it so stick stick with it and i’ll try to encourage you along the way all right so we’ll get started in the next video see you there thanks all right so let’s get started uh we’re going to install the tools that we’re going to need for the remainder of this course fortunately we don’t need a lot and everything is free and everything i show you will work regardle regardless of which operating system you currently have installed so uh regardless of whether you’re using windows mac or linux everything i show you will be available for those platforms the first thing we’re going to need is a web browser i’m pretty sure you already have one of those installed any will do i would probably recommend that you either use microsoft edge or you use uh google chrome the second tool that we’re going to need to install is node it is the the javascript runtime it’s what will actually execute the code that we write and we’ll talk about that more in just a moment and then we’re going to need an authoring tool something where we can actually type the code in now in the past i’ve used notepad to actually demonstrate because i didn’t want to like you know recommend one tool over the other but then microsoft came out with visual studio code it’s available on uh all three platforms so it’s also available for free so no matter what you’re using you should be able to download and follow along now you may already have a favorite tool for creating web pages and so forth feel free to use that i’m not going to do anything that’s so visual studio code specific that it will exclude you please follow along no matter what tool you prefer but let me put in a good plug for visual studio code i’ve been using it pretty much as one of my exclusive tools in my full-time job for the last three months and uh it’s it’s really good so i highly recommend it let’s get started we’re going to need node and you may already have node installed so let’s just see if you do or not let’s go and in windows i’m going to open up a command prompt and i’m going to type in node dash v if i had node installed it would display the version of node that i currently have installed i don’t have node installed on this computer so i get an error message that’s good so to begin we’re going to go to node js i can type there we go nodejs.org and again regardless of which operating system you’re using you should be presented with an opportunity to download either the supported version or the current version which has like the latest features you don’t need that just just use the lts version which is recommended for most users as long as you’re using the version that i’m using or greater we should always be in sync again we’re not going to use any really advanced features of node so this shouldn’t really matter much i’m going to go ahead and run it run the installer here what you see next depending on which operating system you’re using uh will you may see something a little bit different than what i see on screen but hopefully you’ve installed things frequently enough that you can work your way through it so here we have the node js setup wizard and i’ll just walk my way through agree to the license i’m going to pick a place on my hard drive to install this there are some options i’m not going to really do much of anything but i do want to make sure that in windows that this is added to my path this will make sure that node is 
available in any directory of my hard drive so when i type in node v from anywhere in my command prompt it’ll it’ll pop up okay so just make sure that everything is selected you’ll be fine it’s not that large next i’m going to have to agree to windows uac you might see something different here on the mac or linux i’m going to go ahead and agree to that little security prompt and it only takes a minute or two to install node and then we’ll move on but basically node in a nutshell is uh the v8 javascript engine that they ripped out of chrome they added some tooling around it to support things for like http working with with requests and responses and with the file system and they created one of the most robust web server tools that is available today and many large applications are using node currently to host their applications we’re not going to use it for that we’re going to use it for something much more mundane which is to really just write out little text messages to a console window as we get started then we’ll graduate on and use it in web pages much later in this course all right so i should have it installed right so i come over here and it still says it’s not installed i’m going to have to reboot my computer so let’s pause i’m going to pause the recording of the video right here i’m going to reboot and then when i come back in we should be able to move on from there all right so i’ve rebooted let’s open up a command prompt type in node v and i can see the version number so we’re successful the next step is to install visual studio code visual studio code is different than the full version of visual studio so visual studio community professional or enterprise visual studio code is a lighter weight code editor mainly used for web development but i know people that use it to develop c-sharp applications and other type of of applications where you can uh use the the command line tools to compile your code and things of that nature that’s not something i would ever want to do it’s great for web development and that’s what we’re going to use it for for authoring our javascript files and then executing node commands in a built-in little command window command prompt like we see there again available for all uh operating systems you just go to code.visualstudio.com it should be able to detect which operating system you’re currently using and it gives you a download option for that all right and we’re going to go ahead and run it in place again windows uac prompts me to make sure that i am authorized to install it we get to the to the code setup wizard i’m going to go ahead and accept the agreement and we’re going to work our way through the defaults sure and you can see that we can also add visual studio code to the path which will become available after restart i don’t need that necessarily for this course but hey you know it doesn’t hurt in fact let’s go ahead and use it for everything here that’s up to you you can read those options and choose what you want but for my purposes this will work just great and we’ll see throughout this course some of the things that visual studio code will do for us as we’re typing our our code simple things like uh like code coloring and code completion managing our files giving us an environment to execute command line tools like the node command line tool and there are many things like that intellisense others that will give us the tools to to hopefully allow us to author our javascript code accurately so let’s go ahead and launch it and let’s just do 
what i call a quick smoke test and we don’t need get for this course i’m just going to hit close on that so what we’ll do is go to the explorer it’s the little icon in the upper left hand corner here let me kind of pull this out and make this a little bit sized a little bit more nicely here i’m going to close down the welcome screen i am going to click open folder and i’m going to go on for me i’m going to go to my c drive and i’m going to create a source folder now depending on your operating system or what your preferences are you may want to create a folder somewhere else but create a folder because we’re going to put some some javascript files and later some html and css files in that folder and we’re going to want a folder structure so right here in the open folder dialog i’m going to right click and select new folder i’m going to call this source lowercase s and source and then select that folder now that becomes the working folder that i’m going to use to add additional files and and all the work that we do for this course inside of there here it doesn’t really wants me to put get install get and i don’t want to do that what i really want to do is go to terminal all right and depending on which operating system isn’t you’re installed on you might see something different here in windows you see powershell doesn’t really matter as long as you get a command prompt and here i’m going to type node v and i can see that that’s awesome and then what i want to do is add a file inside of this folder this working folder so i’m going to click on this little file with the plus symbol in the upper left hand corner i’m going to type in app.js and it opens up a new file here in the main area with a little js icon right next to it and here i’m going to type all lowercase console.log hi i’m gonna go to the end and hit a semicolon so let’s kind of walk through this the word console a period on your keyboard the word log log and then an opening and closing parentheses inside of there i want to put an opening single quote mark and a closing single quote mark and then some word i put hi you could put your first name it it really doesn’t matter but what does matter to me at least is that you end it with a semicolon and as you’re going to come to learn writing code is an exercise in precision if you don’t write exactly what i write there’s a chance that you will not get the results that i get and so you want to double check and make sure there’s not extra spaces you want to double check to make sure that you’re using the right characters like this is not a comma it is the period on the keyboard all right this is not a curly brace it is a parenthesis this is not a double quote although that would be acceptable in this particular situation i would prefer if you use the single quote which is on the same key you just have to hold down the shift key all right to get to it all right so now i’m going to use control s on my keyboard to save or it might be command s if you’re on the mac or something else on linux i don’t know whatever you you use or you can just go file save all right now watch what happens when i just use the space bar on the keyboard did you notice see that little symbol there it went from x to a circle that means that file has not been saved yet that change that i made is not saved so here again i’m going to use the keyboard shortcut to save it then i’ll come back down here into the terminal now how can i do this easily well on windows the keyboard shortcut is control and then the back tick 
that’s usually next to the number one kind of to the left of it on most keyboards so the back tick will close and open up that little terminal window at the bottom and now i can type in node space and then i want to use the name of this file so app dot js and hit enter on my keyboard and it should print out that word hi that i have inside of those two single quote marks in console.log all right now we can also shorten this up node space app we don’t have to use the file extension and it will work as well all right so assuming that you were able to follow along and you got to this step then you’re ready to move forward and we’re ready to get started actually writing some javascript let’s start that process in the next video we’ll see you there thanks so our job as software developers is to author code which is using a language that’s human readable and author in such a way that can be understood and parsed and interpreted and ultimately then executed by a computer and the code that we write we save into files and we ask we ask some execution environment whether it be a web browser or in this case node to to take a look at this this code that we wrote in this file and to interpret it and to execute it all right and so it’s important first of all as we get started understand that how our code is going to be used we’re working and learning the javascript language but ultimately the code that we write will be executed in let’s be honest one of two maybe even a third environment we’re either going to write javascript code that will ultimately be executed in node and typically when we’re writing code for node-based applications we’re writing applications that we can access the file system access the network respond to http requests and provide an http response things that are more server-side in nature all right and then we’ll also then write javascript code that will execute in the context of a web browser and we would expect for that code to be able to dynamically interact with um with elements html elements on a given web page all right but we might also use javascript to uh to write video games in an environment like unity for example and be able to author and control the various objects on screen and their animation and and their interaction and so on so there’s what i’m trying to get at here is that there’s a difference between the language itself and then the environment that it runs inside of and we need to be aware of that that those are two separate things even though sometimes they feel like one thing in this case console for example the console.log function is provided to us by node it allows us to tell node that we want to print something to the command line like we did just a moment ago now there’s also a console.log function in most web browsers and it allows us to print little debugging messages or console messages that can only be viewed inside of a web browser whenever we have the developer tools open and we’ll see how to do that much later in this course once we start building uh web pages uh and and javascript that can interact with them but at any rate let’s get back to the matter at hand here if i write my javascript incorrectly then the run time what whether it’s node or a web browser will won’t be able to compile it and it’ll give us an error and so javascript is similar to english in so much that javascript has a syntax and it has a proper syntax versus a syntax that’s incorrect so if you’ve ever taken an english english class you’ll know that there are parts of speech 
that that you’re supposed to use punctuation at the end of a sentence to indicate the end of a complete thought there are nouns and verbs and adjectives and adverbs and and propositions and all these sorts of things right and so you know in general terms the same thing is true with javascript there are parts of speech we’ll talk about those and so you will be learning a new language starting with your abcs and and with with uh i guess uh vocabulary words so to speak and then to move on to authoring sentences that are complete thoughts and then stringing those sentences together into paragraphs in order to accomplish some higher level task and even kind of arranging those paragraphs together to create entire applications all right so hopefully that analogy will serve you well as we get started here our goal as we get started is to author javascript statements and a statement is basically just one complete instruction it’s like a sentence in english and each javascript file that we create like this app.js it’ll it’ll contain one or more javascript statements that will execute in sequential order from top to bottom at least usually and i’ll talk about the exception of that as we get further into this course but there are some other similarities between javascript and english for example there’s an end of line character i was very very specific about adding that semicolon at the end of of our statement and that just is an indication to the compiler that this is a complete thought and it should be carried out as is all right um now we also see that we have our statement all on one line of code and generally speaking as we’re getting started we’re going to write our javascript statements one per line now we’ll bend those rules as some of our statements become very long we can actually for readability’s sake from a human perspective we can split things up onto multiple lines if we need to javascript specifically node doesn’t really care about that that’s really more for human readability it can deal with multiple lines or a single line for a given statement but be that as it may we’re going to try to strive at the beginning to write one statement per line in our files and a statement usually consists of one or more expressions so uh we’ll talk about expressions a little bit later but this particular expression is essentially just executing a function that’s built into node it’s the log function it belongs to an object called console we’ll talk about objects and functions a little bit later here and we execute it by using operators those in this particular case this is the function invocation operator or the method invocation operator it’s those open and closing parentheses and we can even then pass in what are called arguments to those functions so you can see that each little piece of this has a name and it has a role to play in creating our functions and we’ll learn more about that as we move on here one thing to note is that javascript is case sensitive and this trips up a lot of people to begin with that’s why i was very specific to say hey don’t accidentally or mindlessly use capital letters make sure everything is lowercase let’s see what happens if i were to save this work that i did here with the capital c in console and the capital l and log let’s go node space app and hit enter and we get a reference error console is not defined it’s not defined inside of node console doesn’t exist with a capital c inside of node it exists with a lowercase c inside of node the same thing is true with 
the function name log let’s go ahead and i’ll just use the up arrow on my keyboard that’ll give me the last command that was used in the in the terminal so again node app and i’m going to try to execute this little program again and i get another error this time console.capital l log is not a function that’s true it’s because it’s lowercase l and log and i’ll save that change and then we’ll re-execute this and it will work now there are some things that especially when your application is small don’t matter so you might have accidentally left off that semicolon at the end and the application still runs but that’s a bad practice to rely on that you should always try to create properly formed sentences even though you could write an english sentence or a text message that somebody could understand that has no punctuation has no capital letters and things of that nature that would make it a well-formed english sentence and you’re you’re relying on your the person receiving that text message to understand what you’re trying to say the computer doesn’t work that way it means it needs to know exactly what you’re saying and so you have to be precise precision is the key as a software developer all right so what i want to do here as we kind of start wrapping things up for this first first foray into javascript i’m going to comment out this line and add some new code below it and use that as kind of the next step beyond where we’re at right now so to uh to tell the compiler to ignore line of code i’m going to add a code comment and here i use two forward slashes i added also a space but that was really just for readability’s sake so that myself as a human i can kind of make an easy distinction because sometimes all these characters run together i like adding a space between this but these two characters say forget everything on this line of code don’t compile it don’t try to use it all right and we’ll see in a moment that there’s another way to create code comments as well but here let’s create something a little bit more interesting i’m going to say let x equal 7 i’m going to say let y equal 3 let z equal x plus y and then we’ll do console.log and then i’m going to use open and close parentheses i’m going to use a single quote i’m going to type in the word answer colon space i’m going to go outside of that quote so it’s i’m going to go between the closing single quote and the closing parenthesis and i’m going to make some space for myself in there i’m going to use the plus key or the plus operator and the letter z i’m going to go to the end of the line and use the end of line character the semicolon i’m gonna save it all now before we actually execute this what do you think this will do what do you think will be printed to our console window do you have any guesses i’m betting that your background in math or algebra probably will lead you to the correct answer and i think that your intuition in many cases is something that’s important as you’re learning javascript it is human readable it should be somewhat understandable it might require a little extra explanation because there’s some things that are not extremely obvious but for the most part this shouldn’t blow you away and nothing we cover should ever blow you away it just might require a little extra effort than you’re normally used to putting into things but by no means impossible right so just take some comfort that this is well traveled ground and that if i can understand it i promise you can too let’s run the application see that we get 
the the the correct result which is answer colon space and the number 10. so how do we get that well we have something here let x and even though again you’re not a javascript developer you know or an advanced javascript developer just yet i’m willing to bet that you understood that we were creating a variable essentially uh a a bucket that could contain a value and immediately we set that variable equal to the value seven and then we did the same thing with the value of 3 we put that into a different variable a different bucket called y and then here we have an expression an expression that will add two values together what are the values inside of those variables x and y well we just assigned them in lines three and four and we know that that probably means we’re going to add those together to get the result of 10 and we assign that into a new bucket a new variable named z and then we merely print out that literal string but then we also say also append the letter or the value that’s in z now hopefully that made sense to you even before we ran the application but you can see here that for example the plus symbol has has double duty it’s it’s serving to be the addition operator but it’s also serving to concatenate two values together in this case to string values together so that we can print it out to screen so we’re going to use this kind of as a starting point and talk about this at more length in the next and subsequent videos but hopefully up to this point you get some comfort level you’re writing some code you’re getting your hands dirty in the code and you know i know you can do this so just keep pushing forward and let’s pick it up in the next video we’ll see you there thanks in this video i want to continue talking about line number three so that we completely come to a full understanding of what variables are in javascript so i’m going to add a new file and i’m going to do that by hovering over the source tab of the explorer and i want to type in variables.js like so and then i’m just going to copy in the code that we had here we’ll use this as a starting point all right so let’s focus in on line number one let’s just first of all let’s make sure this still runs and let’s go node and this time we’re going to give the new file name variables and we get the same result as before great so what is a variable i think i said at the very end of that previous lesson is that a variable is basically just a a an area in the computer’s memory where we’re storing a value we’re requesting or declaring our need for a new variable a space in the computer’s memory where we can put information and retrieve information and then we can from that point on continue to use that variable to to store different values and retrieve those values back out throughout the the life span of the application so there are actually several different parts to the variable declaration statement in line number one the first is the let keyword uh and let’s start start talking about the parts of speech in javascript a keyword is something like let and we’ll see some other examples a little bit later but essentially think of it like a verb in the english language it’s a it’s an instruction to the javascript compiler that we want to do something that we want to take action so we want to create a variable with the name of x and we’re expressing that intent to javascript using the let keyword all right so that’s the first part of it and then the second part is the name of the variable that we want so we’re requesting that a 
area of storage uh a unit of storage is assigned to our application that where we can put things but how do we reference that again it needs a name so that we can get the values and put new values in memory all right and so that’s usually called an identifier we want to declare a new variable with the identifier of x and we’re going to talk about naming our identifiers naming our variables there’s some rules and some conventions that we need to follow as developers all right we’ll come back to that at the very end of this lesson now before we get too far there’s actually a couple of different ways to to declare a variable in javascript the original keyword that you’ll see used and used in 99 of all tutorials and articles and books and videos is the var keyword and until recently this was the only way that you could declare a variable in the latest version of javascript however the recommendation is to abandon var unless you really need it use the let keyword instead or the const keyword which we’ll talk about in just a moment if we were to save our application using the var keyword in line number one and then rerun it nothing would change so what’s the problem with var there are some well i guess there’s there’s two ways to to kind of explain it at this point the first is that its usage is very nuanced it does stuff that somebody new to javascript may not anticipate the ramifications of until it’s too late and there are problems in code we’ll talk about the var keyword and how it relates to scope and so on uh in an upcoming video but we need to introduce some more concepts before we can get to the point where that discussion is even interesting okay so it’s usage is nuanced and the ramifications can be uh pretty challenging uh if you’re just getting started so that’s why the people who decided what goes into javascript said why don’t we introduce a new keyword called let it will work like most other programming languages as you try to learn javascript hopefully it won’t be problematic so that’s why we have the let keyword the other uh the other keyword for declaring a variable is const and we use that whenever we want to express our intent to the javascript compiler that we do not intend for that variable to ever change its value so what we initialize the value to in this case to seven we wouldn’t expect that to change throughout the lifetime of the application and if we try to change it like in the very next line of code we can attempt to set it equal to six i’ll save that let’s go over and try to run that code we’re going to get an error and it actually is pretty helpful it gives a little a little carrot right underneath the equal sign and it says assignment to constant variable that’s the problem and and the issue here is that we’ve said to javascript we never want to change that value and then the very next line of code we say yeah i’m going to assign it a new value and set it equal to six and it says can’t do that okay so for the most part we’re going to use the let keyword most of the time because that’s the recommendation now in as we learn javascript all right so uh just want to point out that we can uncomment out line number two as we assign the value of x to different values and we can keep doing this as many times we want to so at this point in line number one we’ve declared the variable set it to the seven then we’ve assigned the value of six then five then four we can keep changing the value in the computer’s memory uh and what is the value in line number six what’s x’s value 
Now, just to point out, we can uncomment line two and keep assigning x different values as many times as we want. So at this point, on line one we've declared the variable and set it to seven, then we've assigned the value six, then five, then four — we can keep changing the value in the computer's memory. So what is x's value on line six? Well, the last time we assigned a value to it was four, so when we run the application it gives us seven, because three plus four equals seven — that's what we get on line seven. I guess this should be obvious by now: the equal sign here is what's called an assignment operator. It's how we assign a value into a variable, and we can keep assigning values as many times as we want, but we can only declare our variable one time. If I were to come down here and say let x equal seven again, or let it equal eight, I'd get an error whenever I try to run the application: the identifier x has already been declared. Again: you can only declare a variable once, but you can assign its value as many times as you want after that. On line one we're not only declaring the variable, we're also assigning its value right off the bat in the same line of code, and when we do that it's a technique called initialization — it's really two lines of code rolled up into one. Lines one and two are now roughly equivalent to what we had before. Well, roughly — there is one difference. At the end of the execution of line one, what is the value of x? Let's find out: console.log the value of x at that point, run the application, and you can see the first value output — what we now get on line eleven — is the term undefined. We'll explain what undefined means in more detail a little later, but essentially it's what it sounds like: we've declared a variable but we've not defined it, we've not put a value into it, so it's undefined. That's generally not something we want. It might be something we need in some cases, but for the most part it's preferable that at the moment of declaration you also initialize your variables if you can. All right, now let's finish up and talk about the rules for naming our variables. The variable name itself — I've already referred to it as an identifier — has rules, and then there are code conventions, which are not enforced by the JavaScript compiler but are best practices as determined by the community of software developers who've come before you. Let's start with the hard and fast rules that will actually break your application. Rule number one: all identifiers — all variable names — have to begin with either a letter, a dollar sign, or an underscore. Rule number two: variable names can contain letters, numbers, dollar signs, or underscores, but no other special characters, and you can't use a space between two words that you intend to be considered together as one identifier — identifiers can't have any spaces. Rule number three: you can't use any keywords. I can't write let let equal eight; if we try that we get a strange error — "let is disallowed as a lexically bound name" — and if we scroll a tiny bit it even puts the carets right underneath the second let, because we're trying to use it as an identifier but it's already a keyword.
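Here's a quick sketch of those naming rules in action — the specific names are hypothetical examples I'm using for illustration, not identifiers from the lesson's file:

```javascript
// Legal identifiers: start with a letter, $ or _, then letters, digits, $ or _.
let firstName = 'Ada';    // starts with a letter — fine
let _total = 0;           // starts with an underscore — fine
let $price = 9.99;        // starts with a dollar sign — fine
let item2 = 'book';       // digits are allowed after the first character

// Each of these would be a SyntaxError if uncommented:
// let 2item = 'book';    // can't start with a digit
// let zip code = 60459;  // no spaces inside an identifier
// let let = 8;           // 'let' is a keyword
```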
Oh, and there's one other rule: variable names — identifiers — are case sensitive. So we could use both an uppercase X and a lowercase x and it would be a perfectly acceptable application; they're two different variables. And if we try it... it doesn't blow up. We were able to use uppercase X and assign it 8 even though we never declared that variable — something fishy is going on there, and we'll get to the bottom of it before the end of this course — but the key is that it's not the same x we were working with before. Let's get rid of that. So those are the rules: the name has to begin with a letter, a dollar sign, or an underscore; the rest of the name can contain pretty much anything including numbers, but no spaces or other special characters; you can't use keywords as variable names; and be aware that variable names are case sensitive. Now for the code conventions, which again are just good practices. The first is that variable names should be descriptive, and unfortunately x, y, and z are not very descriptive names. Ideally we'd use something like let firstNumber equal seven and let secondNumber equal three, and then use those on line twelve instead. Even better are names like firstName or zipCode, if that's the information we want to represent. Use names that represent the thing you're trying to store — from an application perspective, what meaning does this variable have inside our application? Meaningful variable names. The second convention is camel casing: if you're going to use multiple words, the first word of your variable name should be lowercase — the f in first is always lowercase — but any subsequent words that we append should start with a capital letter. You can see I followed this convention every single time in lines 15 through 18: lowercase z in zip, capital C in Code. The third is to be consistent: always follow the same naming convention, and not just for variable names but for every other type of identifier we wind up creating in our application. Pick one style and stay with it for the remainder of the application. And the last one is to not rely on case. We've already seen the danger of that, but what if I intentionally wrote let ZipCode equals 60459? While that's grammatically correct from JavaScript's perspective, and those really are two separate variables on lines 18 and 20, we've introduced some subtle dissonance into the application: it's now much harder to see that these are two different variables. Maybe I intended to do that, but it's poor programming practice; we'd choose better names, like firstZipCode and secondZipCode — that would be a better way to go about the same sort of thing. So those are the code conventions and the naming rules for variables.
And that's just about everything you need to know about variables — just about. There's actually a little bit more, and we'll finish this discussion in the next video when we talk about the values we're actually assigning into variables and their data types. We'll talk about that next — see you there. Thanks.

In this video we're going to talk about the values we store in variables and the data types of those values, and why they're important. To begin, let's create a new file called datatypes.js; this is where we'll do all of our work. One of the things that makes JavaScript unique compared to other programming languages is that when you declare a variable, like let x equal 7, the variable itself does not have a data type — only the values we store inside the variables have a data type. We can see this when working with variables by using something called the typeof operator, which tells us the data type we're working with. So, with let x equal seven, let's do console.log and then typeof — all one word, lowercase — and then x. Save it, and in the terminal I'll type node datatypes, and you'll see it outputs "number". That's one of the first data types: the value in x is a number. A data type is really just the kind of data you want to store. If you want to perform math or some algebraic operation, you want a number. If you want a yes-or-no, true-or-false evaluation, you want a boolean. And if you want to display something on screen, you'll want a string — basically shorthand for a string of characters — which you usually represent with single quotes around whatever characters you want. A few examples: we've already looked at number; let y equal true, then console.log typeof y; and I'll do z as well — let z equal 'hello world', then console.log typeof z (not typeof applied to the wrong thing — I want typeof z). Run it, and we get the three data types we're currently working with: a number, a boolean, and a string. A number can be any positive or negative number, and it can even have decimal values. A boolean can only be true or false — those are the only two values. And a string is anything inside the single quotes, a literal string of characters: I literally want h-e-l-l-o, space, w-o-r-l-d. So those are three of your seven basic data types. There's also another case.
Let's declare a variable a, then console.log the value of a and console.log typeof a. And just to remove any confusion, I'm going to use a multi-line comment so I don't have to comment out every line separately: I put a slash-star at the top, go down to the bottom of what I want to comment out, and put a star-slash, and everything that turns green is now commented out, just as if I had commented out each individual line. So here I'm just creating a variable a, but I'm not initializing it to a value. Do you remember what it output when we did this before? It output the value undefined. But we also want to see the type, because we said it's the value assigned to the variable that has a type — so what is the type of a variable with nothing assigned to it? That's what we're getting to the bottom of right now: the value is undefined, and the type is undefined. So now we have four types — number, boolean, string, and undefined — and there are two or three others we'll look at before the very end; they're a little more complex, but those four are what we have to work with to start. That's all I really wanted to say here. The next thing we'll talk about, very quickly, is how to convert one type into another: how do I force JavaScript to treat a string — say console.log of the literal string '9' — like the number nine? We'll talk about that in the next video. See you there. Thanks.
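Here's a minimal sketch of roughly what datatypes.js looks like by this point — the exact values are from the lesson, though the output comments assume you run it with node:

```javascript
// Only values carry a data type in JavaScript; typeof reports the type of the value.
let x = 7;
let y = true;
let z = 'hello world';
let a;                     // declared but never initialized

console.log(typeof x);     // number
console.log(typeof y);     // boolean
console.log(typeof z);     // string
console.log(a, typeof a);  // undefined undefined
```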
In the previous video we learned that values, not variables, have a data type, and that a data type is essentially a description of what you want to do with the data — there's more to it than that, but for our purposes right now it's what we intend to do with the data. We learned four data types, with a couple more to come later: number, string, boolean, and undefined. So let me ask you this: what happens when we need to use them together and they don't quite work the way we think they should? What options do we have? Let's create a new file — I'll call it coercion.js, c-o-e-r-c-i-o-n, I think that's how you spell it — and start with a quick little example. Let a equal seven; let b equal the literal string '6', in single quotes; let c equal a plus b; and then console.log 'answer' and c. Before we execute this, what do you think will be output? Get that in your mind, and now let's run node coercion... looks like we don't get anything at all — oh, I need to save it. There we go: we get the answer 76. Wait, 7 plus 6 should be 13, right? Why are we getting 76? I can see what's happening: it's not treating these as two numeric values, it's treating them both as strings. It's not adding two numbers together — it's coercing a from a number into a string and then concatenating a and b. This operator, the plus operator: we saw how to use it for addition, but it plays double duty as the string concatenation operator too. Moreover, JavaScript realizes it can't add a number and a string — that's like adding an apple and a car together; it's not even apples and oranges, it's not like fruit salad, they're two disparately different things. So what does it do? It takes the numeric value and coerces it — convinces it, forces it against its will — to become a string, and then concatenates the two together. That's the notion of coercion. Some people consider it an evil or very dangerous thing; others just say it's part of the language. Now, what if I really wanted to perform addition on the two values? Then I need to take steps to force the string '6' to become a number so I can add them, and there's a special function that will force that conversion. Let me change this a little: we already have b, so I'll reuse it and set b equal to parseInt. Notice something here — I haven't talked much about Visual Studio Code, but one of its nice features is this little box called IntelliSense, a visual cue as I type that shows me things I might need to reference and helps me find the right command. I knew it was something "parse", so I start typing and use the arrow keys to look: there's parseFloat, which would give me a number with decimal values, but the string I want to convert won't have a decimal point, so I want parseInt. Now I can use the Tab key (or the next logical character, like an opening parenthesis) to do what's called code completion, so I don't have to type the rest. Inside the parentheses I first pass the string I want to convert — the value of b — and then, optionally, what's called a radix, which is essentially the base system: if I wanted hexadecimal I'd give it 16, but here I'll give it 10 because I want a base-10, decimal conversion. That's a little technical, but if we use 10 there we're typically going to be just fine. So essentially: take this '6' and, based on the normal decimal system, convert it into a numeric value; then continue on in lines four and five like we had before. Let's see what we get this time: the answer is 13, just as we had hoped.
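Here's a short sketch of the coercion example as described — the variable names and values come from the lesson, and the comments note what node prints when I run something like this:

```javascript
// When a number meets a string, the number is coerced to a string and concatenated.
let a = 7;
let b = '6';

console.log('answer', a + b);   // answer 76

// parseInt forces the string into a number; 10 is the decimal radix (16 would be hexadecimal).
b = parseInt(b, 10);
let c = a + b;
console.log('answer', c);       // answer 13
```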
So parseInt is a built-in JavaScript function, and I can count on it being available in Node, in a web browser, or in any other implementation of JavaScript. I guess this begs the question: what if I try to do something kind of evil with it? Let d equal parseInt — I'll use the Tab key for code completion again — and this time I'll pass in a character, or a string, that will not convert into a numeric value, certainly not a decimal one. Let's console.log d and run it... and I get NaN, which represents "not a number". It's not really an error; it's just telling us that the value we passed in is not a number. We can also do something along these lines: let e equal isNaN, give it d, and console.log e. Save and run it again, and this time we're evaluating whether d is not a number — and that's true, it is not a number, as we can see printed out. So we've seen two built-in functions, but there are a bunch of built-in functions for various things, in this case centered around coercion and checking the results of an attempt to coerce or convert one data type into another. Unfortunately there's no parseBoolean, so you can't directly take a string of 'true' or 'false' and convert it into a boolean — you'll have to take a few extra steps, and there are plenty of examples online. Depending on the type of conversion you're attempting, it may not be easy to convert from one type to the other, but there's always a way, and usually you can find some code online, especially on a site like Stack Overflow, to help you figure it out.
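A quick sketch of those two built-ins — the 'z' input and the toLowerCase workaround at the end are my own illustrative assumptions, not something shown in the lesson:

```javascript
// A parse that can't produce a number gives back NaN.
let d = parseInt('z', 10);   // 'z' has no numeric meaning in base 10
console.log(d);              // NaN

let e = isNaN(d);            // true — d is "not a number"
console.log(e);

// There's no parseBoolean; one common workaround is a string comparison:
let flag = 'TRUE'.toLowerCase() === 'true';
console.log(flag);           // true
```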
That's all I wanted to say about coercion — let's continue on. In the next video, see you there. Thanks.

In this video I want to refocus on JavaScript syntax specifically — the various parts of speech inside a properly formed JavaScript statement. I started explaining JavaScript by saying that you write statements, each of which is executed sequentially, and that statements are complete thoughts, complete instructions to the JavaScript compiler about what we want it to do for us. I also said that statements are made up of one or more expressions, and that an expression is made up of operators and operands. I made that statement in passing and blew past it pretty quickly, but I want to take a few moments to explain why it's important whenever we set out to write code. We've already looked at a couple of different operators; if we think about the most atomic level of our JavaScript statements, we think in terms of operators and operands. Operators are things like keywords and symbols: we've looked at the addition operator using the plus symbol, the string concatenation operator using that same plus symbol — it does double duty, understood from the context of how it's used — and the assignment operator, the equal sign. Soon we'll look at a few other common ones to build out a list of operators we can use to do more interesting things in our application. Then there are the operands. Operands are things like identifiers — a variable name — and we'll learn about functions soon, which are another type of operand. Unlike keywords and operators, which are fixed and part of the language, we programmers give operands their names. By combining operators and operands we create expressions, which are then used to compose statements. Sometimes it's easy to spot an expression and sometimes it's not, but by identifying a few major categories of expressions we can better understand why JavaScript works sometimes and why it doesn't work other times. For example, in English we cannot write a proper sentence like "The dog." If we said "the dog", our friend would say: what are you talking about? The dog did what? Which dog? Give me some more information. Why isn't that a proper sentence in English? Because it doesn't have enough in it: we have a noun, the dog, but no verb, adjectives, or adverbs giving us more detail. The same thing is true in JavaScript. Let me create a quick file called expressions.js. We can't just write a bare "a" in our program, because the JavaScript compiler will say: what do you want me to do with that? It makes no sense to me; it's not one of my variables, you're not asking me to create a new variable — "a" means nothing to me. So at a minimum we need one of the following types of expressions, at a very high level. First, we can declare a variable: let a. Even in that tiny two-word line of code there's already an operator and an operand — the operator is the let keyword, and the operand is a, the name we want to give to a new variable that will be created in memory. So that's one type of expression; in the file I'll use some comments to keep a list called "types of expressions", number one: variable declaration. (I'll move the bad bare "a" example up to the very top and label it as bad — I kind of like doing a little ASCII art whenever I create lists inside my code.) The second type of expression is to assign a value: a equals three, or four. And another type of expression is to perform an evaluation that returns a single value — purely as an expression, that might be something like b plus c. In a more interesting example (I'll comment out the earlier lines because I want to reuse a), on line 16 I'll write let b equal 3, let c equal 2, and then let a equal b plus c. I want to focus on line 19 and point out that there are three expressions in it — can you find them? Number one, let a: that's a variable declaration. Next, we perform an evaluation of b plus c, which adds those two values together because we're using the addition operator. And finally, the result of b plus c is assigned to a. Three expressions, all combined into a single statement.
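Here's that single statement from the lesson, annotated expression by expression as a small sketch:

```javascript
// One statement, three expressions.
let b = 3;
let c = 2;
let a = b + c;   // 1) 'let a'  — a variable declaration
                 // 2) 'b + c'  — an evaluation that produces a single value, 5
                 // 3) '='      — assignment of that value into a
console.log(a);  // 5
```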
There's a lot more going on than meets the eye, but that's the kind of thinking that will help you understand why your JavaScript code works sometimes and sometimes it doesn't: you have to think in terms of writing expressions that do things in order to form properly formed JavaScript statements. Hopefully that little lesson in syntax was helpful.

Now let's talk about operators and the different types of operators. We've used a collection of five or six operators so far; let's add to that collection. I'll create a new file called operators.js. There are several categories of operators, and I'll go through them quickly. There's assignment, the equal sign — really the only one in this category, but a pretty important one that we've seen used quite a bit; there are maybe some other keywords that sort of fall into this category, but the assignment operator is usually the only one. There's arithmetic, which, as you might suppose, allows you to do mathematical-style operations: the plus for adding two numbers together, subtraction, multiplication (the asterisk key above the eight on most keyboards), and division. Then there are some special ones — they're kind of arithmetic, but I'll call them increment and decrement: the plus-plus and the minus-minus. Out of context these don't seem very interesting, but here's an example: var a equals one, then a plus-plus, then console.log a. Save it, go over to the terminal, run node operators, and you can see that we incremented the value of a. Now let's increment it one more time, save, and run it again — and wait a second, the value is still two. How is that possible? Let's console.log a again so we print the value out twice. I thought we'd get three, but we didn't — yet when we print it a second time, we do get three. The reason is that this increment operator works after the value is already utilized within that line of code. Basically: hey, console.log, here's a; and after you print it to the screen, then add one to it. That's why we see the new value only when we print it a second time. What we may have preferred is to put the plus-plus before the a — console.log of plus-plus a — which means: first evaluate the increment of a, and then print it. Save, rerun, and now we see three in both cases. The same is true with decrement: we can subtract either before or after the evaluation of that variable. Just something to keep in mind. So that's increment and decrement.
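Here's a compact sketch of that postfix-versus-prefix behavior — the variable names are just placeholders:

```javascript
// Postfix vs prefix increment.
var a = 1;
a++;                 // a is now 2

console.log(a++);    // prints 2, then increments — postfix uses the value first
console.log(a);      // 3

var b = 1;
console.log(++b);    // prints 2 — prefix increments before the value is used
```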
I'll comment all of that out — keep it around for posterity, but otherwise it's not what I want to see. Going back to arithmetic, there's also the modulus, which gives me the remainder. Let's go var m (for modulus) equals 10 modulus 3 — not 10 divided by 3, that's not what I want — and then console.log m. What will I get back from this statement? I get one. What is one? It's the remainder: 10 divided by 3 equals 3 with 1 left over, and that 1 is the modulus. This actually becomes a lot more interesting and important when we're looping through lots of values and, say, every 10th or 20th or 100th item we want to print a little message to the screen to say we've finished processing the 10th, 20th, 30th item. I use that frequently, so I'm a pretty big fan of modulus. Let's comment that out. Moving on to the other categories of operators: there are the string operators, which we've already seen — the literal string operator using single quotes, and the string concatenation operator that takes two strings and appends them together into one new string. There's also precedence — order of operations — which we use quite a bit, even in non-mathematical situations. For example, var b equals 1 plus 2 times 3. If you're coming from an algebra background, there's an order of operations where things are done in a certain order, and — it's been a long time since I've had an algebra course, but if memory serves — you perform multiplication before you perform addition. So if I console.log here, I'd expect b to be two times three plus one, which would be seven. Let's see if my memory serves me correctly — and yes, it does. But what if that's not what I want? Just like in algebra, I can use parentheses to control the order in which things are evaluated: in this case 1 plus 2 happens first, and then that's multiplied by 3, which gives a completely different result of nine, because three times three equals nine. We'll use the opening and closing parentheses for different purposes, too. For example, whenever we do console.log, those parentheses are also used as the function invocation operators: there's a function named log, and — we'll learn about functions soon — I want to actually invoke the function now. I can even use the function invocation operators, the open and close parentheses, to pass in arguments; we'll talk about that a little later. There are other operators I'll just mention here; they may not make a lot of sense at the moment, but they will soon when we look at decision statements: the logical AND and the logical OR, for when we want to evaluate two things together and either one of them or both of them need to be true. There's also the member accessor operator: when we typed console dot log, why is there a period there? That dot allows me to access the various members of an object — we'll talk about objects, properties, and functions or methods of objects soon — but that's what lets me access the log function of the console object inside JavaScript. We'll use the period for that purpose. We're also going to look at the code block operator soon.
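Here's a small sketch of the precedence and modulus examples from this part of operators.js:

```javascript
// Multiplication binds tighter than addition.
var b = 1 + 2 * 3;
console.log(b);        // 7 — the 2 * 3 happens first

var c = (1 + 2) * 3;
console.log(c);        // 9 — parentheses force the addition to happen first

// Modulus: the remainder after division.
var m = 10 % 3;
console.log(m);        // 1 — 10 divided by 3 is 3 with 1 left over
```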
I know I keep going through these and saying "we'll look at this soon" — really, the point of this exercise is to say that there are lots of operators, and we're going to have to learn to identify what all these special characters are. The only way to do that is to learn that they exist, learn what their function is, and then use them as we write programs. Let me put one more in here: the array element access operator — it goes by different names, but that's what I'll call it — which uses square brackets. Almost every special character on the keyboard — the ones above the numbers that we reach with the Shift key, and the braces, brackets, colons, and semicolons over on the right-hand side — is used for some purpose in JavaScript, and in most programming languages. That's all I really want to say here. Let's pick back up in the next video. You're doing great — hang in there with me. We're getting through some of the easy stuff and we'll start moving on to some challenging stuff really quickly, but you're doing great. See you in the next video. Thanks.

Up until now, each variable we create can store one value at a time. But what if we need to work with lists of data? In other words, I need to keep track of several people or several numbers, and I need to store them in such a way that it doesn't matter whether I have 2 or 10 or 100 — I can keep them together, move them around, and use them in my application as a list, a grouping of related values. In that case I want to create an array. Let's start by creating a file called arrays.js. An array is basically a variable that can hold many different values, and we can declare a variable and initialize its value like so: let a equals, and here we use an opening and closing square bracket, and then a series of values, each separated by a comma — 4, 8, let's say 15, 16, 23, and 42. So now I have an array of those values. Those are numeric values; what if I wanted an array of string values? I can do something similar — in fact I can use any data type that's allowable in JavaScript inside here, and we'll see examples of that a little later — but I might want 'david', 'eddie', 'alex', and 'michael'. And what if I want to get one of those values? I can just do console.log, and inside of it use the variable name — in this case a — and then provide an index to retrieve one of the elements. Each of these values is an element of the array, and I use an index, a numeric value, to get at one of those elements. The indexes are zero-based, which trips up beginners sometimes: for example, to get at the number four, the first element in the array, I use the index zero; to get the second element of the array, I use the index one; and so on. So to access an element I write a, then square brackets right next to the a, and then an index. Here we'll grab the first value, then the second value, and then I'll show you how console.log will nicely print out all of the values for you if you just give it the name of the array itself — the variable itself.
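Here's a minimal sketch of arrays.js so far — the values match the ones read out in the lesson:

```javascript
// Declaring arrays and reading elements by zero-based index.
let a = [4, 8, 15, 16, 23, 42];
let b = ['david', 'eddie', 'alex', 'michael'];

console.log(a[0]);   // 4 — index 0 is the first element
console.log(a[1]);   // 8 — index 1 is the second element
console.log(b);      // prints the whole array of names
```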
So let's save our work and run node arrays. You can see that the first element of the array, at index 0, gives us the value 4; the second element, at index 1, gives us the value 8 — hopefully you can see the correlation there. Or, if I just want to print out all the values in the array, I can provide the variable name that contains the array and it will print them all for me, much like what I typed when I initialized the array variable. Let's comment that out. That's how we access individual elements; what if I want to change or set the value of one of them? The same syntax works: I can say a[0] and set it equal to 7, then console.log, and when we run the application the first element of the array has been changed from 4 to 7, because that's how we access a single element and assign it a new value. Now, what about these mysterious arrays — what is their data type? Let's do console.log typeof a, and we can see it's of type object. We'll talk about the object data type later, because there's a lot more to it than just being able to create arrays and it's a little more advanced at this point; just keep in mind that an array isn't a data type of its own, it's a kind of object, and we'll talk about objects later. The other thing that's important to remember is that an array can include elements of different data types. Let me do let c equals an array starting with 4, then 'alex', then true — three different data types right there — and then console.log c and run node arrays. You can see that a single array can hold different data types; there's no restriction there. Let's comment that out. What happens if I try to access an element with an index that doesn't exist? Let's do console.log — I happen to know that b has four elements in it, four names — and let me try to access a fifth element by using the index four. This will be undefined: just like a variable without any value assigned to it is undefined, so is an element of an array that we never gave a value. I can also programmatically determine the number of elements in an array by using a special property called length: console.log a dot — remember, the dot is the member access operator; arrays are objects, and this particular object has a special property called length — which gives me the number of elements in that array. I'd expect to see, let's count, one two three four five six. So the question is: will it give us six, the actual number, or will it give it to us in a zero-based fashion? The answer is that it gives us the actual number, not a zero-based one, and this will come into play a little later when we use the length of an array to iterate through each element and print it to screen, when we learn about looping. Keep that in the back of your mind.
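A quick sketch pulling those facts together — the out-of-range index I use here (100) is just an illustration:

```javascript
let a = [4, 8, 15, 16, 23, 42];

a[0] = 7;                 // elements can be reassigned just like variables
console.log(a[0]);        // 7

console.log(typeof a);    // object — arrays are a kind of object
console.log(a[100]);      // undefined — that element was never given a value
console.log(a.length);    // 6 — the actual count, not zero-based
```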
Now, there are a lot of strange things you can do with arrays, and some of them are not always intuitive. For example, what if I randomly create a new element using the index 10, which would add an 11th element to the array? What happens to all the elements between where we left off — indexes 0 through 5 — and that new one? Let's assign a[10] the value 77, then console.log a and console.log a.length, and see what happens. It prints out 4, 8, 15, 16, 23, 42, then says there are four empty items, then 77, and it says there are 11 elements in this array, because we filled index 10 with a value. It has essentially created what's called a sparse array, which means there are empty elements inside. This isn't usually the way you want to add new values to an array, because it's not as safe — we're inadvertently creating elements with nothing in them. There's a safer way to go about it, using some additional built-in functions of the array. If I want to add a value to the end of the array, no matter how many elements are currently in it, I can use the push method: hey, push the number 77 onto the end of the existing array. And if I want to remove it, I can use a method called pop, which removes the last element of the array — in fact I can call it several times to keep removing elements — and then we'll print out the end result just like before. This should put some fireworks into our terminal window: using push on line 29 I added 77 to the end of the existing array, giving me seven total elements; then I called pop three times, which removed 77, 42, and 23, leaving just four elements in the array. We're going to keep using arrays — they're a great way to keep lists of things together and accessible — and they'll become even more important as we learn how to loop through them and evaluate each element. We can even use arrays to hold on to other things, like objects and functions, and we'll learn about some advanced use cases later. That's all I have to say about arrays; let's move on and start looking at some things beyond the absolute basics — we'll start talking about functions. See you there. Thanks.

Throughout this course, even from the very first line of code we wrote, we've used the console.log function to print things to our terminal window, and I kept referring to log as a function that is part of the console object. In its simplest form, a function is merely a block of code that we as programmers can name, and once it has a name we can call it by that name. It's just one or more lines of code that we put into a block so we can execute that block over and over again throughout our application. That's a very basic explanation of what a function is, but in JavaScript functions can do so much more — in fact, most of this course will be devoted to working with functions, because frankly they're one of the primary constructs in JavaScript for getting things done. First of all, let's create a new file; I'll call it functiondeclarations.js.
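Here's a sketch of the push/pop part of that demo, assuming the array has been reset to its original six values:

```javascript
// Growing and shrinking an array safely with push and pop.
let a = [4, 8, 15, 16, 23, 42];

a.push(77);              // appends to the end: length is now 7
a.pop();                 // removes 77
a.pop();                 // removes 42
a.pop();                 // removes 23

console.log(a);          // [ 4, 8, 15, 16 ]
console.log(a.length);   // 4
```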
If I have some code I want to reuse throughout my application, I want to add it to a function. So let's create one — I'll walk through and explain the parts of a function in just a moment. Let's create the most basic function I can possibly think of; I'm actually going to paste some code in so we don't have to type it all, but nothing here should be all that revolutionary. I've created a new function called sayHello — notice that I used camel casing to name my identifier, my function name. Then I have three lines of code; notice that they're inside these curly braces and that they're indented, so we see a relationship between the code on the inside and the lines of code on the outside. It represents a container relationship: this code sits inside of, is part of — or rather is the body of — the function we've just declared. This style is called a function declaration. We'll look at other ways to define functions later, and I'll point out why you'd choose one or the other — there are at least two other ways off the top of my head. First, notice that we use a keyword called function; then we give the function some identifier that we come up with, something meaningful, using rules similar to the ones we use for variable names. Then we use an open and close parentheses — you'll see how these are used a little later to define arguments, or input parameters, to our function, but right now it's empty: we don't require the caller to give us any additional information. Then we use the open curly brace and the close curly brace to define the container, the code we want to be the body of this function, and everything inside of it is just whatever JavaScript we want to write, for the most part. So how do we actually use this? We gave this block of code, as defined by the open and close curly braces, a name, so we should be able to call it by its name: sayHello. That gets me most of the way there, but to actually invoke a function we have to use the function invocation operator — the open and close parentheses — and of course we want our end-of-line semicolon. Let's run node functiondeclarations... and we get "hello". Hopefully you weren't expecting something super interesting; we're just printing out three lines with what I'd call a flower box — some dashes to style it up a little. We can also do something interesting by assigning the function to a variable: let a equal sayHello. Do I want to invoke the function here? No — I'll explain why in a little bit; I merely want to get a reference to the function. Then I can write a followed by the invocation operator: this variable is now pointing to that function, and I'm saying, OK, I have a function inside this variable, go invoke it. In fact, let's do it a bunch of times just to make sure we're seeing what we think we should — and we see "hello" three times in a row. Great. Let's comment that out.
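Here's a sketch of that first declaration and the two ways it gets invoked — the exact flower-box text in the body is my guess at what's on screen:

```javascript
// A basic function declaration: a named, reusable block of code.
function sayHello() {
  console.log('--------------');
  console.log('    hello');
  console.log('--------------');
}

sayHello();         // call it by name with the invocation operator ()

let a = sayHello;   // no parentheses: just a reference to the function
a();                // invoking through the variable runs the same body
```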
Up to now this function isn't all that interesting, so let me copy it, comment the original out, and create a new version down here beginning on line 17 — one that allows me to pass in a name, so I can say hello bob, hello steve. We give our sayHello function an argument by giving it a variable name; we can then use that name in the body, and we expect the caller to give us the name it wants us to greet. I'll use some string operators with name and make sure things are spaced nicely, and then I can say hello to bob, say hello to beth, and say hello to mr tibbles, my cat. Run it, and you can see how I'm able to reuse that code but change it up by passing in the name I want to say hello to. Let's comment all this out and talk about one more thing we can do with functions: returning values. The first function we created merely outputs — we're not expecting it to perform some operation and give us a value back. But what if I want a more interesting function that implements a business rule in my system, say an e-commerce system — like calculating the sales tax on a given amount, the subtotal of the items in my shopping cart? I might create a simple function called calculateTax, and I'll let the caller pass in the amount we're charging tax on. Then: let result — just a variable name — be the amount passed in times 0.0825, which is the sales tax rate where I live; and then I use the keyword return, followed by the value I want to return. You can return one value from a function, and here I want to return the amount of tax, so I return result. Now I'll call calculateTax, passing in an amount — say a hundred dollars — and capture what comes back into a variable. I know it's going to return a value to me; I could reuse a name like result, but I'll use something more descriptive: let tax equal calculateTax of 100, and then console.log the amount of tax. Save it, run it, and you can see that for a hundred-dollar purchase it would charge eight dollars and twenty-five cents in tax. That's the purpose of the return keyword: to actually give me something back. This is an expression — a function invocation expression — that gives me back a value I can assign to the new variable tax and then work with, in this case a number representing the amount of tax. That's all I'm going to say about this for now, but there's lots more to say about functions: they're going to consume the majority of this course, and you'll need to become very familiar with the ins and outs of working with them. We'll start that process in the next video. See you there. Thanks.
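Here's a short sketch of both versions from this lesson — the greeting text is approximate, but the 0.0825 rate and the 100-dollar call come straight from the walkthrough:

```javascript
// A function with an input parameter.
function sayHello(name) {
  console.log('hello, ' + name);
}
sayHello('bob');
sayHello('beth');

// A function that returns a value.
function calculateTax(amount) {
  let result = amount * 0.0825;   // 8.25% sales tax
  return result;                  // hand one value back to the caller
}

let tax = calculateTax(100);
console.log(tax);                 // 8.25
```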
In the previous video we learned how to create a function declaration. A function declaration and a variable declaration are similar in that they both have an identifier, a name, because we plan to call them later in our JavaScript. But what if we don't need a name? What if we're in a situation where there's a need for a function, but that function will never get called anywhere else for the rest of the application? Then we can take a different tack: we don't need to add a new identifier, we can just create what's called a function expression. We don't supply a name; we just give it the body of the function and say, here, go do this when you need to run some code. A good use of that is whenever we need some code that should run in the future. Let me start by creating a new file, functionexpressions.js. Here I want to use the setTimeout function that's available in JavaScript, and if we use IntelliSense we can see there are two input parameters. The first is something called a handler, which I happen to know is just a function — I could give it a function declaration, but usually people just create a function expression right here for the handler. The second (I'll use the down arrow to move from the first argument to the second) is a timeout: the number of milliseconds to wait before executing the code — I'll show you why that might be interesting in a few moments. So the first thing to do is create a function expression to pass in. Right here, inline, I write function, open and close parentheses, then open and close curly braces, which denote the body of the expression I'm creating, and inside I'll do something simple like console.log 'i waited two seconds'. Then, at the very end, I give it the second argument, the number of milliseconds to wait before executing that function expression: I'm saying wait two seconds, then call this inline function I'm creating, whose body merely logs "i waited two seconds". Let's run node functionexpressions... one one-thousand, two one-thousand... and it prints "i waited two seconds". Now, it's kind of hard to read all of that on one line. One of the things that's a little challenging in JavaScript, especially when you're getting started, is the number of parentheses and curly braces you'll encounter, and differentiating, for example, this outer set of parentheses from this inner set. Visual Studio Code tries to help: when I put my cursor next to an opening curly brace it highlights the matching closing one, and as we add more braces and indentation levels it does a pretty good job most of the time of finding the match — you just look for the highlight around the closing one (you can see the column number, 61 here, down at the bottom). Anything inside is just the body of the function, and the same rules apply. Oops — I didn't use a semicolon at the end of that line, but I should have; it doesn't change how this simple case works. Now, to be honest, most people don't put this much code on a single line. I'd rather split it up into multiple lines in a way that feels natural to me, and as I break the line at the beginning of console.log and at the end of the body of the function expression, Visual Studio Code naturally creates some indentation for me.
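Here's the setTimeout example as a sketch, split across lines the way the lesson ends up formatting it:

```javascript
// An anonymous function expression passed to setTimeout as the handler.
setTimeout(function () {
  console.log('i waited two seconds');
}, 2000);   // 2000 milliseconds = 2 seconds
```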
If I don't like that level of indentation, I'm free to come in and change it: I'd prefer to use the Tab key to move things out and Shift+Tab to move them back. That only works if my cursor is at the very beginning of the line — if I'm one character in, Tab will split the word, which is not what I want at all — but using the arrow keys to position the cursor and then Tab or Shift+Tab, I can move things in and out until the function looks the way I'd like. Then I use a comma to pass the second argument, the number of milliseconds, to my setTimeout function. The focus here, though, is that function expression: I never want to use that function again, but I need it in this case as an argument to pass into setTimeout. So functions can take functions as input parameters — keep that in mind, because things are going to get a lot crazier than that. Let's move on and use both a function declaration and a function expression to do something a little more interesting. Same basic idea, but I want to start off with a counter that will count the number of times our function expression actually executes. I'll start with a function on the outside — function timeout — and inside it I call setTimeout, that built-in JavaScript function, and pass in a new function expression, telling setTimeout that in two seconds I want it to run the expression I've defined right there; so I pass the second argument of 2000, again using Visual Studio Code to help me find the matching parentheses at the beginning and the end, recognizing that the function expression is the first argument to setTimeout and 2000 is the second. Inside the function expression I print 'hi ' — with a little space before the closing single quote — and append the counter, and every time I reference the counter I increment it by one, so this counts the number of times setTimeout has run. Then, after printing that line, I schedule the next time this code should run: I call timeout again, in a recursive manner, using the name of that outermost function — hey, now that you've run me, run me again in two seconds — because it's going to call setTimeout again. Finally, I need to kick this off the first time, so we call our timeout function once, here on line 15.
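Here's a sketch of that repeating timer — I'm assuming the counter starts at 1 and is appended with string concatenation, which matches the description but may not be character-for-character what's on screen:

```javascript
// Each run of the function expression prints the count and schedules the next run.
let counter = 1;

function timeout() {
  setTimeout(function () {
    console.log('hi ' + counter);
    counter++;
    timeout();        // schedule the next run, two seconds from now
  }, 2000);
}

timeout();            // kick things off the first time (stop it with Ctrl+C)
```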
That kicks things off, and I'll hit Ctrl+C on my keyboard to stop execution, because otherwise it will just keep looping and looping. Hopefully you have the sequence of events in your mind: we call the function declaration once; the body of that function sets a timeout; in two seconds the function expression executes, which not only shows the number of times it's been called — because we're keeping count in that counter variable — but also calls the timeout function again, which schedules the next call to setTimeout two seconds from now. Let's see it run to make sure this all makes sense: I waited two seconds, saw it run once, then twice, then three times, and it keeps going every two seconds until I hit Ctrl+C to stop it. The last thing I want to show you — let me comment all of this out — is that you can create a function expression that does something like console.log (I'll make it more interesting later) and then immediately invoke it: first I surround the function expression in parentheses, which says "group all of this together", and then I add another set of parentheses as the function invocation operator. Do you see that format? There's the inner set of parentheses we'd use to define input parameters — we don't need any, but we still need them to create a function expression — then we group the whole thing together and say: execute it. Counting the console.log call inside, there are actually four sets of parentheses; we just have to keep straight in our minds what each of them is doing. That last set does the invoking, and this kind of structure is called an immediately invoked function expression: I have a function expression and I want it invoked immediately when the application runs. It's actually a pretty common pattern in JavaScript development — it comes in super handy, and we'll talk about why a little later — but remember the name: immediately invoked function expression, also known as an IIFE, which I think is pronounced "iffy". Keep IIFEs in mind; we'll come back to them. So let's move away from functions for a little bit — we'll return to them later — but hopefully you can now tell the difference between a function declaration and a function expression, and most importantly for our purposes, you know what an immediately invoked function expression is. See you in the next video. Thanks.
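Here's the IIFE pattern as a minimal sketch — the message is just a placeholder:

```javascript
// An immediately invoked function expression: define it and run it in one go.
(function () {
  console.log('this runs immediately');
})();
```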
evaluate this we’ll come back to it in just a moment but we’ll consider those in between the opening and closing parentheses if that is true whatever that expression is then we’ll execute all the code inside so let’s begin simple bar count equals three we’ll just hard code a value and then say um [Music] so if count and then we’ll use the equality operator so this is going to evaluate for equality if count indeed is equal to 4 then we will that will that expression will return true if it’s true then we’ll perform whatever code we write here so console.log and count is four so the first time we run this we’re not going to get really anything all right so the first time we run this we’re going to not give anything it’ll just exit what we can do is change this to count equals three like so and now when we run it we’ll see the message count is three all right very uninspiring let’s set this back to four and here we can consider the alternative that the count is not equal to four and we could kind of give the counter message count is not for this much we know to be true all right all right so count is not four we basically skipped over this block of code because this returned false therefore we executed the else statement this second block of code and skipped over the first one okay so there’s actually several different variations of this we can use because there are some different conditions here maybe i don’t want to jump right to that else statement maybe i want to keep evaluating i can use an else if and so here i might say else if the count is greater than four then i could maybe do a message like console dot log uh count is greater than four and i can do kind of the opposite as well else if count is less than four so console.log count is uh less than four i guess i changed modalities there and then at that point this will never happen ever because one of these three conditions would occur we’d never get to this final else statement right it would just would never happen so it’s going to save our work here and see this run count is less than four because it’s three okay so that’s the general structure of the if statement it allows us to evaluate one or more expressions if it returns if that expression returns a true then we execute the code in the code block associated with that expression we can create optional else or else if statements to continue to evaluate other expressions usually you’ll want to make related ones but you don’t necessarily have to although that may not make a whole lot of sense depending on your business rules and then we can finally use a catch-all in case none of the previous else if statements uh are are correct and kind of capture that so let’s go ahead and comment that out that’s our first structure we’ll use the if statement a lot the next type of statement is a switch it’s a little bit more tricky to use let’s start off with just typing out the switch keyword and what we want to evaluate and so what we’ll do is actually evaluate whatever’s in this expression against a number of cases so i might for example let hero equal superman and then depending on the hero i might want to print out the um the super powers that that particular hero has so based on the hero if that hero so if the case is superman i would say well that hero has console.log super strength may also have x-ray vision [Music] alright let’s add another case here and say case batman and notice that kind of the the format of this to use the case keyword inside of this block that belongs to the 
switch: the case keyword, the value we want to compare our case against, and then a colon, and everything underneath that becomes part of the body of that case that gets executed. so in this case we'll say what are batman's superpowers: he has intelligence and he has fighting skills. all right, and then we can also say that the default for a hero is that they're a member of the jla. now watch how this works, because it works a little bit differently than the if and else if. so let's go ahead and save what we have and then rerun this. all right, so in this case it was superman, and notice that we matched the case superman because it prints out super strength and x-ray vision, but then everything inside all the additional cases, including the default case, runs as well, so he also is intelligent, he has fighting skills, and he's a member of the jla. now if we were to change this to, let's say, batman, and we were to run the application, you'll notice that it skips over all of the console.log statements that describe superman's superpowers, and they come in here at batman: so console.log intelligence, fighting skills, and he's also a member of the jla. now we could try somebody like green arrow, not particularly one of my favorite heroes, and he's just a member of the jla. all right, now if we don't want that fall-through style, what we can do is use a break statement in here. so let's go back through this now and see what happens whenever we break out of a given case. so back to superman, and now when we run it we only see that he has super strength and x-ray vision, batman has intelligence and fighting skills, and green arrow is just a member of the jla. okay, one other quick tip here: whenever you're evaluating strings there's a possibility, like for example batman, what if we had a capital b in batman? all right, then we run the application and you see he's just a member of the jla. why didn't it catch the case batman? because capital b batman is not the same as lowercase b batman in that string. what we can do to circumvent that, whenever we're working with strings and we want to do some comparison with them, is use the toLowerCase method of our strings. strings have a built-in method called toLowerCase, and it will take whatever that input is and make sure that all the letters are lowercase, so that we're really comparing apples to apples instead of apples to oranges. so now when we rerun the application we get what we would expect with batman. okay, all right, so let me comment this out. we've looked at the if statement, we've looked at the switch, and the third one we're going to look at is the ternary operator. this is useful whenever i want to do a quick inline evaluation of some expression and then return back a value, a string, a number, a boolean, whatever, probably just a string or a number, depending on whether that expression evaluates to true or false. a very small, short, concise, inline statement. so i'm going to create two variables, and i'm going to do something a little bit different though: the first variable i'll create like you would normally expect, but instead of ending that line and moving to the next one, i'm going to do another variable declaration and assignment right there on the same line, so i'm going to create another variable called b and initialize its value to the string one. all right, just a slightly different technique you might see online. moving on, we're going to create another variable called
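Putting the pieces of the switch discussion together, a sketch with break statements and toLowerCase() might look like this:

```js
let hero = 'Batman';

switch (hero.toLowerCase()) {        // toLowerCase() makes the comparison case-insensitive
  case 'superman':
    console.log('super strength');
    console.log('x-ray vision');
    break;                           // break stops the fall-through into later cases
  case 'batman':
    console.log('intelligence');
    console.log('fighting skills');
    break;
  default:
    console.log('member of the JLA'); // runs only when no case matched
}
```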
result and we’ll set that equal to some evaluation of an expression does a equal b so two uh equal signs that are next to each other is the equality operator this is a check for equality to say does a equal b and if that is true then what we’ll do is return the word equal as a string but if it’s not true notice the colon that separates the true from the false will return the word n equal so the ternary operator has kind of got several parts here there’s an expression there’s a question mark that that has true or false ramifications and we’ll just do a console.log result like so so now let’s go ahead and run that and these are equal great um we could also do this in line so let me just take this part right and do that instead you can see how we can basically perform that same check without having to create a new variable to hold the result all right so it’s a nice inline way of running a quick check and then returning back a string one string or another string now let me just go back for a second here or actually let’s do this and then we’ll do console.log result okay let me comment this one out i want to keep it around for you in case you want to reference that in the future um we used two equal signs but there’s another another type of equality that we can check for and that’s strict equality and this will check to make sure that these two values are equal but then in addition to that it will not coerce for example the number one and the string one it’ll say are these absolutely equal even with the same data types all right and so in this particular case we should expect a different result these are n equal they are not the same all right so these are the same because i’m looking for equality but if i’m looking at strict equality and i’m not allowing javascript to coerce the integer into a string and then check for equality uh then i have to say no these are not the same because one is a number one is a string all right all right so let me comment that out and let’s take do one more check here um in this case i’m going to use a different operator the not equal to operator so i’ll use the word not in equal and not not equal and not n equal all right which would be the same as saying equal alright so now let’s see and run that and this produces a false so this would be returned back and then displayed on screen but then we can also do strict inequality by adding another equal sign to that operator and these are not equal again because it is true that a is not strictly equal to b because they’re different data types all right hopefully that makes sense all right so let’s go ahead and stop there um and hopefully all this ternary operator business and and equality and strict equality makes sense and let’s move on you’re doing great we’ll see in the next video thanks in this video we’ll talk about iteration statements iterations allow us to loop through a body of code a block of code a number of times until a certain condition is met and there’s a couple of different types of iteration statements we’ll look at two in this lesson and we’ll even look at them in relationship to arrays something i promised several lessons ago so let’s start off by creating a new file and call it iterations.js and inside of here we’ll create our first for loop so four and then there are three parts inside of the opening and closing parentheses first of all [Music] we’ll let i equals zero or we can actually just shorthand this and not even use the keyword let here i less than 10 i plus plus and so this is going to 
take some explanation but let’s just get this working first and then i’ll come back and i’ll talk about it and we’ll just print out the value of i all right what do you think is going to happen here if i didn’t tell you anything about how the for loop actually works what do you think will be printed to screen when we execute our script let’s find out so let’s go here and type in node iterations all right so we get ano uh several it looks like 10 different values printed to screen each on a separate line zero through nine and then our application exits all right so let’s talk about this it’s a shorthand syntax and there’s three parts as separated by these two semicolons inside of this this evaluation header for the four first of all we declare variable in this case i’ve declared i that’s why we use the let but then i said well we don’t really don’t need it let’s keep it short so we’re declaring it and then we’re going to um initialize its value to zero the second step we’re going to say continue running this for loop as long as this condition is true so as long as i is less than 10 continue running the the body of this for loop as defined with this a set of curly braces here and then finally after you’ve run an iteration increment the value of i by one all right and here we’re going to then print out the value and that’s why we start at the value of 0 and then we work our way all the way through this 10 times on the 10th time i gets incremented to 10 this this check is performed it’s false and then we exit out of the program all right now let’s do something a little bit more interesting like i suggested before let me comment this out here let’s go let a equal two and this should look familiar 4 8 15 16 23 and 42 whoops i guess i forgot equal sign there and now what we’ll do is four i equals zero i is less than a dot length i plus plus inside the body of this we’ll do console.log a and what element will we use i because i will start off with the value of 0 and it will continue until we get to the length property which is not 0 based and once we get to for example the 0 1 2 3 4 5 so length will be 6 elements so once i is 6 it’s no longer true that i is less than the length of this array and will exit out so let’s go ahead and save this and then run and we can see we get all of our values printed out to screen so that’s the proper way to iterate through or one way i should say to iterate through an array now one thing about visual studio code that i really love is that they have this notion of code snippets so if you ever forget this this syntax and it can be a little daunting at first there’s a way to remember it perfectly every time and that is to let the code snippets build it for you so i type in the four keyword intellisense pops up with a little window under it and i’ll use the arrow keys to go to the for loop javascript all right there’s a couple of fours but the one that we want has this little box with dots underneath of it that tells me that this is a code snippet i hit enter on my keyboard and now i get the basic structure of my um of my for loop already created for the purpose of an array now notice that every word index is highlighted and i can change that every instance of that by just using a letter like i’m going to change this to the letter b instead of index and notice that it changed it everywhere and then i’m going to hit the enter key on my keyboard which is the wrong move then i’m going to hit the tab key on my keyboard and i can change the name of the array now 
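The classic for loop over an array, as built above (and roughly what the VS Code snippet generates, give or take the names):

```js
let a = [4, 8, 15, 16, 23, 42];

for (let i = 0; i < a.length; i++) {  // a.length is 6, so i runs 0 through 5
  console.log(a[i]);
}
```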
everywhere the word array is used i can swap that out with the letter a for example i’ll use the tab key one more time here it puts me to the another replaceable area for the element and here i’ll use c i’ll use tab one more time and then it kind of exits me out of that snippet replacement structure and now i can continue on and type like console.log we’ll just print out c okay so let’s grab a from our previous example and then we get the same results we got before but this time we didn’t have to memorize exactly how to use for the code snippet walked us through and allowed us to replace the names of the various replaceable areas like the name of the counter the name of the array and the name of the given element c that we extract out of uh out of our of our array okay let’s comment that out that’s four and now let’s take a look at the while loop um so we’ll talk about the difference between these it may not be obvious at first but essentially we’ll do this all right so take a look at this knowing what you know about loops what do you think is going to happen here well we start off with 1 and we’re going to continue to execute this loop until this condition is false so the very first time we run it one is indeed less than 10 so we’ll continue to run the body the block that’s associated with our while statement and we’ll print out the value of x and then increment its value by one we’ll continue to do this until we increment the value of x and it becomes 10 at which point this is no longer true it becomes false and then we’ll break out and continue on so let’s go ahead and see what value what what the values that we get so we get one through nine that’s expected and once we hit 10 we break out great all right so what’s the difference between the while statement and this first for loop that we did here at the very top well the difference is that the for loop first of all has a lot of infrastructure that we have to build these three pieces and um it uses a series of indexes that represent the number of iterations that will move through this block of code now the while statement is a little bit different anything can be used to derive the iterations as long as this statement continues to be true we’ll continue to execute this block of code and so we control the number of iterations in the body in this case here i do the x plus plus now we don’t have to use counters we could use anything any kind of business logic like we may want to read to the end of a file and once we hit the end of the file it no longer it makes sense to continue to read each line of the file then we would want to break out so the while is a little bit more flexible and so much that we can build the business logic for how many times we’re going to iterate in the body of the uh the while statement whereas with the four we’re pretty much limited to the number of times we want to run this being the number of times that we’ve kind of pre-set it up here in this top section outside the body itself okay now there’s also one last thing we can talk about and that’s a way in both the for and the while we can kind of circumvent this check right here and we may want to do a check like this so if x is equal to 7 then we’ll call the break statement all right so learning what we’ve learned about the if statement it probably should look more like that right so let’s first of all let’s make sure it works all right in this case we’ve got one two three four five six once we reach the seven we circumvent this check and just say hey i want to 
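A sketch of the while loop with the early break being described here:

```js
let x = 1;

while (x < 10) {
  if (x === 7) break;  // short enough to keep on one line; prints 1 through 6, then exits
  console.log(x);
  x++;
}
```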
break out of this all right so we can use that always to break out just like we broke out of the switch uh when we wanted to not let it flow through additional cases now the one thing i will say if you notice how i typed this to begin with let’s retype that so hold on let me comment this out so that you can see it in the code if you want to download my code but we could also do it a little bit more shorter and in line since i only have one statement that i want to make right after the if statement i can do it on the same line and i don’t need to surround it with a code block a code block indicates that there’s usually more than one line of code in this case there’s just one line of code i could put it on the separate line and use some indentation like that or i can just keep it all on the same line since it’s so short but that might improve readability or i might decide that this is a more readable form that’s kind of up to me and if i’m working on it with a team of software developers i might want to get and kind of do it the way that they do it but stylistic for me this is so short i can read it all in one shot if x is seven then break out of it it just it looks good it’s very readable i’ll be able to understand what i’m doing later on it doesn’t take up and move the code down so i like that that format if i just need to create one statement right after my if right after my if so sometimes it makes sense to split things on separate lines sometimes it makes sense to keep two different statements on the same line again i think it kind of is a stylistic choice that you’ll have to make for yourself at some point all right so that is iteration statements we looked at two different kinds and we looked at how code snippets can be used to help us remember the format now i believe the while has in fact i believe most of the things that we’ve looked at has um uh some uh some code snippets available to them like if i find it here in intellisense you hit enter on the keyboard but there’s not as much to it there i mean while with the condition i can change this to x is less than 10 right it’s not so much for me to type that out but the four makes a little bit more sense because there’s so many parts to it and replaceable parts of that okay so let’s continue on the next video we’ll see there thanks it seems like quite a while ago we talked about variables but now that we’re working with blocks of code inside of blocks of code like we had here in lines number 23 and then 29 through 31 we need to talk about variable scope and when i use the term scope i mean variables are a little bit like people in so much that variables have a lifespan they’re born they do work and then they die and they’re removed from computers memory when they go out of scope and we’ll see an example of of that in just a moment but they’re also like people in so much that they have a citizenship i guess you can say in other words depending on where they were born they can work inside of some code blocks but not other code blocks and so the remainder of this video we’re going to look at lifetime and availability or citizenship i guess you can say inside of the rest of your application so let me create a new file and we’re going to call this scope basics and there will be more to say about scope as we move forward and learn more about functions and so on in just a little while here but let’s start and create first example here so let a equals first i’m going to create a function called scope test and inside here i’ll just do a 
console.log and the first thing i want to see is if i declare variable out here outside of my function can i reference it inside of my function and so to find out let’s just call scope test and see what we get so here we’re going to type node scope basics and we can in fact view the value of a variable that was declared outside of the scope of a function we can view it inside of the scope of that function all right so the next thing that i want to do is to say hey let’s create a variable here now if i create a variable inside of a function scope can i view it out here outside of the function scope so console.log b and let’s see and so no not only can i not see it but my application actually blows up and you can see the little carrot here is right underneath the b and it says b is not defined so in other words you could kind of think of it again in terms of the life span we created a function and we created a variable inside of that function that variable lives as long as that function is running but after the function after that code block is has completed executing then b is removed from the computer’s memory and essentially thrown away therefore we cannot reference that variable outside of the function because it no longer lives it’s dead all right so we’re gonna have to comment that out and we can go ahead and comment this out as well now let’s do one more thing here let’s say if a and we’ll just do something silly here if it’s not equal to an empty string so just two single quote marks next to each other so uh then can we still see the value of a even inside this innermost block of code that we defined with an if statement if we can we should see then it printed here a second time the value first so let’s save what we have let’s run this again and so we see first first the first time it’s printed out and the second time that is printed out all right so yes if something is declared in an outside scope it is visible or it can it has citizenship in every inner scope from that point on but here once again if we were to create a variable third and then try to reference it outside of the code block in which it was defined like so will this work what do you think we’re going to get that same kind of error before we get the little arrow pointing to the c and it says that c is not defined the variable c was defined inside of this code block and once we executed that code block and got to the end of it then it c was removed from the computer’s memory it’s no longer available to us in a sense dies and it’s no longer available okay now let’s just do one last thing here just to to kind of understand that we are in fact able to work with uh the variable that was defined in the outermost scope can we still work with it do it use it and change its value so i’m going to change this to changed and then i’m going to reference it here console.log a and so now let’s run our application one more time all right so the first time that it’s run this first console.log it will be the value first but then we change the value and we log it again and that’s where the second change comes from after we’ve executed that function then we execute line 20 and that’s where this third changed appears all right so i guess the moral of the story once again to kind of reiterate what we said when you declare a variable you have to understand in which scope it was defined because based on the scope or rather the code block in which that variable is defined it’s going to have a life span and it’s going to have citizenship if it 
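A sketch of the scope rules being demonstrated: outer variables are visible inside inner blocks and functions, but variables declared inside a block are gone once that block finishes.

```js
let a = 'first';

function scopeTest() {
  console.log(a);      // works: a comes from the outer scope
  let b = 'second';
  if (a !== '') {
    console.log(a);    // still works inside this inner block
    let c = 'third';
  }
  // console.log(c);   // would throw: c is not defined once its block has ended
}

scopeTest();
// console.log(b);     // would throw: b is not defined outside the function
```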
was defined in the outermost scope it will have its life and its citizenship in all inner scopes but if it’s defined in an innermost scope it will not be available to outer scopes now one last thing and i’ll kind of end it right here if we were to take and this is probably just a question for you just a thought question if i define b here right above that if statement and then i attempt to use it right here and call this inside if do you think that will be able to reference that value well based on what we know about the rules i would expect to see the value second printed out so let’s try it and in fact we do see it so hopefully that supports your new understanding of the scope of variables defining them outside of a code block versus defining them inside of a code block and trying to reference them outside okay so hopefully that all makes sense want to make sure you’re clear on that we’re going to revisit the topic of scope because there’s a lot more to this but this is your first introduction so that you kind of understand what the rules of scope are at least in the most basic sense all right so we’ll continue on the next video see you there thanks scope is a topic that will keep coming back to over and over throughout this course it’s important because at least when your javascript is run inside of a web browser you must be aware of the topmost level of scope which is referred to as global scope in node like we’re using here it’s not as much of an issue they’ve got some safeguards against it but in web development working against the global scope is a crucial concern a lot of consternation and consequently a lot of effort has been exerted to preach that declaring variables at the global scope is a bad idea so you would never want to do something let me create a new file here returning functions.js js there we go you never want to do something like this note the use of the var keyword and you never want to do something like this although you’re more likely to do it than the previous line in line number one now the reasons why you would never want to do that this will require a little bit more explanation a little bit down the road and i’ll make this point emphatically when we start writing javascript for the web browser later on in this course but for now just understand that much of what i’ll say and why i say it over the course of the next five or six lessons or so will be working towards a solution to avoid writing your code in the global scope if at all possible now the eventual solution that i want to demonstrate relies on how javascript functions work but we need to take a few baby steps to get there so the first aspect of this technique that i want to demonstrate has to do with returning a function from a function now up until now our functions performed one or more actions and then exited quietly we may have returned a simple value like true or high or something along those lines however we can create functions that can not only perform some action but then at the very end can return a value to the caller and not just any value can return a function so let me comment all this stuff out and i’ll say don’t do this to make sure you never do that and this either and then i’ll just uh do a multi-line comment here and so let’s start off really simple this is something that we’ve already talked about when we talked about declaring functions or function declarations so here’s function one and inside of function one we can just return the string one right it’s not very exciting but it 
demonstrates the point that you can use the return keyword to return a value to which ever calls that so for example let value equals one for example and what would we expect to be in our variable value well we would expect to print out the string o n e or one so console dot log and value like so and just to see this working let’s go ahead say node and then returning functions and we see it returns here at the bottom of our screen the string one as we’d hoped now we could also kind of paraphrase this make this a little simpler by just doing it all in one line right so we could put the call to the function and not use a variable we could just make the call to the function which returns a string and that returned value will automatically be passed into the console.log method let’s save that hopefully that makes sense and that’s a common technique that we’ll want to use but things start to get a little bit more mind-bending when you start to think of a function as just another data type in javascript so for example um let’s go back to this i’m gonna copy this again and then let’s go um console.log and then type of [Music] value all right and [Music] well let’s do this let’s get rid of the method invocation operator so now we’re just setting a reference called value to our function one let’s see what we get and you can see the type is function so let’s think about all the data types we know about now we know about string we know about number we know about boolean we know about undefined and we know about function right and we’re going to learn a couple more before this is all said and done but at any rate notice here again i’m just in fact we could even do this a little bit differently i may have muddied the waters by introducing a variable let’s just do that instead and we should get the same the same result all right so uh i guess the heart of the matter here is that we can get a reference to a function and we can store that reference to the function out in uh in a variable which means we could also do something like this so now we have a reference to the function we can call the function using the method invocation or the function invocation operator so let’s go ahead and run it whoops i guess we need to actually then go console.log there we go now we’re getting somewhere there we go okay so hopefully that makes sense i just have a variable pointing to the function and then now that i have a variable point of the function i can execute the function by just using that variable with our method and location operator right all right hopefully you’re still with me so far hopefully this isn’t too mind bending let’s continue on here so since a function is just a data type like any other data type that we’ve learned about so far our functions could return a function because we’re just returning a value right and that value can be any type so in other words let’s let’s do this function two and here we’re going to return a function now this is a function expression inside of a function declaration right and here console.log and 2 like so and then do something like let my function equal 2 now what gets returned to my function it will be this inner function expression so i should be able to do something like my function with the method invocation operators and let’s see what we get and we get the value 2 like we’d hoped all right so hopefully you can see here we’re using the return keyword to return a function expression we get a reference to that function expression by calling the outer function 
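The "functions are just values" idea from this walkthrough, condensed; the names one and two mirror the narration.

```js
function one() {
  return 'one';
}

let value = one;           // a reference to the function (no parentheses)
console.log(typeof value); // 'function'
console.log(value());      // 'one': invoking through the reference

function two() {
  return function () {     // a function expression returned as a value
    console.log('two');
  };
}

let myFunction = two();    // myFunction now holds the inner function expression
myFunction();              // 'two'
```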
declaration now we have a function in hand or reference to that function this inner function right here and we merely invoke it hopefully all that makes sense and we might be able to do this a little bit differently let’s try just a slightly different tact so here i’m going to go function three and here i’m going to return um function and then return three all right oops let me spell return correctly all right so in this case i’m returning a function that returns a string so i should be able to do something like console.log and then i’m going to call 3 what do i get back from that i should get back a function so i should be able to invoke that function to get back the string to give to the console.log now you would never do this and you would never see this but this is just to illustrate a point that um of what you’re really working with and that’s references whoops let’s actually execute it there we go three what you’re you’re working with here are references to functions that can return references to other things and maybe even other functions right so again that last one’s pretty far-fetched you probably never do that but the fact is that what gets returned from our three function declaration is a function and then it can be invoked with the function invocation operator here the second set of inner parentheses right there all right so on the surface this might not seem like a very significant development in our javascript journey but nothing actually could be further from the truth because this is actually a huge step towards moving our code out of the global namespace like i talked about the beginning of this video but to complete the story again we have several more baby steps to go we’re going to need to step away from functions for a little bit and come back to them once we’ve learned a little bit more about arrays and objects and objects specifically all right so just keep this thought in mind and we’ll continue on you’re doing great hang in there we’ll see in the next video thanks if you remember uh when we were looking at arrays i did the type of on an array and it returned back the word object so that’s actually another data type in javascript that we haven’t looked at until now obviously given the title of this lesson we’re going to look at objects so an object is similar to an array in some ways but its intent is dramatically different an array will hold a list of information in other words there may be many data items whether they’re strings or numbers or booleans or even objects each stored in a different element of the array contrast that to how an object works an object contains the related properties of a single data element so array many data elements an object one data element but has attributes so the settings of the properties define the characteristics of the object so let’s say for example that you want to have a car and so an array will only really let you save maybe the year of the car or the make of the car or the model of the car as a string or maybe some identifying number but an object would allow us to define all of those properties kind of in the same container so you know if you try to keep track of all the properties and maybe even all the methods that that belong to a car but you keep them as separate variables and separate functions you’d run the risk of clashing with other variables and functions that have the same name for a different car okay but objects let us keep that information kind of safely locked away in their own little container where the 
relationship between all those properties and functions uh are obvious that they all kind of belong together to describe one car and then you might have another object that describes a different car and you can keep both of those objects those two cars in an array of cars so hopefully you can see the relationship between those all right so um let me first of all start out by creating and you file object.js and so um you know i may have an object that has a series of properties that describe a specific car and i might want to for example keep track of the make the model in the year and so on and i may have some functions that i need as well things like uh getting the price of the car based on some criteria maybe you know the year of the car and things of that nature and i may want to print out a special description of the car that includes many things like the make the model of the year in a special format but i might define a car object like so so let’s do um let car equals and then we define an object using a um a code block so curly braces now i’m going to use kind of a name value pair here so let me just go ahead and start typing all right take a second and catch up with me there if you like now let’s go ahead and use the print description function like so and let’s use the year property like so i’m going to save my work and then i’m going to type in node object or object and the first printout is bmw 745li because we’re printing out the make and the model of this car all right and then here i’m just getting the year of the car and printing that out in a console window all right so in this case this object that i’ve built on screen we’re dealing with a tangible real world and very relatable concept that of a car we’ve all driven in cars or have driven cars in your javascript code you’ll occasionally be working with objects that define tangible real world things like cars but you’re also going to work with things that represent more abstract concepts that are specific to web client or web server development so in this sample i created an object using what’s called object literal syntax so i literally want to create this object and then i’m going to assign that object to a variable named car and the body of the object is like i pointed out defined with cur a series of curly braces here this set of curly braces at the outermost level here this defines kind of the boundaries for the object and everything that lives inside of it is either a property of that object or a function called a method inside of that inside of that object so let’s start off in lines three four and five here we have a list of name value pairs so here’s the name of the property and here’s the value of the property in this case notice that each of the names of the properties are just identifiers they’re just like variables in fact you’ll probably want to use the same naming conventions that you would use inside of with a variable and then the values can be any data type in this case i have a literal string bmw a literal string 745li and a literal number 2010 that represents the year all right and notice that the property and the value are separated by the colon character all right and then each property definition as well as each function up until the last one are separated by commas now i put each of these properties and functions on their own line or series of lines as it might be for the functions in order so that we could see some readability but that’s not entirely necessary from javascript’s perspective it would 
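The transcript doesn't show the literal as typed, but a reconstruction consistent with the output described ('bmw 745li' and the year 2010) might look like this; using this to reach the object's own properties is an assumption on my part.

```js
let car = {
  make: 'bmw',                        // name/value pairs, separated by colons and commas
  model: '745li',
  year: 2010,
  printDescription: function () {     // a function stored on an object, i.e. a method
    console.log(this.make + ' ' + this.model);
  }
};

car.printDescription();  // 'bmw 745li'
console.log(car.year);   // 2010
```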
be fine if we put all this on one line of code all right so again with regards to the names of both properties and these functions which i call methods you’ll want to use the same naming conventions that we used previously when we talked about variables now there’s some other ways to create objects and i’ll discuss another technique in one of the upcoming lessons um so uh we’ll come back to that notion because it’ll lead into another discussion that kind of takes us off at a whole other tangent that i don’t want to go on right now so um here we define the function kind of the same notion i gave it an identifier the name of the function that i want to access and then i’m using a function expression and define the function expression within uh several lines here but essentially we’re just returning a value or in the case of the second one just just calling console.log we could write any uh number of lines of code inside of here these are just happen to be very simple for the purpose of illustration all right so i’ve defined my object now i want to actually use it and reference it how do i do that well you can see that whenever i wanted to access a specific property of my object or when i wanted to access a function i use the variable name that i set the object reference to and then i use the period on the keyboard that which i call the um the property access operator just the dot on the keyboard same is true when access of accessing a function as you can see here in line number 15 i used that period that member access operator and then you know all else is fair this is a function so i’m going to use the method invocation operator just like i would to invoke a normal function now uh i i keep referring to these functions that are inside of a an object as a method and and i think you should probably start referring to functions defined inside of objects as methods it’s a more descriptive term and i’m already used to it for my work with other programming languages but simply put a method as a function that belongs to or rather is defined inside of an object now there’s another syntax that i could use in addition to what you see here and it opens up some interesting possibilities that frankly i’m not very fond of but you could definitely do something like this so it almost looks like i’m using the array uh array element accessor to access a specific property here let me go ahead and save that so we’ll see the year appear twice here if all goes well and we do at the bottom so that’s one approach to accessing an individual property and the other is similar uh but it uses a um an index so it’s actually kind of interesting how this works um what is the the fur or index one of my car let’s find out the hard way here and it’s set to undefined in this case so actually it doesn’t reference any of these it basically creates its own new property and sets its value to undefined let’s never do that let’s not do that i prefer the dot syntax again it’s going to be most familiar to those of us coming from other programming languages but you could in some advanced scenarios use these techniques to do something a little more advanced and that’s way beyond the scope of this of this lesson all right so i recommend you just use that dot notation for now and all will be happy so um you can do like i kind of mentioned a second ago some pretty advanced things with objects and there’s a lot of room for variation so i’d recommend taking a look and taking notice of how other people work with objects in their javascript 
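Dot notation versus bracket notation, as discussed; note that merely reading a property that doesn't exist, like car[1], evaluates to undefined without adding anything to the object.

```js
let car = { make: 'bmw', model: '745li', year: 2010 };

console.log(car.year);     // dot notation: the recommended form
console.log(car['year']);  // bracket notation: same property, same value
console.log(car[1]);       // undefined: there is no property named '1'
```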
code as you’re perusing the internet because there’s always seems to be a new twist i think i understand how objects work and then i’ll see somebody do something really extreme and interesting and i’m like wow that that opens up my understanding a little bit more to how objects work i’ll just give you a quick example of what i’m talking about here here we can create a another car and i can create an empty object like so and then i don’t have any properties inside of my another car how do i reference another properties of inside of that well what i can do is just um kind of just say hey i want to create a property here called whatever and i’ll set it equal to bob all right and it’ll just automatically create a property called whatever and set its value equal to bob just like that no let or var keyword or anything if i do console.log um you’ll see that card.whatever in fact will come out to be bob or i thought it would oh i think what i’m going to need to do is save my work here and then do try this again whoops oh i’m sorry another car i had the wrong reference it didn’t exist on car but we need to look at another car dot whatever and there we go there’s bob okay so just an interesting feature of objects you can you can add properties kind of ad hoc and some people do that and that is kind of a feature not really a bug of javascript it’s dynamically typed you can just say hey i need a property here i need a function here and you attach it to an existing object and there it is it’s you’ve got an object now with this additional property called whatever all right um you can also do some other kind of interesting things might as well take a few moments and look at these so um i’m feeling unoriginal now so i’m going to create a new object called a and inside of this i’m going to create uh my property and i’m going to actually set that to another object i’ll define a property inside of that and i’ll just say hi because again i’m feeling rather unoriginal um and so let’s figure out how can we actually print out the a uh so console.log a dot my property dot b will i get what i think i’ll get i will so you can see how i can chain things together whether they be functions or properties by just continuing to use you know this is an object that has a property that has an object that has a property and so i can kind of chain through and and create essentially what becomes a namespace and we’ll reference that a little bit later when we talk about solving the global name space issue or or putting our variables at the at the global level of our of our applications which we’re trying to avoid all right let’s take a look at another quick cool example of things you can do with objects that might not be so obvious at first glance so here i’ll create var c and inside of that i’m going to create another my property and this time i want to create an array so this property the value of my property now will be an array of i could do strings i could do numbers but i could do an array of objects so all right and that’s perfectly valid so here think with me again i have an object that has a property that contains an array of objects that each have different properties all right and that’s perfectly valid so objects can contain properties of the type array that can contain other objects that can contain well really just whatever makes sense for your application however you need to store it and kind of represent the data that you’re working with also if you’re going to work with an array of objects it might make sense 
for all of them to have the same set of properties like in this case each object has a different property i’m not so sure that would be so useful but it might be something that you need to model in your application there’s nothing forcing you to keep the same set of properties for any given object uh inside of an array of that of that object you know as long as your javascript code or whatever will consume this object understands how to interpret it that’s all that matters so once you get past the simple hierarchy of values you typically refer to something like this that gets a little more complicated as an object graph a graph of objects all right just keep that in mind let me paste in some more interesting examples here just kind of get expand your thought process on how to work with objects let’s say i have a car lot and i want to store an array of objects each object has year make and model all right then i could iterate through the car lot and print the screen each of those individual car objects right so that’s one example how about um you know if this was more of a of a yes you could say a system that kept track of all of our customers and employees we might do something like this this gets a little more complicated and unfortunately runs off the side of the screen a little bit but you can see here i’m creating a contacts object here’s the start in the end and inside of that i have a property called customers and a property called employees now both of these have as you can see an array of objects and these objects look very similar and so much that they have a first name last name and then phone numbers and then phone numbers actually is an array of strings in this case this particular what is he a customer this customer bob tabor has two phone numbers richard bowden has two phone numbers and then but our employees like steve and conrad and grant well steve has two phone numbers but connor and grant only have one full number all right so you can see that things can get pretty crazy really quick but that’s a perfectly valid object initializer it just happens to be a little bit more complex than the ones that we started off with so now as you’re looking at this you might think to yourself wow this this actually looks similar where have i seen this before it looks a lot like jason have you heard of jason json or js json i guess some people put the emphasis on the on it’s short for javascript object notation json is both descriptive and compact and it’s probably the most popular way to send information between two disparate systems so we might in fact use it uh to store um settings or properties inside of um you know a more advanced javascript application or they use it in visual studio and c sharp projects to store application settings for example now in in the new version of c sharp and net um we might use it in our application sooner than later to send data between a single page application that lives on the client and the backing web api that on a web server that hosts a web api if none of that made sense don’t worry about it eventually you’ll get to that point if you continue on learning javascript all right so what was my point here well if you’re familiar with what json is you might notice that there are a lot of similarities between the object literals that we’ve looked at in this video and jason however there are some subtle but important differences between the two and i’m not going to take the time to go through that you can easily do a quick search online to see what the 
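Since the passage compares these literals to JSON, here is a tiny round trip using JavaScript's built-in JSON helpers; the property names follow the contacts example above, and the phone numbers are made up for illustration.

```js
let contact = { firstName: 'bob', lastName: 'tabor', phoneNumbers: ['555-0100', '555-0101'] };

let text = JSON.stringify(contact);  // object graph -> JSON string
console.log(text);

let copy = JSON.parse(text);         // JSON string -> object graph
console.log(copy.firstName);         // 'bob'
```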
differences are between object literals and javascript and the javascript object notation or json just be aware that these are not one in the same there are subtle differences you cannot use them interchangeably but their syntax is very very similar and json or i’m sorry javascript has a built-in function that i’ll let you work with jason as you might expect okay wow i’ve really gone along on this one but objects are pretty important and we’re going to use them a lot and there’s a lot to them in fact we’re probably going to be talking about them a couple more times before the end of this course maybe even in the very next video so you’re doing great hang in there we’ll see in the next video thanks previously i said that much effort and education is centered around the dangers of defining variables and functions in in the global scope also referred to as the global name space especially when writing javascript that will ultimately be targeted at running in a web browser but i never really answered the question why is it dangerous i never kind of ventured into that and i’m going to illustrate more clearly later on why this is dangerous and how you can really hurt yourself when you are creating variables and functions in the global scope but in a nutshell the global scope is global so number one each variable that you define to the global scope is not removed from the computer’s memory until the web browser or the tab of the web browser navigates to a new web page so the more that you add into that global scope the more memory you’re taking up and that memory just is is consumed the entire time that that tab is open uh for that particular web page but more importantly number two um again emphasizing that this is the case with javascript in the web browser not so much true whenever you’re building these node style applications as you load javascript that you wrote and you rely on javascript code that others write whether that be code that javascript libraries that you’ve downloaded from the internet or that you include your project somehow maybe they’re ones that other people in your company have written and you need to include them in your project or perhaps even sold commercially online some product that you purchased came with a javascript file and you included maybe it hasn’t been updated in a number of years the variables and the functions that are defined in those files when you consider the the the variables and the functions that you’ve written in your files there’s a the more that you write at the global scope the more that they wrote at the global scope if they didn’t take precaution the more likely you’re gonna have a collision of names at some point somewhere down the road somebody’s gonna have a variable named what you named and they’re both trying to contend for uh the global for being the variable the winner in the global scope so we call these naming collisions uh and when these naming collisions happen either your data will get overwritten by their code or their data will be overwritten by your code but either way undoubtedly it’ll cause unanticipated uh bugs that are difficult to track down and quite frustrating and the reason why this is even a thing is because it’s happened okay so now that it’s happened everybody is extremely concerned about it and so a series of suggestions came out and and a lot of effort went again around trying to figure out how to solve this issue given the the tools in javascript that they had available and the first one that has come out and that 
i’ve recommended from the very first lines of code that we’ve written is to use the let keyword start the instead of the var keyword because the var keyword will attach variables to the global scope which in a web browser is the window object in the document object model we’ll talk about that a little bit later and i it’s also recommended that you use the technique the design pattern that we’re going to discuss in this video whenever you’re writing javascript code or there’s a third option too which is new in javascript in the latest version of javascript called modules unfortunately and i may even talk about this at more length later on the implementation of modules is a little bit uneven between node and the web browsing environment so i’m not sure how helpful that would be at least as we’re getting started and learning about javascript just keep in mind there’s several different attacks but this is probably the one that you’ll see used most often in at least uh javascript that’s been written over the course of the last five to ten years but there are some newer ways to to tackle this all right so any rate the technique that i’m going to discuss in this video or the design pattern actually uses a couple of techniques that we’ve learned about so far we’re going to use an iffy remember what that is an immediately invoked function expression to create a function and then that function will return an object and that object will have defined functions and variables that will then be kind of scoped to one variable so instead of having five or ten variables that we’ll have only one variable in the global scope uh or at least in some scope and then we’ll be able to reference the individual uh variables and property variables and functions of that particular object that gets returned all right so we’ll see how variables and functions can be made essentially private so that we can hide some implementations from the ability for just any code to call them this is often called encapsulation in software development terms and so these will be unavailable outside of the public variables and the public functions that we return and that’s generally a good thing so there will be a couple of benefits that come out of this all right so um let’s get started by creating a new uh a new file called module pattern dot js all right so let’s start by creating an iffy and to do that hopefully you remember how to do that we’re going to start with a function expression we’ll just create an empty one to start off with we wrap it in a set of parentheses and then we use another set of parentheses to actually invoke it all right so what i’m going to do before we get any further is actually set this immediately invoked function expression to a variable i’m going to call this counter so i’ll set counter equal to whatever is returned so eventually what we’re going to do here is return an object full of properties properties that have values properties that point to functions that can be called but we can also do some private stuff here and this will not be accessible outside of the calling the counter dot something to access it and so we can like have a private variable here like let count equals zero and we would not be able to do counter.count it just wouldn’t be accessible we’ll fix that here in a moment when we return an object will give an accessor to it we’ll take a couple of passes at that actually okay so um let’s go and create now a private function as well and this will just print out a message and style it up a 
little bit differently, so we won't get crazy here: console.log, and we'll just say whatever the message is, and then we'll do like three dashes after it, just a little bit of style, just to show that we have something here that could be private. but now ultimately what we want to do is return an object that will get set to counter here. all right, so we're going to start off simple, and we'll come back to this a little bit later, because there's going to be an issue with one part of it, actually this part right here: what if i just want to return back the current value of count? i can try that; we'll come back to that in a minute. but let's say we create an increment property, and it will return a function, and inside that function we can do something like count plus equals one, and then we can call our print function and just say after increment, something like that. we can also, and let me use a comma right there, create another property of our returned object called reset, and what this will do is call a function. so we'll create another function expression: print before reset, then the count, then we'll set count equal to zero, because that's the point of a reset, and then after reset it should always display zero, but let's just double check that. all right, so now we have basically our module pattern: we created a module, which is essentially an iffy that returns an object that will expose functions and other properties, like the current count. and now, because i've invoked this immediately, it's already been executed, and counter is fully populated and ready to be used in our application. so i can do counter dot value, i think, wasn't that the name of it? i've forgotten everything already. okay, value. so console.log counter.value, and i could try to do console.log counter.count just to prove that it won't give me anything back. so let's just start there, and we'll go node module pattern, and the first time we get an undefined. why? because count is not a property of counter; it's not exposed and it's not being returned in the return object. all right, so that's inaccessible from the outside. now what we can do, however, or what we will try to do, is call counter dot increment, and you can see it shows up in intellisense, that's a good sign. so we'll call increment, in fact let's call it three times, whoops, that ended up below where i wanted it, all right, great, and then we'll call counter dot reset, like so. all right, so let's see; this time it's not going to be quite as satisfying, because we're going to get a little issue on these lines here. you can see after the increment the value is one, two, and three, but then when we attempt to get the value of counter from this count property, we would assume it would be three, right? but it's zero. what happened here? well, we accidentally created something called a closure, another little topic we need to talk about in javascript. so let's not do that; we're going to need a different way to implement this basic functionality. we can't use this technique, so what we're going to need to do is take a different tack and implement two more functions here. we'll create a set function, or i'm sorry, let's start with a get function, and get will do something super simple: it's just going to return count. and i did it all in one line, then put it down on multiple
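Pulling the module pattern walkthrough together, a sketch of the counter; the exact code isn't shown on screen, so the names follow the narration. Note that a plain value property would only capture count once, at the moment the object is returned, which is why accessor functions are used instead.

```js
let counter = (function () {
  let count = 0;                            // private: not reachable as counter.count

  function print(message) {                 // private helper
    console.log(message + ' --- ' + count);
  }

  return {
    increment: function () { count += 1; print('after increment'); },
    reset:     function () { print('before reset'); count = 0; print('after reset'); },
    get:       function () { return count; },
    set:       function (value) { count = value; }
  };
})();

counter.increment();          // after increment --- 1
counter.increment();          // after increment --- 2
counter.increment();          // after increment --- 3
counter.set(7);
console.log(counter.get());   // 7
counter.reset();              // before reset --- 7, then after reset --- 0
console.log(counter.count);   // undefined: count was never exposed
```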
lines, didn't need to, it's a pretty simple function. and here, in the set function, we're going to set count equal to some value that's going to be passed in, so we'll say we'll accept an input parameter called value, making sure we add some commas in between these new properties that are set to these functions, we'll take in some value and we'll set count equal to that. so we should be able to come down here, and now since we've kind of removed that, let's set it to the value of seven, then we'll do a console.log counter dot get to ensure that it is seven, and then we'll call our reset, and let's see what we get when we run it this time. all right, this is a little bit more interesting, well, almost interesting, i need to invoke get, i forgot that, so save it and run it one more time, there we go. so lines 33 through 35 will produce these three lines where we see after increment one, two and three, then we call counter set passing in the value seven, and so we then do console.log counter get and we get that seven back out, now we call reset, and before the reset the value will be seven, after the reset we reset it to zero. okay so hopefully you can see that this technique of returning an object from an iffy will first of all allow us to keep some implementation details private, like we couldn't get to count, and we didn't try it but we wouldn't be able to get to print either, because only certain things are being returned, mostly in terms of functions that give us access to the private functionality and a little bit more. but in addition to that, think of all the variables that we've removed from the global scope: there's no count variable now, there's no print, there's no get, set, increment or reset, they're all part of this one variable called counter, and so there's less of a chance that our namespaces are going to collide as a result of that. now, we want to pick something unique there, maybe something that describes a little bit better what the intent of this is, maybe something specific to our brand or company, but as a result of that we've protected ourselves and written our code a little bit more defensively.

now there's one more thing that i want to talk about here, and that is that this technique i've just demonstrated is so popular that it has a name: this is the module pattern. there's another variation that was created on this called the revealing module pattern, and you might see this used as well. let's go ahead and create another file, and i'm going to call this revealing module.js, and i'm just going to paste some code in so that we can kind of compare and contrast the two versions. it's nearly identical in so much that we have an iffy that we've defined, inside of that iffy we have some private stuff just like we had before, and here we have some more private stuff, these are the implementations of get, set, increment and reset, but i've created these as function declarations with names. now here at the bottom we have the revealing part of the revealing module pattern: here i'm revealing publicly accessible functions by including them as properties in this return object, so i can call counter dot get and counter dot set exactly the way that i could before, but behind the scenes they're calling the implementations that are defined here. and so there's a couple of benefits and a couple of downsides.
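Here is a rough sketch of the two files described above, so you can see the shape of both patterns in one place. This is a reconstruction from the narration rather than the exact code from the video: the name counter and the properties get, set, increment and reset follow the transcript, while the log formatting and the name revealingCounter are assumptions.

```js
// modulePattern.js — an IIFE that returns an object; count and print stay private
let counter = (function () {
  let count = 0; // private: not reachable as counter.count

  function print(message) {
    console.log(message + ": " + count + " ---");
  }

  // only what we return here becomes public
  return {
    increment: function () {
      count += 1;
      print("after increment");
    },
    reset: function () {
      print("before reset");
      count = 0;
      print("after reset");
    },
    get: function () {
      return count;
    },
    set: function (value) {
      count = value;
    }
  };
})();

counter.increment();
counter.increment();
counter.increment();
console.log(counter.get());  // 3
counter.set(7);
console.log(counter.get());  // 7
counter.reset();             // before reset: 7 ... after reset: 0
console.log(counter.count);  // undefined — the private variable is hidden
console.log(counter.print);  // undefined — so is the private function
```

And the revealing variant, sketched the same way:

```js
// revealingModule.js — same idea, but the implementations are named function
// declarations and the return object simply "reveals" the public ones
let revealingCounter = (function () {
  let count = 0;

  function getCount() { return count; }
  function setCount(value) { count = value; }
  function increment() { count += 1; }
  function reset() { count = 0; }

  return {
    get: getCount,
    set: setCount,
    increment: increment,
    reset: reset
  };
})();

revealingCounter.increment();
console.log(revealingCounter.get()); // 1
// the downside discussed next: these are just properties, so an accidental
// assignment like revealingCounter.get = 7 silently breaks the association
```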
first of all, what makes it a revealing module pattern is that it reveals the public functions through these properties in the return object, and it's a cleaner, clearer presentation of what actually gets returned. but there is a downside, and that is that you can accidentally overwrite these, they're just properties, so i could set get equal to seven, forgetting the method invocation operator, and as a result i pretty much just break the association between get and the function name getcount. so that's a downside, we could accidentally break this, whereas in our module pattern you can't really do that, it's not possible. okay, so that's the module pattern and the revealing module pattern. it brings together a bunch of techniques we've learned, all to the greater good of removing, or reducing rather, our impact on the global namespace by removing variable names and function names from the global scope, and we'll see why that's important on the web development side of things as we move in that direction, but i wanted to kind of bring all that to a head. all right, so let's continue on, in the next video, we'll see you there, thanks.

as you're getting started, closures can be another mind-bending topic in javascript, but they don't have to be, and if you understand them you can really unlock the power of javascript. now having said that, personally i don't rely on them very often when i write code, but i'm not a javascript ninja, so your mileage may vary. you're going to see a lot of articles and tutorials out there that talk about closures, and i think sometimes they make things more difficult than they really need to be, so i hope to provide a really simple explanation that will simplify this topic for you, and you can get into some of the more advanced stuff a little bit later on. but basically, a closure allows you to associate some data with a function and then use the function with that data already kind of baked into it from that point on. in my mind it's kind of like this: i'm basically taking a function and i'm marrying it to some data through an input parameter, an input argument, and then they live happily ever after in their own variable, and from that point on they work together as a team. whenever i want to invoke that function with that data already pre-filled, i guess you could say, into the input parameters, i can call that new variable. that's all it is, and then, well okay, there's more to it than that, but for the most part that's all there is. so let's just create a really simple example or two and hopefully it'll clear some things up. let's create a new file called closures dot js and let's start with just a function, i'm gonna create something super simple, say hello, and then inside of here i'm gonna return a function, because that's kind of the point of this, and it'll just say howdy, and i guess we're going to pass in a name, so howdy plus the name. all right, so that's really step one, i create a function that returns a function, looks like i got a little problem here, whoops, there we go, so here's the function, it returns a function, and i'm passing in an argument called name that i'll ultimately use in the body of this returned function. so then i can actually make a call, so for example let bob equal say hello, passing in bob, and now from this point on i call bob, and we'll see what happens here, node closures, all right, howdy bob. so by itself it isn't all that impressive, but that's really kind of step two and three all in one shot.
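If it helps to see it in one place, here is a minimal sketch of the closures.js example as narrated so far; the howdy message and the sayHello name come from the video, everything else is an approximation.

```js
// closures.js — a function that returns a function, with `name` "baked in"
function sayHello(name) {
  // the returned function keeps access to the name argument
  // even after sayHello itself has finished running
  return function () {
    console.log("howdy " + name);
  };
}

let bob = sayHello("bob");
bob(); // howdy bob
```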
so i can pass in some variable that slightly modifies the way that this returned function operates. in this case it's pretty simple, i'm passing in a name and it will change what gets printed out every time you call this function in the console.log. this value is basically saved off in a variable outside of the returned function, so we're relying on how scoping works in order to get that closure behavior, that name kind of follows along with this returned function everywhere it goes, bob gets passed along from that point on. and then this is step three, where i save that off in its own variable so that i can call it from that point on, and i kind of get this say hello with bob prefilled, right. so i can do the same thing with conrad say hello, and then grant say hello, and that's all a closure really is. so let's run those, and there you go, three versions of the same function that get returned, we modify the operation by taking advantage of how scoping works in javascript, by kind of giving it this value that it's going to hold in its own context from that point on, stored in these separate variables. so this is really just the binding process that binds these together and then stores them off, that's all it really is. another way to look at this: the say hello method has finished executing and it returns a function, but the environment in which the method originally ran is preserved, so that whatever value we passed in is preserved inside of this returned function, the environment, or in this case just the name input parameter, this variable name, remains available. so now in step four, i guess you could say, if there was a step four, i basically use the new variable, which represents a call to the method and a preset input parameter, to conveniently call that version of my function. the important lesson to take away is that each closure creates its own what's called lexical environment, and you'll see the term lexical used a lot in javascript whenever you're learning about scope. i've tried to steer away from that term because i feel like maybe it clouds the issue a little bit, it's basically just a fancy word for everything that we learned about in the scope basics previously, where if you define a variable outside of a function it is available inside the function, but if you define it in a child code block it's not available outside of that child code block. so that's basically what i mean by lexical scope, it basically defines how a parser resolves variable names and functions when they're nested, and the word lexical refers to the fact that lexical scoping uses the location where a variable is declared within the source code to determine where that variable is available from that point on throughout the rest of your code. so nested functions, like we have here in our sayhello that returns a function, have access to the variables that are declared outside of them, as well as any of the input parameters that are declared outside of their original scope, and that's just how the lexical rules work, like we learned about in scope basics.js. so when we create a closure, each closure gets its own lexical environment, meaning that each time we create one, like we do here in lines 8-10, they get their own set of variables, their own name variable and anything else we were to define outside of the function, in this case we don't have anything else. and there's more to closures, you can get in
some pretty advanced scenarios they’re a powerful concept in javascript the ability to retain or bind to the lexical environment of the variables that enclose the returned function like in lines three through five to create a version of the function with some values already pre-applied is pretty powerful if you don’t completely understand that that’s okay don’t get discouraged for now just understand that whenever you return a function from a function you also glue any of the variables that were defined outside of the return function including in this case our input parameters all right that’s all you need to know about closures well for now anyway all right so let’s continue on in the next video you’re doing great see you there thanks if you think back to the lesson on object literals i think we were working in the object.js file i created a car object with several properties and functions and you can see that i’ve created a new file called this dash keyword dot js if you’re pausing the video to follow along then you might want to go ahead and create this dash keyword dot html yeah that’s right we’re going to write some javascript in an html page for the very first time in this course here in lesson number 18 because i want to show you how this works in different contexts this keyword but getting back to the point at hand if you recall that example that i’ve pasted in from that object.js file into this keyword.js uh you see line number 10 and at the time i didn’t even proffer an explanation as to what this dot make and this dot model actually mean in our in our application uh the fact is that this keyword can be a little bit challenging uh so even people with a little bit of javascript experience from time to time get a little confused about this keyword and one of the reasons people get confused about it is because it means something different in javascript than in most other programming languages so you actually have to kind of fight your existing knowledge so if you’re coming from another programming language the best thing you can do is kind of just leave everything you think you know about ja about well a lot of topics but in particular about the this keyword at the door and if you are just coming to javascript as your first programming language then you might even have a slight advantage here because you won’t have to fight yourself in what you think you know but simply put the this keyword in javascript represents the way a given function is called the way a function is called will determine what this represents okay so you essentially bind this keyword to a given context uh and we’ll explain what that means based on how you call the function all right so up to now we’ve not really paid much attention to how we call functions i told you there was really only one way to call a function using the method invocation operator so we would do something like this a little bit later on right like a car dot print description all right and i used the method invocation operator and i didn’t even hint to the possibility that there would be another way or multiple ways to actually invoke a function but that’s actually pretty important when you consider what this keyword represents all right you’re going to learn in this video at least that there are other ways to call a method that allow you to set or rather bind this keyword to something so that you can do something interesting inside of in this case your object or your function or whatever the case might be now you may never need to do this 
but it’s important to understand the basic rules and how that this keyword gets bound to a context and gets referenced inside of your object or your function there’s an entire book written about how this functionality works and all the permutations and and it’s awesome it’s a little bit over my head at times so i’m going to give you an absolute beginner’s explanation as to how this all works but it should serve you well as you’re getting started and then you can refine your understanding a little bit later on but let’s start off and by commenting out everything and you know what there’s a really easy way to comment out everything that i haven’t talked about yet alt shift and a on your keyboard will add a uh beginning and an ending uh code comment character operator to whatever you have selected so that’s a nice quick way to do that great all right so let’s start really simply i’m just going to create a function called first and this function is going to return the value of this what is the value of this well um here if you go console.log first is it equal to the global object inside of node so the global object we’ll talk about that a little bit later i guess it is kind of the the most basic context of things that get uh executed inside of so when we create something in the global namespace a global variable we would create it essentially attached to the global namespace it’s available everywhere in our application all right so let’s see if when we call first from line number 20 is this which gets returned equal to the global the global object so let’s go this dash keyword and it is true all right so when i call the first method basically from the global context because i haven’t created it inside of uh using the module pattern inside an iffy remember what we talked about previously so i’m i’m basically just calling this here out in the global namespace and what gets returned back is the fact that this is equal to that global namespace all right so now let’s try something else actually let me just do this let me copy this little comment that i have in my notes because it might be helpful to you for reference all right so let’s start with another code example now function second and the only purpose of this is just to show that there is this little flag called ustrick there’s a strict mode in javascript we’re not going to go into it much but this will change how the this keyword is bound and so if you have ustrict turned on and you try the same thing that we just did here let me comment all this out using that alt shift a technique and we essentially do the same thing here where we go console.log and then um second equals global let’s see if we get what we think we’re going to get from the first time around false all right well i happen to know that it will equal undefined and that is a true statement what gets returned from this when we use you strict an undefined value it gets bound to essentially nothing all right so just keep that in the back your mind this the rules around binding to this keyword change depending on the context in this case the context is you strict it will fundamentally change how it works all right so with that out of the way let’s move on to the next example here let my object equal and i’ll create a property called value and set that to my object and then i’m going to use this um use create a global variable called value and i’m going to set it on the global object by doing this global.value at this point i’ve created a new property on the global on the global 
object, and i'm going to call this global object. again, in node this has special significance, and if you're doing web development it's actually window, we'll look at that here in just a little bit. all right, so now let's go function third, and we'll return this dot value, and then we'll do a console.log third. so by default, what do you think will get printed out, will we print out the value of the global object, which i set to the string global object, or the value of my object, which is set to my object? well hopefully, based on what we learned in this first example, you already know where i'm going with this: because we called third from the global namespace, when we reference the this keyword it's referencing the global object, so when we grab the value property it's grabbing the value property of the global object, thus printing out global object. let me show you that there are other ways to actually invoke the third function, and we can control the binding of the this keyword, like so. here we're going to console.log, and then i'm going to call third, but i'm not going to use the method invocation operator, i'm going to use the call method of the third function. so it has a built-in method called call, and there i can pass in my object, and this is how i will bind the this value to my object, so the value will be pulled from my object, not from the global object. so let's save that, let me actually comment this one out, save it and run it, okay, hopefully that makes sense. there's another similar function called apply, and they're very similar, and in this context it won't be obvious how they're different, because i don't have any additional input variables to the third function. so if i had something like a name, i would then just tack it on in the call, you know, bob, but apply can also take an array, which could include bob, so let's just do this and see what we get, i haven't tried this beforehand, so yeah, object bob. okay, now just out of curiosity, what would happen if we did this? probably nothing at all, just be blank, yeah, object undefined, all right, global object with the word undefined next to it. so hopefully that makes sense, what i just added on there is really to illustrate the difference between call and apply, and so if i had multiple input parameters, with call i would just add them on there one by one, and if i had multiples i could add them on inside of an array with apply, that's the only difference. but let me just kind of annotate this and talk through it for a moment, just to kind of recap: the this keyword depends on how a function is called, and an object can be passed as the first argument to call or apply and the this keyword will be bound to it, like we did here in lines number 54 and 55. and so just to kind of remind you about this, i'm going to go ahead and note that this property is set on the global object, and then inside of here, this will return something different depending on how we call this method, right. and then i just want to add this little annotation here as well: both call and apply allow you to explicitly set what you want to represent this, and the difference is how the additional arguments are passed in, like i show you here. okay, so when it comes to calling a method of an object, the call site will be the object itself, and all of its properties are available to this, in fact if we take a look back here, that's what happened, that's how i did this and why i use this.make and this.model.
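Here is a hedged sketch of the third example with call and apply, run as a plain CommonJS script in node; the greeting parameter is my own addition, just to show where the extra arguments go, and isn't in the version described in the video.

```js
// thisKeyword.js — controlling what `this` is bound to
// (run with node as a plain .js file; ES modules are strict mode,
//  where a bare call would bind `this` to undefined instead)
let myObject = { value: "my object" };
global.value = "global object"; // node-specific: a property on the global object

function third(greeting) {
  return (greeting || "") + this.value;
}

console.log(third());                          // global object — default binding
console.log(third.call(myObject));             // my object — explicit binding
console.log(third.call(myObject, "hi, "));     // call: extra args one by one
console.log(third.apply(myObject, ["hi, "]));  // apply: extra args in an array
```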
again, when it comes to calling a method of an object, in this case print description, the call site, in this case car dot print description, will be the object itself, and all of its properties, like make, model and year, are available to this inner function when i use the this keyword, because this represents that context, because i'm calling print description using the car object. so to call the function i would use the object reference, and that object reference, car, gets bound to the this keyword. to further illustrate this idea, let's do this: i'm going to actually select everything we've done from here down, and shift alt and a, and then let's go function fifth, and here i'm gonna go console.log and this dot first name and a space and this dot last name, and hopefully this will make a lot more sense. so now what i want to do is create two objects, so let customer1 equals, and we'll go first name colon bob, last name colon tabor, and then i'm going to create a print property that's going to point to the fifth function, like so. and i'm going to copy this and just duplicate it, i'll make this customer 2 and call this richard bowen. and then finally we'll go customer two dot print, customer one dot print. so now look at how this works, what is the context, how do i call the print method that points to fifth? well, in this first case i'm using the object, if it'll let me get in there, customer 2, that is the context, we're going to bind the this keyword to customer 2 because i'm calling it as a property of customer two, and next, in line number 85, i'm going to bind the this keyword of the print method to customer one. so to me this example is really interesting, because the call site is the object's reference to the function, and the this keyword can be used to reference the various properties of the object that was used to call the function, so it becomes an interesting and elegant way to essentially pass values into a function without defining a bunch of input parameters on the function itself.

all right, so now what i want to do is kind of stop working with node for a little bit and look at the this keyword in the context of a web page. so i've created a new this dash keyword dot html, and i can use the term doc, you can see that when i type in the word doc it's an emmet abbreviation, if you're not familiar with emmet just search for it online, it's basically a shorthand syntax, i guess you could say, for snippets for code editors, right, so when i hit enter on the keyboard it creates this whole document outline for me, bam, and it has some replaceable areas like the device width and the initial scale and the content and all that business. i don't want to change any of that stuff, what i do want to do is add a script section here near the bottom, for reasons i'll talk about in another video, and then what i want to do is, above that, create just a simple button. so here we go, button, and inside the button i'm going to say hey, click me, and then i'm going to set an on click equal, and we'll come back to that in just a moment. here i'm going to actually create a function that i want to call whenever the button is clicked, so function, and i'll give it a name, click handler, like so, and inside of here, first of all i'm going to allow a value to be passed in, and then i'm gonna print out whatever that is to the console, so console.log arg. all right, now we'll
come back to this in just a moment, i'm gonna leave a space and go console.log this. and then in here i'm going to say, on the button, on click equals click handler, so i'm going to call the click handler and pass in the this keyword. so if all goes well here, i should just be able to right click on this and say reveal in explorer, and then when it's in the windows explorer i can just hit the enter key on the keyboard to actually open it up in my default browser, and what i want to do is use the f12 tools in edge, i'll bring up this little window at the bottom here and i want to look at logs, so make sure that you're on the console tab and select the logs sub tab. now click the click me button and we'll see the results of both of our console.logs. in the first case, what we get back is the this that was passed in as arg and printed out directly to screen, and in this case the this keyword references this entire element. so let's take a look at that again, here you can see how it gives me the whole button, so i can do something like this, which, if you're familiar with web development, should not blow you away, i should be able to do arg.inner text, and if you're familiar with web development at all you would expect to see, what, there, let's refresh the page, click the click me button again, and you see click me, that's the inner text of that button. so i'm able to get to all the properties of this button, but the key here is that the this keyword represents this entire button, and i'm passing the this keyword in so that i can look at this entire button, inspect it, grab a property out. but when i use the this keyword inside of my click handler function, what do i get? i get the global object. now, we said in node the name of the global object is global, and in a web browser the global object's name is window, so i can actually use this little arrow, this little chevron, right next to object window, and it will allow me to view all of the child objects of the window, and there are literally, if not hundreds, definitely dozens of different objects and properties that we can inspect and change programmatically, we'll come back to some of those ideas a little bit later. but basically the takeaway from this is that whenever code is called from an inline on event handler, like on click, this is set to the dom element on which that listener is defined, that's why we got back this entire button, including the text, including the closing button tag, but we've not taken any special steps to bind this inside of the function we've defined here, the click handler, so it defaults to the global object, the window object. okay, so the moral of the story is that what the this keyword is bound to is not always obvious, it takes a bit of detective work, more so than the this keyword in other programming languages, but it has to do with how a given function is called and the site from which it is called. so in one case the this keyword is used at the element level, and in the other case the this keyword is used at the global level, and that will change the value of this and what it's bound to. but by default, functions that are called using the method invocation operator alone will use the context in which the call is made, so if the call is made in the global context then the this keyword will be bound to the global object, and if the call is made on an object then, like we saw here near the end, the this keyword will be bound to that particular object.
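To recap that rule with the node example from a few minutes ago, here is a sketch of the fifth function and the two customer objects; the names follow the narration as closely as I could, but treat the details as an approximation.

```js
// implicit binding: the call site is the object, so `this` is that object
function fifth() {
  console.log(this.firstName + " " + this.lastName);
}

let customer1 = { firstName: "bob", lastName: "tabor", print: fifth };
let customer2 = { firstName: "richard", lastName: "bowen", print: fifth };

customer2.print(); // richard bowen — `this` is customer2
customer1.print(); // bob tabor — `this` is customer1
```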
and we were able to use it to grab out the values of the given object. and you can take control of what the this keyword is bound to by invoking the function using either the call or the apply method of a given function, and we talked about what they do and how they're different. and finally, whenever you use the this keyword in a browser, once again what this is bound to depends on how it's being called and who is calling it. all right, so hopefully that clears up what the this keyword is, i've given you a lot of examples, i've tried to speak a little bit more slowly, and hopefully you can wrap your mind around what this actually is, and from this point on you'll be able to identify what this is in your code. all right, hopefully that helps, you're doing great, hang in there, we're more than halfway done and you're doing awesome, see you in the next video, thanks.

in this lesson i'll briefly demonstrate how to use destructuring, which is a fairly new technique in javascript for unpacking values from arrays into individual variables, or i guess into elements of a different array, but you can also use it to unpack properties from objects, again into other distinct variables or a different object. i use this term unpack, and you'll see what i mean here in just a moment. let's start by creating a new file called destructuring dot js. the first thing i'm going to do is create a bunch of loose variables here, just a, b, c, d, e, we'll wind up using most of these at some point, and then i'm going to borrow that array that we created a while back, you may recognize some of these names. and then next up, let's start by just destructuring this names array into a set of variables, so i'm going to start off by using names, which i know is an array, and i'm going to use this bracket syntax and say take the first element of the names array and stick it in a, whoops, there we go, an a, take the next element, stick it in b, you know, c, d, and i could even change the order, instead of d and e i'll go e and then d. and so let's just do console.log a, console.log b and then console.log d, because that's a little bit more interesting, and we'll just go ahead and print out c as well, why not, and then we'll go e. and so here we'll go node destructuring, all right, so take a look at what happened: we have this array, we destructured it down to a set of individual variables, and we start off with a representing david, the first element of the original array, eddie, the second element of the original array, alex, because we're grabbing them off in sequence, and i did something a little bit interesting here in so much that i switched e and d, so that e will represent michael and d will represent sammy, and when i print them out, going back to alphabetical order, d then e, sammy is first, then michael. but the key to this example is that i've taken everything inside of an array and, using this style of syntax, i've destructured it down to individual variables. so that's just one example, and there's some other interesting ways to work with this, so let's just go here, i'm going to take all this, alt shift a, comment it up, and for the next example i'll do a let others, so i'm adding an additional variable here in addition to the other ones that we created originally, and here i'll go a, b, and i'm going to use this weird syntax of dot dot dot others, equal names.
so now console.log a, console.log b, and then let's see what gets put out into others. all right, so this time a is david and b is eddie, just like before, but this time i said basically everything else, just go ahead and stick them in a new array called others, and that's what we see here printed out from line 21, we get this array representation in our console.log, including alex, michael and sammy together. so that's just another twist, we can basically take some elements one by one, and we can also combine entire groups of elements together. let's move on to another interesting twist on destructuring. in this third case, what i want to do is actually work with an object, and whenever you're working with objects and you're destructuring one object into variables, or even into another object, it's really like a form of projection, if you're coming from other programming languages, grabbing out the parts that i want of the original object and putting them into a new shape, without having to take all the contents of the original object, so i may only want one or two properties where the original has 10 or 20 properties. so here let's start off with just saying let year and model, then i'm going to start off with let car equals, but it won't matter because we're going to remove that, and i'm just going to create a typical old object: the make will be bmw, the model will be a 745li, the year will be 2010, and the value will be 5000. in order to destructure this, what i'm going to do is actually remove the let car part and say hey, i want to take the year and the model and put them into variables of their own, and then what i'm going to do is wrap this whole thing in a set of parentheses and then an end of line character, like so. so now we'll do console.log year, that'll be the value here that i'm pulling out, and console.log model, that'll be the value there that i'm pulling from the original object and printing it out. so let's see what we get: all right, 2010 from line 34 and the 745li from line number 35.
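Here is a compact sketch of the three destructuring examples just described; the contents of the names array are assumed from the output the video describes, and the spelling of sammy varies in the narration.

```js
// destructuring.js — arrays into variables, rest syntax, and objects
let names = ["david", "eddie", "alex", "michael", "sammy"];

// pull elements off in sequence, with e and d deliberately swapped
let [a, b, c, e, d] = names;
console.log(a, b, c, d, e); // david eddie alex sammy michael

// rest syntax: first two into variables, everything else into a new array
let others;
[a, b, ...others] = names;
console.log(others); // [ 'alex', 'michael', 'sammy' ]

// object destructuring into existing variables (note the wrapping parentheses)
let year, model;
({ year, model } = { make: "bmw", model: "745li", year: 2010, value: 5000 });
console.log(year);  // 2010
console.log(model); // 745li
```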
all right so that’s just some examples of destructuring um pretty simple concept it really is just a compact syntax that helps to clean up code whenever you’re trying to map from one data structure into another or into a set of variables and that’s all that it is alright so hopefully that made sense in the next video thanks another new feature of javascript allows you to create better literal strings through the use of templates so the term template literal is a kind of oxymoron parts of the string will be literal and parts will be templatized they’ll be variable based on an expression and so you can inject in other values variables or you can actually run entire expressions and we’ll see the use of a ternary operator in just a moment let’s start off pretty simple begin by creating a new file called this template literal [Music] literals.js inside of here i’ll do something really simple uh let name equal bob all right that’s not the template literal this will be console.log and now i want to use the backtick characters usually over the tilde if you’re not familiar with that region of your keyboard it’s usually right next to the number one two three one right next to the number one you’ll have to hold down the shift key to get to it so there’s one back tick whoops i’m sorry you’ll you don’t need to shift the back tick key all right usually above the tab key on your keyboard and so inside of there i’m going to use hi and then whenever i want to add something variable that will get kind of injected in from the outside it’ll be interpolated from outside of this i’m going to use the curly braces and right before the curly brace i’m using the dollar sign character you can see that the syntax highlighting in visual studio code changes the color of this to this bright blue color that’s cool and so then i can just say hey i want to use the name variable inside of there like so okay so we should be able to do something like a node template literals like so and i get high bob awesome so um the other cool thing about temple literals is that they will allow you to create multi-line strings now before this you would have to do a lot of using the append operator and so on but in this case you can do something like and i’m just going to copy and paste this because i want to type all this out you can see here that i start my template literal with a backtick and i end it with a back tick down here at the bottom so i’m setting this sentence or sentences actually this paragraph equal to this whole string and i’ve split it up on multiple lines and there’s no append character or anything like that what i can do is console.log sentence and the neat thing too at least from what i’m concerned is that it preserves the indentation level and the line feed character so you know i could do something a little more i guess artistic here come on let’s do this just with spaces like that like that and it preserves that indentation level that i have pretty neat right all right and then the other cool thing is that you can do anything inside of the expression interpolation area that you can do in a normal expression so and let me comment all this stuff out control i’m sorry alt shift a and here i’m going to create a function real super simple function get reason count just a very pithy silly idea here and i’m going to hard code this to return the value 1 all right and i’ll change it a little bit later so you can see the difference here but i’m going to create a variable called interpolation interpolation equals i’ll use the 
backtick character give me dollar sign curly braces we’ll come back to those in a minute to try this all right inside of here make some space for myself i can create any expression i’m going to use the ternary operator and so i’m going to say hey if get reason count just because i wanted to make things a little bit more interesting so i’m actually calling a function if it is equal to 1. and here’s where the ternary operator comes in i’ll say one good reason otherwise a few reasons all right and this is starting to pop over out of the viewable area but hopefully you can kind of keep track of using this syntax coloring you can see where the expression interpolation begins and ends inside of that here i’m going to evaluate the call to get reason count and if it’s equal to 1 i’m going to inject that part of the string in here otherwise i’ll inject this part of the string inside of my template literal and now i guess the only thing left to do is just a console.log interpolation like so let’s go ahead and actually run this so give me one good reason to try this well maybe we should try two give me a few reasons to try this all right so you can and i see the need for this all the time especially in web development where you may have one item in your shopping cart or two items in your shopping cart to change up the the string that gets outputted to for the end user based on a quantity all right and probably other good uses of that as well so string template literals are a nice addition to the javascript language here again they can make your code more compact and readable allowing you to do some interesting things in line that would require a lot of appending of strings previously all right so doing great let’s continue on see in the next video thanks regular expressions allow you to create a pattern to determine if a given string matches that pattern that you created regular expressions or they’re often just referred to as regex or regex are not exclusive to javascript they’ve been around forever they can be used in just about every programming language and i absolutely hate talking about them because they make my head hurt i’ve not committed the syntax of regular expressions to memory and so pretty much creating a pattern to find something is hard for me and i’ve developed a few little crutches through the years so that i can you know approximate or fake my way through the usage of regex and i suspect you’ll probably find yourself doing that as well unless you’re one of those really annoying people that just commit to learning regex inside and out and then you know can impress people at parties based on that knowledge so um i i try everything i can do to avoid memorizing it or learning the syntax usually if it’s something simple like making sure that a string matches the pattern of a phone number a zip code some something that’s fairly common especially with data in the united states i can usually find a good example of what i’m looking for online using a search engine or stack overflow but if it’s something custom for the given project i’m working on then i have to go and relearn just enough regex to get through that project and then i tried to purge it from my memory again so i’m going to show you where to go how to find the answers and and cobble together your own little regular expression but i’m not going to pretend like you’re you should go out and memorize any of this i know people have done it but but i usually wind up hating those people because they’re smug know-it-alls but i digress 
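Before the regex example, here is a short sketch pulling together the template literal ideas from the previous lesson; getReasonCount and the wording of the strings follow the narration, and the rest is an approximation.

```js
// templateLiterals.js — interpolation, multi-line strings, and expressions
let name = "bob";
console.log(`hi ${name}`); // hi bob

// line breaks and indentation inside the backticks are preserved
let sentence = `this is the first line,
    and this is the second, indented.`;
console.log(sentence);

// any expression works inside ${ }, including a ternary around a function call
function getReasonCount() {
  return 1; // change this to see the other branch
}
let interpolation = `give me ${getReasonCount() === 1 ? "one good reason" : "a few reasons"} to try this`;
console.log(interpolation); // give me one good reason to try this
```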
let’s just take a super simple regic example and use it in javascript so i’m going to start by creating a new javascript file called regex.js and here we’re going to say i’m going to create a simple variable called pattern and you can create a regex pattern by beginning and ending with a forward slash so in this case i’m just going to say hey search for this pattern where there are the exactly these three characters x y and z all right super super simple pattern you would almost never use something this simple unless you were looking for specifically the letters x y and z inside of some long string that you want to search through or a series of strings all right but there’s my pattern and so before we get started in earnest let’s just say hey what are you really first of all um i want to print it out and then i want to console.log see if we’re working with a new data type or is this just a data type that we already know so let’s go um node regex and uh looks like just attempting to print it out will just be a string representation of the pattern the type of the pattern is object okay so it’s a special built-in object to javascript we’ll talk about some of those global objects this is just a shortcut to creating one an instance of that global object for regex patterns okay so um here let’s continue on now and actually create some text that may or may not contain that pattern that we’ve defined so let value equal and we’ll go up this is x y z a test all right and so what we can do is there’s a couple of um of methods built into both strings and regular expressions that allow us to use regular expressions against a string or use our pattern against this value in this case this variable called value so we’ll start with the console.log and we’ll use our pattern dot and intellisense shows us there’s actually quite a few interesting things that we could use i’m going to keep this example simple and just say use the test method and so intellisense tells us that this will return a boolean value that indicates whether or not the pattern exists in a search string so what i’m going to pass in is the string i want to search through so in this case value and i would expect to get back a boolean a true or false if we can find xyz in that string so let’s save it i’m going to comment out these guys now i’m going to save it and i’m going to get back here and go node rejects and it is true we do find x y z our pattern inside of this string so let me comment that out the next thing that i might want to do is actually replace that pattern that we found in that string with some other string that’s pretty useful and i do that sort of thing a lot in software development this time i’m going to start with the string itself say value.replace and so that strings replace method has the ability for me to give it a um a pattern and then i also want to give it the value i want to replace if i find that pattern replace it with this string and i’ll just use the word just all right like so this is just a test so i’ve removed xyz and i replaced it with the word just by using the replace method of the string passing in the pattern and the word okay there are a couple of other things those are the two that i find myself using most often here you can do something a little more interesting i guess log value.match and this match function will return back an object that it gives some information about what the string was what the pattern was if it was found what the indexes in the string like if you were to split it up uh into 
individual characters at what point in the string would we find an instance of that pattern match so here we’re going to pass in uh the pattern itself so i’ll save that and you can see it gives me back this this array with the pattern we’re searching for the index where we can find it so i think zero is t so zero one two three four five six seven and eight so the x the beginning of that pattern is found at the ninth character uh index uh eight oh i guess it would be seventh character ninth character index eight right so um at any rate and then the original input was the entire string itself so we can actually modify this or you know actually grab that object out and work with it individually so value dot match pattern now that i have that array that we saw down here i can grab an individual part of it so console.log and here i’m going to go match dot index so just shows you how to get an individual part of it so here i can grab the index itself and i could use that to do some sort of custom replacement logic if i wanted to do that i’m not sure i would ever want to do that okay so now this comes to the part where i teach you how to cheat and if you really want to cheat you go out to your favorite search engine and you type in something like zip code regex and then if you’re lucky bing will pop it up to the top uh whether it be from stack overflow or just gives you a nice little usage example right there in line that’s a little dangerous because you don’t know if this particular example was uh voted up or down you might want to go and actually search through the comments and see the one that gets the most up votes and the selected answer and the one that doesn’t cause any argument in the community the other way is to go and kind of trudge through this yourself by looking at this page that and there’s plenty of references out there i prefer the developer.mozilla.org website personally i think their documentation is awesome and here you can learn about the various special characters and regular expressions and try to cobble together your own regular expression to find what you’re looking for but that’s all you’re going to get out of me that’s all i can tell you about regex because i’m not a big fan of it but anyway just to recap what we talked about in this video you can create a regular expression um literal with forward slashes you can use the regex’s test method passing in a string to see if that pattern exists in that string you can use the match method to find more details about the match you can use the replace method of the string to replace a given match with some other string like we did when we replaced xyz with the word just and then like i just showed you here at the end i showed you how to cheat you already know this look online whenever you need regex whenever you need regex help okay so let’s continue on we’ll see in the next video thanks up to now we’ve looked at a number of types can you remember them off the top of your head we looked at string number boolean object undefined and function as its own type and there are a couple of others that we haven’t talked about yet we’ll talk about null later and then there’s symbol which is new uh in the latest version of javascript probably won’t talk about that in this course but um what i wanted to point out though when we were using string in particular but this is true of the other of some of the other types that we worked with it seems to have some methods that are available to it to do some interesting things so for example 
whenever we looked at uh regular expressions and let me just create a new file here called natives.js whenever we looked at the the regular expression lesson we did something like value which is a string set to this literal string this is x y z a test and we did value dot replace well how is it that this value has this dot replace method we really never address that how is it something like a string can have a method after all we said that methods are really just functions that are defined inside of an object so that would make a string an object right or no but a string is a string how can that be well actually both of those are true statements the fact is that the types we’ve been looking at so far like especially string and boolean and number are known as primitive types these primitive types that have corresponding built-ins or natives that are functions that return objects with a bunch of cool methods that are added to them by javascript so behind the scenes javascript does something interesting it the javascript compiler will coerce your primitive in this case primitive string into an object that’s returned from a native string function with all kinds of cool stringy type functionality included so actually although we haven’t demonstrated this yet you could create a string using the actual string function to do something like this and let me comment this stuff out here and so notice what we did here let string equal new then here’s our function that built-in function string notice that has a capital s we’ll talk about that in a moment all right and then if we were to save this and then um let’s go ahead and get down here and type in node natives it works well kind of works almost exactly like a normal string we’ll get to that in just a moment so uh before i address that specifically i’m going to work with strings in this particular video and with the string capital s string function the built-in the native string but what i’m about to say about strings is true whether we’re talking about numbers or booleans or other other primitives that have an equivalent uh native associated with it all right so i want you to notice a couple of things and we’ll work through this first of all this starts out just like any other variable that we’re assigning to a string except we use this new keyword and i’m going to explain what the new keyword means in the very next lesson when we talk about constructors but basically this is what creates a constructor call to this function and then here we are calling a string function capital or uppercase s in string but isn’t that bad form didn’t you didn’t you say bob that we should create our methods with uh camel casing and so string should start with a lowercase s uh actually this is a special situation it’s still a convention indicating that this is a function that should be called using a constructor call again more about that in the very next lesson so you’ll get a part two to this but just keep that in the back your mind we’ll come back to it all right so whenever we run this as you can see here when we ran node natives we didn’t get an actual a literal string output instead we got basically an object that has a string property and a value set to howdy we actually need to call a method on this native that’s returned from the string function to convert it into a primitive string for the proper display inside of a console.log so we’ll need to do something like um i’ll just comment this out and we’ll go to string like so and now there we go we grab uh we 
convert that native that object return from our native string function back into a primitive string and then display it on screen okay so and while we’re looking at things here um just out of curiosity let’s go console.log and then go type of my string what would you expect to see here well it is a type of object all right so again what’s going on here is that these built-in native types provide extra functionality like this tostring method like this replace uh method and others that we’ll look at in other lessons um and they provide this extra functionality to their corresponding primitive types and so just real quick here is a list of those built-ins those natives all right so it includes kind of corresponding to the primitive string lowercase s and string number boolean there’s also an object a function in a symbol and then there are built-in natives that do not have primitive versions um the primitive version as you know of array is object and the same with regular expressions here regex but it does provide this native built-in with extra functionality for our arrays so the same kind of thing happens it’s just not with directly back to a primitive it’s to an object but it still works with any time we’re working with arrays and then there are some other built-in natives that provide foundational data types i guess you can say uh for important features but are essentially just objects whose methods implement a lot of logic uh for their features so things like the date function and the error function we’ll look at these a little bit later but in this lesson i want to focus solely on the relationship between primitives and built-ins so whenever we do something like this and let me just copy and paste some more code in here so here we’re creating a literal string and then on this literal string i’m going to call this method to lower case behind the scenes what’s happening is that javascript’s compiler is coercing is wrapping it’s boxing that primitive string my primitive into a built-in native equivalent in order to provide that rich set of methods that transform the string in this case to all lower case letters instead of all uppercase so if javascript is coercing wrapping boxing our primitive into this built-in native equivalent then what happens whenever we need to get a value back out of it well the javascript compiler will do the opposite it will unbox that object back into a primitive without you having to do anything special it manages all on its own so in this case let’s just uh kind of see what what happens here just out of curiosity let’s get the type of here we’ll put the type up there all right and we’ll see that when we run this let me make sure there’s nothing else here let me comment all this stuff out too at the very top of that file all right so now let me save that and we can see that it treats this literal string as a string here in line 31 then in line 32 it does that unboxing thing that we talked about to take string make it into an object so that we can call the two lowercase method on it and then what do we get back well at the point when we attempt to find out what the type of my primitive is it already has for our purposes essentially unboxed it back into just a primitive string all right so it’s recommended that you stick with with using the primitive and you allow javascript to do its magic the compilers can do this sort of thing without breaking a sweat so don’t worry about all this boxing and unboxing and the pro its impact on performance but but let’s suppose 
that just for the sake of argument that you wanted to start out with a built-in native and you want to explicitly convert that built-in native version of a string into a primitive string how would you go about doing that well let’s take a quick example and let’s go ahead and move away from strings just to numbers but the same idea will apply no matter what so i’m going to comment all that out let’s go let my number equals new number notice capital n or uppercase n in number and then in the constructor argument we’ll pass in the actual value that we want to set that number to all right so at this point let’s do a console.log and let’s find out what the type of number my number is have any guesses on what it will be let’s go ahead and stop right there and let’s make sure we understand it’s a type object at this point now i want to take it out of that built-in native and i want to grab just the value of my number out and put it into a primitive so here we go let my primitive i’ll just reuse that variable name here equals my number dot and then to grab the value out regardless of whether we’re working with string number boolean whatever the case might be we’ll use this method on this object called value of so the value of method and now we should do a console.log my primitive actually we know what the value will be what’s more interesting is the type of so now let’s run that and so here in line number 36 we’re going to get it’s an object but we use value of to retrieve the actual value of our built-in native object into a regular number and we print that out in line number 38 okay so to recap the point of this lesson is to explain what these functions are that have the same name as our primitive types but with an uppercase letter they are built-in native functions that are intended to be constructor called we’ll learn about that in the next lesson and the javascript compiler uses these functions to return an object that supports lots of rich features to each primitive data type and we’ll see those in upcoming lessons but the javascript compiler will box and unbox your primitive types into these built-in native equivalents as needed and will do so without any help from you and we’ll do it all behind the scenes and you can explicitly create instances of these objects and then use the value of method like we saw here just a moment ago to convert them into their primitive equivalence but it’s not really necessary so you know what’s this new keyword all about and what’s this uppercase letter in this function name all about well i’m going to explain that in the very next lesson we’ll see there thanks so previously we saw how to create an object literal using this style of syntax and you’ll note that i’ve already created a file called constructors.js go ahead and take a moment to create that yourself if you want to follow along i’m just going to paste in the car that uh the literal car that we created in a previous lesson here with the make model and the year property uh set to bmw 745 li and 2010 respectively now there’s actually another technique for creating an object and that’s through the use of what are called constructor functions so let me go ahead now comment this out and then let me go and create a new function and i’m gonna name a car and it’ll have some input parameters uh one for each of the properties that i want to initialize upon the creation of the object that gets returned from this function so make model and year and then we’re simply going to say hey the object that gets returned 
So, previously we saw how to create an object literal using that style of syntax, and you'll note I've already created a file called constructors.js — go ahead and create that yourself if you want to follow along. I'm just going to paste in the literal car we built in a previous lesson, with the make, model and year properties set to BMW, 745li and 2010 respectively. Now, there's actually another technique for creating an object, and that's through the use of what are called constructor functions. Let me comment the literal out and create a new function. I'm going to name it Car, and it will have one input parameter for each of the properties I want to initialize on the object that gets created: make, model and year. Inside, the object that gets returned will have its make property set to the make parameter that was passed in, its model property set to the model parameter, and — you can probably guess this next line of code — its year property set to year. Most importantly, creating an object with this function requires the new keyword: let myCar = new Car, with a capital C. Did you notice I named the function Car with a capital C? I'll explain that in just a moment. IntelliSense tells me the function takes three input parameters, so here we go: 'bmw', '745li', 2010. Let's console.log myCar just to prove there's nothing up my sleeve, run node constructors, and there you go — we get a Car object with the make, model and year properties populated. So what's really going on here? The new keyword creates an empty object. When the function — Car, in this case — is called, it takes that empty object as its this. Remember our discussion about how the this keyword gets bound to the context from which a function is called? Well, that new object becomes the context for this function call, so the empty object starts receiving new properties on lines 8, 9 and 10, with values assigned to those new properties, and what gets returned from the whole thing is an object with make, model and year already set. It's important to remember that the function we defined starting on line 7 is not itself a constructor. If you're coming from another programming language like Java or C# you might be inclined to think in those terms, because that's how they work there, but in JavaScript it's the new keyword in front of any normal function call that makes it a constructor call. It creates a new empty object and passes it as the this to the function you're calling. The new keyword kind of elbows its way into the execution — "pardon me, excuse me, before you execute I need to create an empty object and bind it to this" — and then it lets the function continue doing whatever it was going to do. The function itself could ignore that new empty object, or it could use it, like we have on lines 8, 9 and 10 here.
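To make that concrete, here is a small sketch that mirrors the Car constructor above and spells out, step by step, roughly what new is doing for you. The newLikeCall helper is purely illustrative — it's my own name, not a real API — and it only approximates the built-in behavior.

```js
// A plain function intended to be constructor-called (hence the capital C).
function Car(make, model, year) {
  // `this` is the new empty object that the `new` keyword created and passed in.
  this.make = make;
  this.model = model;
  this.year = year;
}

let myCar = new Car('bmw', '745li', 2010);
console.log(myCar); // Car { make: 'bmw', model: '745li', year: 2010 }

// Roughly what `new Car(...)` does behind the scenes:
//   1. create an empty object linked to Car.prototype
//   2. call Car with `this` bound to that object
//   3. return the object (unless the function explicitly returns its own object)
function newLikeCall(fn, ...args) {              // hypothetical helper, for illustration only
  const obj = Object.create(fn.prototype);       // step 1
  const result = fn.apply(obj, args);            // step 2
  return (result !== null && typeof result === 'object') ? result : obj; // step 3
}

console.log(newLikeCall(Car, 'bmw', '745li', 2010)); // same shape as myCar
```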
Just to prove that, let me comment all that out and create a function called MyFunction that does something simple — console.log('i am a simple function'). Then we'll go var myFunction = new MyFunction(), and console.log typeof myFunction — note the lowercase m on the variable. I had a few things wrong with that line at first, but I caught them before executing it, so that's good. You can see that on line 21 we are creating a new empty object and then calling MyFunction. MyFunction executes, but not before that new empty object is passed to it as this. Now, this is not used in the body of MyFunction, so the empty object just gets returned back into the variable of the same name but with a lowercase m — I probably should have chosen a different name, and if that causes any confusion I apologize; just remember that lowercase myFunction is different from uppercase MyFunction in this particular case. When we take a look at the typeof myFunction, it is object. And at this point, since it's an object, you can't really do anything interesting with it — it's no longer a function, so you can't call it. Let me paste in a little note from my notes: you can't really do anything with this particular object; it's certainly not a function reference anymore; it used the function as a constructor, but the constructor function didn't do anything to populate properties on it. In fact, let's see what happens if we run it: we get an exception — myFunction is not a function — so we really can't do that. Hopefully that makes sense, and for the right reasons. The only thing you can do with what gets returned here is attach properties and methods to that empty object, which is kind of the point of the new keyword entirely. All right, so what about this uppercase-first-letter convention? I said it was a convention — what is this particular convention saying, specifically? You're basically saying: my intent is that this function be called using the new keyword. I am a function, but I should be used as a constructor, so you should only use the new keyword on me, and I'm expecting an empty object to be passed to me so I can set some properties on it, or maybe add some methods, as the case might be. So just keep in mind that in JavaScript, what makes a constructor function has nothing to do with the function declaration itself, but rather with how the function is called: it must be called using the new keyword in order to be a constructor call. In the previous lesson we learned about the built-in native constructor functions that return objects with properties and methods to wrap around the primitive types and give them, essentially, superpowers — new functionality like toUpperCase, toLowerCase, the length property and others that we'll learn about. That's why they're defined with an uppercase S in String, an uppercase N in Number, an uppercase B in Boolean, and so on, and that's why you can explicitly create one of those built-in natives if you use the new keyword, like we demonstrated in the previous video. So hopefully that all makes sense. If nothing else, I hope you're learning that JavaScript is all about functions, and that how you call a function really changes the meaning of the function and what it's intended to be used for — in some cases it even changes the behavior defined inside the function, like with this, or the purpose of the function, like we saw here with the new keyword. All right, you're doing great — let's continue on; see you in the next video. Thanks. So, JavaScript has objects, and we've seen how to create a literal object and how to construct an object using a constructor function and the new keyword, like in the previous video. In some of the most popular programming languages, you create an object using a pattern — or construct — called a class. In other words, you create a class named Car, and then
you create individual instances of the car class as individual separate objects now furthermore you can create specialized versions of one class borrowing all the properties of that parent class in the new child class so you have an original class and you say i want to inherit everything that that original class does in my new class and then you can extend it by adding properties and methods to it to make it a more specialized version of that original or parent class so to kind of extend the analogy here i may have a car class but i want to create a sports car class that extends the definition of just a normal car and it adds on things that make it sporty same thing with a minivan it’s just like a car it has some of the basic principles of a car but a minivan also has like number of passengers and cargo capacity things that make it unique a unique type of car all right and then i can create instances or objects based on that minivan or objects based on the sports car and those objects are both have similarities to a regular car but they have differences as well all right so that’s kind of the notion of of classes and inheritance and classes and inheritance are a foundational concept associated with object-oriented programming not sure if you’ve ever heard the term but it’s a pretty big deal among software developers so you might be asking yourself well first of all does javascript have classes well yes and no i mean in javascript you have objects and you can create an object and dynamically add properties and methods to it whenever you want to but objects are the focus in javascript in languages like c sharp and java and c plus you create a class and you add properties and methods to the class up front and they’re static in so much that they cannot be changed so you can’t be adding properties and method declarations to the object at run time i mean you can but it’s not the original intent of object oriented programming um [Music] they can’t be changed over the lifetime of any objects that are instances of that class so here in in object-oriented programming uh languages base languages like c-sharp and java classes are the focus javascript objects are the focus c-sharp java c plus classes are the focus the latest version of javascript does in fact have the concept of a class but it’s a weird little stop gap measure to help people that are trying to make the mental leap from an old language that they might be familiar with like c-sharp or java into the world of javascript to a dynamic object-based programming model so i talk about javascript classes in one of the upcoming lessons and we’ll get to that soon enough but i guess okay so javascript kind of supports classes kind of doesn’t support classes what about inheritance well here again javascript yes it kind of supports inheritance but not really the kind of inheritance from traditional object-oriented programming so in javascript you have something different called a prototype chain so let’s suppose that you define a literal object like our typical car example that we’ve seen so many times won’t even paste into screen you know what it looks like it has a make model in year property right and so you define this literal object like our typical car example you like the properties and the methods that you’ve already added to that object and you would like to use that car object as the basis for a new car object you’ll probably wind up changing some parts of the object’s definition maybe some new values and a few of the properties you might 
even add some new properties and methods to that new object. I'm going to demonstrate a technique that allows you to construct a new object based on an existing object here in just a moment, but when you do that — when you create a new object based on an old object — something special happens in JavaScript: a permanent link is created between them. The new object always knows where it inherited its original set of properties from, how it got created; it always keeps the link between itself and the prototype that came before it. In other words, the original object serves as a prototype for the new object, and the new object is essentially chained to that prototype from that point on. In languages like C#, Java and C++ — those traditional object-oriented programming languages — you create a class hierarchy where one class inherits from another class, so the relationship is defined between the class definitions; the focus is not on individual objects that happen to be linked to one another, but on that parent–child relationship between classes. Again: in traditional object-oriented programming, the relationship between classes is the focus, whereas in JavaScript it's the relationship between objects and how they're chained together. It's a subtle but important distinction between JavaScript and traditional object-oriented languages. Some people use the term "prototypal inheritance," but I've tried to stay away from the word inheritance when talking about JavaScript, because it might conjure up traditional object-oriented concepts that would mislead you when you're thinking about how it all works here. One of my favorite JavaScript authors, Kyle Simpson, calls this style "objects linked to other objects," or OLOO, and I really like that description. By the way, I'm not sure one way is necessarily better than the other; they're different, with pros and cons depending on what you're trying to accomplish and the problem you're trying to solve. What I do want is a thorough understanding of how objects linking to other objects actually works and what some of the ramifications are, so that's what we'll do in the rest of this video. You can see that I have a new file called prototypes.js, and I'm going to paste in my original car — this looks an awful lot like the object literal we've been creating up to this point. I told you there's a way to create a new object based on an existing object, so let's do that: let newCar = Object.create(originalCar) — capital O in Object. At this point, if we do console.log(newCar.make) and run node prototypes, it appears, at least at first glance, as though we have a new object called newCar and the value of the original car's make property has been copied into the new car's make property — but that's not exactly what's going on here, as we'll see in just a moment.
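Here is a minimal sketch of that linkage, which we're about to dig into. The property values mirror the lesson's car example, and the getPrototypeOf and hasOwnProperty checks preview the ones used next.

```js
// prototypes.js -- objects linked to other objects, in miniature.
let originalCar = { make: 'bmw', model: '745li', year: 2010 };

// newCar gets no properties of its own; it is simply linked to originalCar.
let newCar = Object.create(originalCar);

console.log(newCar.make);                                    // "bmw" -- found via the link, not copied
console.log(Object.getPrototypeOf(newCar) === originalCar);  // true  -- the permanent link

// A property added to the prototype later is immediately visible through the link.
originalCar.doors = 4;
console.log(newCar.doors);                                   // 4

// hasOwnProperty tells you which object actually holds a property.
console.log(originalCar.hasOwnProperty('doors'));            // true
console.log(newCar.hasOwnProperty('doors'));                 // false -- borrowed through the chain
```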
But at this point I have two objects: the original car and the new car, and I could do several things with newCar — I could change the values of its existing properties, I could add new properties or new methods to it, or I could delete existing properties from it. More interestingly, though, I want to revisit something I said earlier about the relationship between the original car and the new car: there's a link between the new object and its prototype, its predecessor, the original car. If we do console.log(Object.getPrototypeOf(newCar)) — tell me who the prototype of this newCar object is — it prints this object right here, where the make is bmw, the model is 745li and the year is 2010; it's pointing at the original car. So let's do this instead: we can get a reference to it — let myPrototype = Object.getPrototypeOf(newCar) — and then console.log(myPrototype.make), and you can see I'm able to get back to the make property of the original car. Now, there's no way to really prove the link yet, because both objects seem to have the same values, so let's push a little further: what happens if I add a property to the prototype — in other words, if I add a doors property, a door count, to the original car? If you remember, all I need to do to add a property to an object is say originalCar.doors = 4. Now console.log(newCar.doors): the new car appears to get this doors property, and it seems like the new property is being copied over, but that's not really true — what we're definitely seeing is the link between the new object and its prototype, the original car. But how do I know whether a property is defined on the new object or on its prototype? Here's what we can do to get to the bottom of this relationship. We start with the original car and ask it: do you have your own property — does this property belong to you, or are you essentially borrowing it from your predecessor? First of all, originalCar.hasOwnProperty('doors') is true — the original car has its own property called doors. Then console.log(newCar.hasOwnProperty('doors')) — and that's false. So, tying this all together and explaining what's really going on: whenever we attempt to get a property or call a method on an object, JavaScript goes through a series of lookups to find the value, or the definition of the method in order to call it. After we created newCar it had none of its own properties, so if we asked it for the value of one of its properties — like we did on line 9 — it would find the prototype that newCar links to and check whether that has a make property. We know newCar does not have its own property called make, but what about its prototype? Yes, the prototype — the original car — has a make property, and it's set to bmw. But once we do newCar.make = 'audi', we are actually creating a make property on newCar itself and setting its value to 'audi'. At that point, what
happens is whenever we come down here and basically call the same essentially same line of code now in line 11 it’s saying hey new card you have a property called make and new car says yes i do now i have my own property called make and it’s set to the value audi all right so no longer do you have to continue and look at my prototype to find the property and its value you could look at me and find the property and its value all right so javascript doesn’t need to look at the prototype chain if the property is created and set on uh the new object that is essentially created from the prototype so if we ask for a property that’s not yet been defined so here we go let’s go here in line number 12. console.log new car dot whatever all right now think about this whatever property does not exist on new car whatever property does not exist on its prototype the original car so what happens next um well then javascript will traverse back and say hey original car what are you linked to and since we defined original car like this we’re linked back to type object actually it is the um the built-in native object function however the whatever property is not defined on that either so now what happens well finally javascript will do one final traversal asking the object built-in native object what its prototype is and by default it will return the primitive undefined so when we get to line number 12 in fact let’s go ahead and comment out just about everything else here i’m going to hit a there and we’ll get rid of all this just so we can kind of see what we’re doing here so at this point what happens we get undefined why because new car doesn’t have a whatever property we look back and the prototype original car doesn’t have whatever property it’s prototype object doesn’t have a whatever property and its prototype is undefined and that gets returned okay that’s the end of the chain so to speak and that my friends is basically how the prototype chain works in javascript you don’t have to use this you probably should know it although you could probably go your whole career and not really have to ever deal with it however this is fundamentally how all your objects work and why you get the undefined type returned when you attempt to access a property value that doesn’t exist so i tried to make this as simple as possible but this is a post-beginner topic in fact i was looking at some tutorials online and i saw that this was actually an advanced topic but if you kind of understand what we’re talking about here think about how far you’ve come in your javascript understanding to get to this point where you can kind of understand what’s going on that’s impressive so i would just recommend that you watch this again you take a look at a few other tutorials online you give it some time to sink in and you’ll probably leapfrog over a bunch of people who are trying to learn javascript but not really pushing themselves past the absolute basics you’re doing great hang in there we’re making great progress and we’re getting close to the end relatively close all right so we’ll see in the next video thanks in the previous lesson i said that javascript doesn’t have classes at least not in the traditional object-oriented programming sense nor did it have inheritance in the traditional object-oriented programming sense i explained how javascript is focused on objects and the linkage between objects that are based on objects we also looked at constructor functions that allow you to construct a new object from a function call but 
this really isn't a class in the traditional object-oriented sense either. But technically, JavaScript does now have classes — or the notion of a class — introduced in a recent version of the language. JavaScript classes give you the impression that you're working with something that resembles traditional object-oriented programming, but in reality nothing has changed: JavaScript remains object-focused, and objects are still linked together through the prototype chain. JavaScript classes are what is termed "syntactic sugar" on top of the existing JavaScript object and prototype models. You'll see that term frequently in software development circles; it's programmer slang meaning that a few keywords and structures were added to the language, but they merely map to existing features — they don't really add new features per se. The syntactic sugar might help those who are transitioning from more traditional programming languages to JavaScript, but JavaScript purists are quick to point out that this class feature may do more harm than good, because at the end of the day you still have to make the mental switch from classic, traditional object-oriented programming to a more object-based, prototype style — and if you're working with prototypes, you have to learn the things we talked about in the previous lesson. Okay, so in a nutshell, what is a class? It's essentially a way to define and create objects. And just as there are function declarations and function expressions, there are now class declarations and class expressions. A declaration looks like class Car { ... }, whereas an expression is more like let Car = class { ... }. Fairly simple and hopefully straightforward: a declaration gets a name; an expression is, well, an expression. JavaScript classes can have a constructor function that gets called automatically whenever you use the new keyword. Let's go with the declaration version and comment out the expression. Inside the declaration we create a constructor function — you have to use the word constructor — and then you can add any input parameters, which essentially map to the properties you're going to add to a new instance of an object based on this class. So here again we use make, model and year, and just like in the constructor functions we learned about a couple of videos ago, you still use the this keyword: this.make = make, this.model = model, this.year = year. You can see the similarities here, right? Then, to create a new instance of an object based on this class, you would do let myCar = new Car, passing in 'bmw', '745li' and the year 2010.
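If the "syntactic sugar" claim sounds abstract, here is a small sketch you can run to see it: a class is still just a function underneath, and instances are still plain objects linked to a prototype. The checks at the end are my own additions, not part of the lesson's file.

```js
class Car {
  constructor(make, model, year) {
    this.make = make;
    this.model = model;
    this.year = year;
  }
}

// The class is really a constructor function under the hood...
console.log(typeof Car);                                      // "function"

// ...and instances are ordinary objects linked to Car.prototype,
// exactly like the constructor-function version from a few lessons back.
let myCar = new Car('bmw', '745li', 2010);
console.log(myCar);                                           // Car { make: 'bmw', model: '745li', year: 2010 }
console.log(Object.getPrototypeOf(myCar) === Car.prototype);  // true
console.log(myCar instanceof Car);                            // true
```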
Okay — so again, I want to make the point that the name of the constructor function in a class definition in JavaScript has to be constructor in order for this to work, and you're still using the new keyword. new still creates an object instance and still passes it to the constructor function — in this case the function's name isn't Car, it's constructor — and you're passing input parameters into that constructor method, then using this, which is the new empty object, to attach properties and initialize their values from the arguments passed into the constructor. Hopefully that all makes sense; it's similar enough to what we've already learned that you can see where things map — again, syntactic sugar on top of what already exists. You can also create methods on the class; in fact, let me do that outside of the constructor method. Here I want to create a print method, and inside it I'll console.log using the template string interpolation syntax: this.make and this.model, and then, formatted with some parentheses, this.year. Now that I have an instance of my Car class called myCar, I can call myCar.print() like so, and when I run node classes you can see I get a nicely formatted version of the information in my Car instance from calling the print method. Now, beyond these basics, you can actually approximate inheritance — at least inheritance in the classic object-oriented programming sense. In our case, let's go down to the very bottom and create class SportsCar, and I have to use the keyword extends: class SportsCar extends Car. Right off the bat, when I create a new instance — let mySportsCar = new SportsCar( — notice that even though I haven't defined a constructor function on SportsCar, IntelliSense still sees that it takes make, model and year. Why is that? Because by extending Car, SportsCar still gets the constructor method defined on Car, so I can still set the make, the model and the year. So let's go 'dodge', 'viper', 2011 — and I don't even know if there's a 2011 Viper, or whether they're still making them, but it doesn't matter. At this point we can even call mySportsCar.print(), and we get that printout, like so. So by extending, we're borrowing the entire definition of the Car class: we're getting the constructor method defined on Car, and we're getting the print method defined on Car. But I can also push beyond the boundaries of Car's definition by adding properties and methods. For example, let's add a quick method — since it's a sports car we'll call it revEngine — which will be a unique printout: console.log of 'vroom goes the' followed by this.model. Now I can call mySportsCar.revEngine() and I get 'vroom goes the viper' — I guess I should have put a space in there. Now, what if I still have my plain car — can I call myCar.revEngine()? Let's see... nope, we can't, because it says revEngine is not a function. Well, it is a function — it's just not defined on the Car class; it's defined on Car's child class, SportsCar, so you can't access revEngine from a plain Car instance.
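Here is that whole class/extends example pulled together into one runnable sketch, with a couple of extra checks at the end showing that it's still the prototype chain doing the work underneath — those checks are my own additions, not from the lesson.

```js
// classes.js -- class, constructor, methods, and extends in one place.
class Car {
  constructor(make, model, year) {
    this.make = make;
    this.model = model;
    this.year = year;
  }
  print() {
    console.log(`${this.make} ${this.model} (${this.year})`);
  }
}

class SportsCar extends Car {
  // No constructor here: the one inherited from Car is used automatically.
  revEngine() {
    console.log(`vroom goes the ${this.model}!`);
  }
}

let myCar = new Car('bmw', '745li', 2010);
let mySportsCar = new SportsCar('dodge', 'viper', 2011);

mySportsCar.print();      // dodge viper (2011) -- inherited from Car
mySportsCar.revEngine();  // vroom goes the viper!
// myCar.revEngine();     // TypeError: myCar.revEngine is not a function

// Still prototypes underneath: revEngine lives on SportsCar.prototype,
// and SportsCar.prototype is itself linked to Car.prototype.
console.log(SportsCar.prototype.hasOwnProperty('revEngine'));              // true
console.log(Car.prototype.hasOwnProperty('revEngine'));                    // false
console.log(Object.getPrototypeOf(SportsCar.prototype) === Car.prototype); // true
```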
And honestly, there's more to it than that — there are some advanced scenarios — but that should be enough to show you what the class keyword and the extends keyword can do and how they operate, and to let you see the rough equivalence between what we did here and what we've done previously. Hopefully you can see the mapping: ultimately, what's going on behind the scenes is that we're creating a sports car object whose prototype is Car, and we're adding on a revEngine method; when we look up the constructor function or the print method, it's still the prototype chain behind the scenes managing all of that — JavaScript is just covering it up a little bit with some different syntax. My personal opinion is that if you're coming from a traditional object-oriented programming environment and you need to become productive very quickly — because you have a looming deadline or whatever the case might be — you might be better off trying something like TypeScript, which was created by Microsoft. It gives you more of the feel of C# and Java than JavaScript will, with a more traditional object-oriented look and feel, and ultimately it transpiles down to pure JavaScript. That's a bit outside the scope of this conversation, but you can go to typescriptlang.org and study up on it a little; it's essentially a superset of JavaScript, meaning anything you do in JavaScript will work in TypeScript, but TypeScript gives you some extra features that make it feel more like Java or C# if that's something you need. Most importantly, if some of what we've talked about here doesn't make sense, don't beat yourself up about it. This is a feature that was added for a specific demographic — people coming from other programming languages — and it may not have been intended for somebody who is just starting to learn JavaScript. So don't feel pressure to go out and learn traditional object-oriented programming before you can understand how to use class; you don't even need to use this. It's a stop-gap measure for people coming from another programming language, and it might not be immediately obvious in what situation you'd prefer it over what we looked at in the previous lessons. At this point, just focus on the fact that these things exist and that they were added for a reason — to help somebody, maybe not you specifically, coming from a different background make the transition to JavaScript. Ultimately, I think the usefulness of a lot of the things we're talking about will reveal itself later, once you start creating real applications with this language. All right, you're doing great — just hang in there, we're making great progress. The fact of the matter is that learning is iterative, and if this is your first attempt at learning any programming language, or at learning JavaScript specifically, no doubt you're going to need to come back to some of the ideas we talk about in the coming days, weeks or months, and you'll continue to
come back to some of these ideas over and over i mean i keep studying and kind of pushing in new directions coming back to studying the basics and then pushing a different direction and you have to do that in order for these ideas to fully sink in over time i mean i’ve been working with javascript almost my entire career and i’m still learning things so it’s just the nature of learning this sort of stuff there’s so many details and there’s only so much time in the day so don’t beat yourself up you’re doing great you’re taking great strides towards understanding javascript so hang in there we’re just kind of entering the home stretch now you’ve come so far just a little bit more and then you can honestly say that you’ve you’ve got a firm foundation of javascript to build on okay a little encouragement to get you over the hump here all right we’ll see in the next video thanks in the most recent version of javascript you can define a function using a shorthand syntax called arrow functions and arrow functions since they are just a shorthand syntax for creating a real function and functions are used everywhere in javascript as you know by now you might not be surprised to hear that there are many different ways that arrow functions are used in javascript and there are many different syntax variations to boot so what i want to do is start simple and i want to look at a few practical applications of arrow functions but we’ll start using them as frequently as possible from this point on and you’ll begin to see them pop up just about everywhere all right so you can see that i have a file called arrow functions.js that i’ve already created and i want to create my very first super simple arrow function so here we go i’m going to create a function called hi i’m going to set it equal to a set of empty parentheses what’s called the fat arrow operator so equal sign in a greater than symbol after it to kind of resemble a fat arrow i guess as opposed to a thin arrow which has absolutely no meaning in javascript this fat arrow then will point to a body defined by an opening closing curly brace and then we’ll do console.log howdy okay so far so good right one line of code an entire function declaration and we can just call hi so here we go let’s go um uh node arrow functions and we get a simple word howdy printed out okay that’s easy enough so let’s comment that out and move on to a slightly more interesting example um we can actually go let hi and inside of the open and closing parentheses we can accept an input parameter so what these really are instead of using you know the keyword function we just get rid of the keyword function but this remains and it allows us to define an input parameter name inside of or after the fat arrow and inside of the body i can go console.log we’ll use our our special backtick character and we’ll go howdy and then dollar sign open close and curly brace name add a few semicolons to the ends of some things here really probably don’t need this one per say now let’s call hi bop howdy bop okay so you can see that all we’re really doing here is just creating again a shortened version of a function and we don’t need the keyword function we just go ahead and start with the opening closing parenthesis to define the area where we can add input parameters the fat arrow points to the body of this of this arrow function and inside of there we can just do whatever we want to do just like we can in any normal function even reference input parameters like we’ve done here all right now up to 
this point, we've just been creating what I call void functions — they don't return anything — but what if we need to use the return keyword? Let's create a different version: add. let add = — here we go — an arrow function that takes two input parameters, a and b, and does something super simple: in the body we use the return keyword and return a + b. Now we can do console.log(add(7, 3)) and we get 10 printed to the screen. Can you see the same basic structure? We're accepting two input parameters separated by a comma, we're still referencing them in the body we've defined with the opening and closing curly braces, and when we use the return keyword the value comes back as the return value of the call; here we pass in numbers, get the value back and print it to the screen. So far, pretty easy stuff. Now, you might be wondering how you could ever use this sort of thing — what its pertinence is — and I think one of the ways I see arrow functions used the most is whenever you need to run a function over each element of an array. So let's use let names = ... — you'll recognize these names once again. This time we're going to call the map method. This happens to be a method defined on the Array built-in native object that we learned about — we'll talk about more of the helper methods built into Array in an upcoming lesson — but map is a pretty cool one, because it lets us iterate through each element of an array, and as it iterates it calls a function for each element. That's a perfect spot to create one of our arrow functions right inline. So in this case we iterate through each element of names, and we create an arrow function that accepts a name, and to marry these two ideas together the body does console.log of 'howdy' and the name. In basically one line of code I was able to map each element of the array to our arrow function: it passes the name into the arrow function, and in the body we simply operate on it just like we were doing previously — so for every single element of our array we get a console.log with our little message. Pretty cool. Let's take this one step further as we continue to build on the idea. Let me grab that line again so I don't have to retype it, and I'm going to say let i = 0 — and I'll grab the log line too — because I want to show that you can actually do a little bit more in a single line: for example, I might increment the value of i and then use i in the body. Now I'm essentially doing two statements inside the body — I'm not saying you'll do this very often, but it's certainly possible. Save it and run it again: now it's 'howdy david 1', 'howdy eddie 2', 'howdy alex 3', 'howdy michael 4'. All right, let's continue to build on this, and now let's use the return keyword while doing the same sort of thing inside map.
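Before we do, here is a consolidated sketch of the arrow-function shapes covered so far — no parameters, one parameter, a returned value, and an arrow function passed to map. The sample names are the ones used throughout the course.

```js
// arrowfunctions.js -- the arrow-function variations seen so far.
const names = ['david', 'eddie', 'alex', 'michael'];

// No parameters, one parameter, and a body that returns a value:
const hi = () => { console.log('howdy!'); };
const greet = (name) => { console.log(`howdy ${name}!`); };
const add = (a, b) => { return a + b; };

hi();                     // howdy!
greet('bob');             // howdy bob!
console.log(add(7, 3));   // 10

// Running an arrow function against every element of an array:
names.map((name) => { console.log(`howdy ${name}!`); });

// The body can hold more than one statement, like the counter example above:
let i = 0;
names.map((name) => { i++; console.log(`howdy ${name} ${i}`); });
```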
So we'll start with names — in fact, let's go let transformed = names.map — and I'm going to borrow some of those same pieces again, except I'm not going to call console.log; I'm just going to return that string, and get everything lined up. So I'm going to take every name in names and return 'howdy' plus the name, and now I'll console.log the entire array that gets returned from this and saved into transformed. Here we go — once I spell 'log' right — and you can see what gets returned: because we're returning a value for every element, they get collected into an array, and each element of that array is the literal string we constructed inside the map call, using an arrow function to do the construction. Those individual names are transformed and saved into a new array: instead of just david, eddie, alex, it's now 'howdy david!', 'howdy eddie!' and so on. So arrow functions are simple to create, and they're just a shorthand version of function expressions. They're really useful whenever you're working with functions on arrays, like this map method that lets us map each element of an array to one of our arrow functions and then execute that function against each element — and we'll see some other examples of this in upcoming lessons. All right, pretty cool stuff; we'll continue in the next video. See you there — thanks. Designing a course can be challenging sometimes, because when you finish a topic there are a number of directions you could go next, and if you have an overarching idea you want to get to eventually, you have to leave some important thoughts to the side and come back to them later. That's really what's going to happen in the next four or five lessons: these are topics that could easily have been covered much earlier in the course, but because I was trying to get somewhere, I left those details until now. So hopefully you don't mind that we're going to circle back and backfill some of the topics we just didn't cover in much depth. The first things I want to talk about are the terms "truthy" and "falsy", which seem to be specific to JavaScript — I haven't seen them used in other programming languages, though maybe I just haven't looked at the right ones. Basically, it has to do with evaluation. When you evaluate an expression — for example in an if statement or in a switch — sometimes it returns an absolute true or false: "one is greater than two" is patently false, and I would expect to get false from that expression. But then there are other things that are not quite as obvious, and there are rules in JavaScript that dictate whether an expression is truthy — in and of itself it doesn't look like it would be true, but by the rules of JavaScript it is — or falsy — it doesn't look like it would be false, but by the rules of JavaScript we're going to treat it as false. If you just don't know what those rules are, you'll evaluate an expression, get a true or a false, and wonder what in the world is going on — why is that true, why is that false? So I want to cover those cases, and I hope you don't mind that I'm just going to copy and paste them right in, because it's a pretty big chunk of code; we'll just look through it. At the top are the things that are falsy — and the quick sketch below shows the same rules as checks you can actually run.
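This little sketch isn't the lesson's pasted file — it just uses the standard Boolean() conversion to show how each of the values we're about to walk through would behave in an if statement.

```js
// Boolean(x) reports how x behaves when it is evaluated in a condition.

// The falsy values:
console.log(Boolean(false));      // false
console.log(Boolean(null));       // false
console.log(Boolean(undefined));  // false
console.log(Boolean(0));          // false
console.log(Boolean(NaN));        // false
console.log(Boolean(''));         // false -- empty string (single or double quotes)

// Pretty much everything else is truthy:
console.log(Boolean(true));       // true
console.log(Boolean({}));         // true -- empty object
console.log(Boolean([]));         // true -- empty array
console.log(Boolean('false'));    // true -- any non-empty string
console.log(Boolean(new Date())); // true -- any object instance
console.log(Boolean(42));         // true -- any non-zero number, integer or float
console.log(Boolean(-Infinity));  // true
```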
Right off the bat, line number one: if you evaluate the expression if (false), that's always going to be false — that's not really "falsy", it's just false. But then there are things like null. In and of itself, looked at completely objectively, null is not true or false; it doesn't really carry a connotation of being either — it's just null, and we'll talk about null later — yet JavaScript considers it a falsy value. Same with undefined: undefined isn't good or bad, true or false, but JavaScript says that if evaluating an expression produces undefined, it's treated as false. The same goes for the number zero: if an expression evaluates to 0, that's a falsy value, and it will behave as false in that expression. Same with NaN, not-a-number, and the same with an empty string, whether you use single quotes or double quotes to define it, as on lines six and seven. Now, everything else is pretty much truthy — honestly, everything that isn't on the falsy list is essentially truthy — but I've given you some examples here. Obviously, just as if (false) is plain false, if (true) is plain true. But things like an empty object: if you evaluate an expression and it returns an empty object, for the purposes of truthy and falsy it's truthy. Same with an empty array. Same with a string that's not empty — we saw that empty strings are falsy, but a string with something in it is truthy. A new instance of an object is truthy, even though there are no properties or methods associated with it — doesn't matter, truthy. Same with any non-zero value, whether it's an integer or a float with values after the decimal point — those are all truthy — as is the JavaScript constant Infinity, whether positive or negative. So, here again: if an evaluation ever produces an odd value that isn't strictly true or false — a null, an undefined, a NaN, a zero, or an empty string — it's falsy, and the evaluation of that expression will come out false; pretty much everything else will come out true. Okay, that's all I needed to say in this video; hopefully it makes sense. We'll see you in the next video — thanks. Continuing the sentiment from the previous lesson, where we're doing a roundup of topics that by all rights could have been discussed earlier in this course, I want to talk about the last of the data types we'll encounter: null. Basically, null represents a variable that points to nothing in a situation where an object reference was expected. Just as a quick reminder, you can create a variable and never set a value on it — never initialize it — and in that case, when we look at either its value or its type, we get undefined: the value of a, because we never set it, is undefined, and typeof a is undefined. But that was just a plain variable; we never set it to a primitive string, number or boolean, or anything of the sort. Now let's suppose you actually are expecting the variable to hold a reference to an object. To borrow a quick example from a previous lesson, let me comment all this out and paste this in: here we have our
regular expression example where we’re going to try and match a pattern x y z and we’re going to use the strings match method passing in the regular expression literal that we created in line five but this time there is no match there is no string x y z in my value variable so what is result set to well let’s see what we get in this case we get result is set to null all right well what is the type of results type of result all right this is going to require a little bit of explanation okay that’s the quirk with null it will actually return object not the primitive type null and that’s a known bug in javascript that will likely never be fixed because too much code on the internet depends on the fact that typeof null equals object is you know it’s it’s basically baked in and grandfathered at this point but by all rights if if javascript had been designed correctly from the start that would be null but hopefully you’ll get the idea there all right um but the interesting thing about getting a null result when we expected an object back is that we can do something like this and i’ll just copy and paste this instead of typing it all in we can check results and say are you you know and we’ll do the strict equality uh strict equality evaluation is result null and if it is then we can say well no no match was found xyz was not found in our value all right and so this can be extremely helpful whenever we’re uh building our applications all right so just to kind of recap null the primitive data type null is not zero it’s not undefined it’s not an empty string it simply means that you have a variable where an object reference was expected but it’s not set to any object reference it’s different than undefined right because undefined says i’m expecting to have a value but one was never set and it was expecting maybe a number string or boolean no no we’re expecting an object reference but we don’t have an object reference at this time uh set to our our variable okay so hopefully that makes sense and let’s continue on see there thanks it’s been quite a few lessons now since we’ve talked about the built-in native functions that return that return objects we saw at that time that there was a date constructor function and that date constructor function will return an object that allows us to work with dates and so i just wanted to take a brief look at what it can do and how you can actually work with date type information using the date object so let’s start off with a very simple case here i’m going to say hey let today equals new date and that will by default give us right now this date and time all right so what i can actually do is actually initialize that date object with a specific date using one of several different formats so i may want to like create a date that represents my birthday so i’m going to create a new date and i’m going to pass in and this is interesting right because if i look at intellisense it has an up and down arrow i can actually use the arrows on my keyboard and it will show me the various versions of constructors that are available with which to initialize the state objects so we could start off with something really simple and just kind of a full day like december 7 1969 and we’ll give it a time even at 7 0 1 23 just guessing at the actual minute and second of my birth i don’t really know exactly i know it was early in the morning that’s all so that’s one way to initialize uh the date but there’s a couple of other ways um and uh let’s just do like let bob equals date and there’s 
an ISO-style string format that looks something like '1969-12-07T07:01:23' — the dash-separated year, month and day, then a T for time, and then the time on a 24-hour clock: 07:01:23. These two create roughly the same date and time. There's also a purely numeric form where you pass the year, the month and the day, followed by the hours, minutes and seconds — with the quirk that the month is zero-based, so December is 11, while the day of the month is not, so the 7th is just 7: new Date(1969, 11, 7, 7, 1, 23). I'm not going to use these other forms, but I wanted to show you that they exist. So now we have today's date and my original date, my birth date, and we can do something interesting like get the time that's elapsed between those two dates, just by saying let elapsedTime = today - bob and console.logging it. What comes back is the number of milliseconds between those two dates, and I could divide that out and calculate the years, months, days, hours, minutes and seconds if I wanted to. So that's one thing I can do: determine the elapsed milliseconds between two dates. You can also get parts of a given date. If I go console.log(bob.getDate()), it returns 7 — and that represents the day of the month, the 7th. If you want the day of the week instead, that's a separate method, getDay(), which returns 0 for Sunday through 6 for Saturday — which seems a little backwards to me, but hey, that's the kind of quirk you get from a language that also says typeof null is object.
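Here is a runnable sketch of the Date features just described; the birthday value mirrors the one used in the lesson, and the getDay/getFullYear calls are my own additions to show the related getters.

```js
// dates.js -- constructing dates and pulling pieces back out.
let today = new Date();                      // right now
let bob = new Date('1969-12-07T07:01:23');   // ISO-style string
let bob2 = new Date(1969, 11, 7, 7, 1, 23);  // numeric form: month is zero-based, so 11 = December

// Subtracting two Date objects gives the elapsed time in milliseconds.
let elapsedTime = today - bob;
console.log(elapsedTime);                    // a big number of milliseconds, depending on when you run it

// Parts of the date:
console.log(bob.getFullYear());              // 1969
console.log(bob.getMonth());                 // 11 (zero-based month)
console.log(bob.getDate());                  // 7  (day of the month, 1-31)
console.log(bob.getDay());                   // 0  (day of the week, 0 = Sunday ... 6 = Saturday)
console.log(bob.getHours());                 // 7
```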
Moving on: console.log(bob.getTime()). This returns the raw timestamp stored in our Date object — the number of milliseconds since January 1, 1970 UTC, which is actually negative for a 1969 date — so on its own it's a little less useful. But you can do other things, and let me just paste a bunch of these in all in one shot: you can get the month, the day, the hours, minutes, seconds and milliseconds, and there are also additional date functions for converting back and forth between UTC — coordinated universal time — and local dates and times. And that's pretty much what you can do with the Date object, so let's continue on in the next video. Thanks. Previously, when learning about built-in natives, I explained how the string primitive is mapped to the String built-in native object, and how, by boxing the string primitive into its equivalent String object, JavaScript supplies us with a rich set of functions. In this lesson I want to demonstrate just a handful of these very useful string methods supplied by the String built-in native object and explain why they're useful. I'm only going to pick the ones that have been useful to me in the past, but there are a bunch more, and I'd recommend you use Bing or your favorite search engine to search for the JavaScript string methods — you'll probably land on the Mozilla Developer site, which gives a full listing of all the methods available on the String object. First of all, we need a few strings to work with, just for fun, so I'm going to create some different ones here. Two are quotes: the first is 'knowledge is power but enthusiasm pulls the switch', and the second is a famous quote from a good friend of ours: 'Do or do not. There is no try.' And finally, a listing of random numbers separated by commas that mean absolutely nothing. One thing I didn't really mention at the time is that you can even call these methods directly on string literals — for example, console.log('bob loves you'.toUpperCase()). Isn't it crazy that you can just put a dot on a string literal and call toUpperCase? Well, you certainly can, so let's see this message in all its glory — the file is called strings.js, and I do in fact love you. All right, let's move on and use a couple of interesting string methods. First, the split method: I'm going to set mySplit equal to the result of calling split on the third string — the one defined on line three with the numbers separated by commas. The split method lets you say: every time you see a comma, split the string there and add the piece between the commas as an element of a new array. console.log(mySplit) gives us an array with each value as a separate element. Pretty cool — you'll do this a lot whenever data comes to you in some string-like format. Next up, we can slice a string: let mySlice = first.slice, giving it a starting index and an ending index to pull one little piece of the string out into its own variable, and then console.log(mySlice). In this first sentence we count to position 13 and then to position 18, and hopefully we'll grab out that word — and we do: we grab out 'power'.
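Here is a quick sketch of the methods covered so far in this lesson. The exact strings in the course file aren't fully shown on screen, so these are reconstructions that produce the results described above.

```js
// strings.js -- a few of the String methods demonstrated so far.
const first = 'knowledge is power but enthusiasm pulls the switch';
const third = '4,8,15,16,23,42';

// Methods can be called directly on a string literal -- the primitive
// is boxed into a temporary String object just for the call.
console.log('bob loves you'.toUpperCase());  // BOB LOVES YOU

// split: break a delimited string into an array.
const mySplit = third.split(',');
console.log(mySplit);                        // [ '4', '8', '15', '16', '23', '42' ]

// slice: pull out a piece using a start index and an end index.
const mySlice = first.slice(13, 18);
console.log(mySlice);                        // power
```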
The same basic idea applies to the next one, substring, which is just a tiny bit different. let mySubstring = first.substring(13, 18) — it also takes a start index and an end index, so console.log(mySubstring) gives us that same value, 'power'. The differences between slice and substring are small: slice understands negative indexes, counting back from the end of the string, while substring treats negative values as 0 and will swap its arguments if the start is bigger than the end. (There's also an older method, substr, which takes a start and a length instead of an end index, but it's considered a legacy feature.) Moving on, we can get back true or false depending on whether our string ends with a given string. So: let myEndsWith = second.endsWith('try.'), using that second string, 'Do or do not. There is no try.' — does that string end with 'try.'? console.log(myEndsWith): true. Great. We can do much the same thing with let myStartsWith = second.startsWith(...) — this is just a way for me to ask: is this the string I was expecting, does it have the values I want? Does it end with this? Does it begin with this? True or false. That one's true as well. We can even ask whether someplace in the string it includes a given substring: is the word 'there' used in that second string? console.log(second.includes('there'))... false — there is no 'there'. How about a capital T, 'There'? Ah, that's true — so includes is case-sensitive. All right, let's comment all those out. Now let's say let myRepeat = 'ha! '.repeat(3), and console.log(myRepeat): I get 'ha! ha! ha!' — the repeat method repeats whatever the string is the number of times you tell it to, and we save that off into its own variable. The last one we'll look at is a way to clean up a string. let myTrim = a string with a bunch of spaces, the word 'bloated' in the middle, and more spaces after it — it's bloated, and I want to clean it up a little. The first time through I'll just console.log(myTrim.length), which gives me the total number of characters in that string. The second time I'm going to do what's called method chaining: myTrim.trim().length — the trim method cleans off all the empty spaces at the very beginning and the very end of the string, and since trim returns a string, I can call the next method or property on the result, because I'm working with the string type again; I'm chaining those calls together to get the before and after. Before I call trim we're looking at 16 total characters; when I call trim and then get the length, there are only 7 characters — which means the word 'bloated' should be exactly seven characters long, and it is. So those are some helpful string methods from the built-in native String constructor function, and we'll do the same thing for arrays in the next video. We'll see you there — thanks. Since we gave string methods the proper treatment, I wanted to do the same for arrays, so we'll do that here. Let me create a couple of arrays: I've got an array called names and an array called others, and I've got an array called lost and an array called fibonacci. So the obvious difference is that here I'm
Since we gave string methods the proper treatment, I wanted to do the same for arrays, so we'll do that here. Let me create a couple of arrays: I've got an array called names and an array called others, and I've got an array called lost and an array called fibonacci. The obvious difference is that the first two hold strings and the last two hold numbers.

So, methods that can be applied to arrays. The first thing we can do is combine two arrays with the concat method: here I take the lost numbers and concat the fibonacci numbers onto them, giving me a combined array. I'll run node array-methods (the name of the file I created), and I get one complete set where you can still see the division between the two source arrays, but now they're all in one array. That seems like it might be helpful at some point; otherwise we'd have to loop through and push the elements of one array into the other, which would be a cumbersome process. You can also do something interesting with join: I can say "join all the elements of the array together and separate them with this string." I'll use a tilde, for no other reason than the fact that we haven't used the tilde yet, and I think now we've used every character on the keyboard at least once. Save and run, and you can see the fibonacci numbers joined into a single string, separated by tildes.

We've already demonstrated push and pop — they add an element to the end of the array or remove the last element — so I won't go back into those, but there are other ways to do that too. Here we take the lost numbers and call the shift method: shift takes one item off the front of the array and returns it to us to print out, but if we then look at the array, we'll see the item was actually removed. It's essentially a pop, except it works on the front end instead of the back end, and when we run it we get exactly that behavior. Then there's unshift, which adds one or more items to the front — essentially a push for the front of the array: lost.unshift(1, 2, 3, 4). Whoops, I typed "list" instead of "lost" the first time; there we go. Now console.log shows the values 1, 2, 3, and 4 as new elements at the front, followed by 8, 15, 16, 23, and 42. Okay, let's comment all these out.

Moving on: console.log the names array, then tell it to reverse, and print that out. Originally the order is David, Eddie, Alex, Michael, but reversed we get Michael, Alex, Eddie, David. Furthermore, we can go console.log(names.sort()) — it's a method, so I have to use the invocation operator, the parentheses — and now we get alphabetical order: Alex, David, Eddie, Michael. Next, let's see how you can find where a given element is in an array by looking up its value with the indexOf method: others.indexOf('Mark'). Counting from zero — 0, 1, 2, 3 — it's at index 3, so now I can go grab it.
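A condensed sketch of those array calls. The contents of names, others, and fibonacci aren't fully shown in the transcript, so these values are illustrative:

```javascript
const names = ['David', 'Eddie', 'Alex', 'Michael'];
const others = ['Tommy', 'Craig', 'Bob', 'Mark']; // placeholder names
const lost = [4, 8, 15, 16, 23, 42];
const fibonacci = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89];

// concat() returns a new array containing both sets of values.
const combined = lost.concat(fibonacci);
console.log(combined);

// join() glues the elements into one string, separated by whatever you pass.
console.log(fibonacci.join('~')); // '1~1~2~3~5~8~13~21~34~55~89'

// shift() removes (and returns) the first element; unshift() adds to the front.
console.log(lost.shift()); // 4
lost.unshift(1, 2, 3, 4);
console.log(lost);         // [ 1, 2, 3, 4, 8, 15, 16, 23, 42 ]

// reverse() and sort() both modify the array in place.
console.log(names.reverse()); // [ 'Michael', 'Alex', 'Eddie', 'David' ]
console.log(names.sort());    // [ 'Alex', 'David', 'Eddie', 'Michael' ]

// indexOf() returns the zero-based position of the first match.
console.log(others.indexOf('Mark')); // 3
```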
How about lastIndexOf? Let's take those combined numbers we put together earlier and look for the last index of the value 1. First let's print combined so we can easily see its current value — whoops, what did I do this time? Ah, I had commented the combined line out; now let's try it. There we go. You can see that our combined variable holds 4, 8, 15, 16, 23, 42, 1, 1, 2, 3, and so on, and lastIndexOf(1) says the last 1 is at index 7: zero, one, two, three, four, five, six, seven — zero-based. That's useful if I'm looking through a large set of data and want to find the last instance of a given value; lastIndexOf gives me the last occurrence, whereas indexOf would give me the first.

Moving on: previously we looked at the map function of an array, and I don't want to belabor that since we've already seen it, but we can do other interesting things too, like creating a filtered list using arrow functions. So filtered equals combined.filter() and I give it an arrow function: for every number — I'll call the parameter x, though I could give it any name — the body says if x is less than or equal to 15, keep it. Effectively, filter returns only the elements that match the expression, so when I console.log(filtered) I should only see numbers less than or equal to 15, and I do get a filtered version of the combined array. Pretty cool, and a good example of why you'd want to use arrow functions.

Similarly, there's an iteration method called forEach: it goes through each element of the array, and inside it I can create an arrow function like we've done in the past — for each element, console.log a string and interpolate in the name that's passed in. We can also do some checks: for example, I can ask whether every one of the values in my array matches a certain condition. So console.log, take that filtered list I just created — it should contain all the values less than or equal to 15 — and ask filtered.every() with an arrow function: is every one of those numbers less than 10?
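Here's a minimal sketch of those four methods, with the combined array written out literally so the results are easy to verify (the exact contents are illustrative):

```javascript
const names = ['David', 'Eddie', 'Alex', 'Michael'];
const combined = [4, 8, 15, 16, 23, 42, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89];

// lastIndexOf() finds the final occurrence of a value.
console.log(combined.lastIndexOf(1)); // 7

// filter() keeps only the elements for which the arrow function returns true.
const filtered = combined.filter(x => x <= 15);
console.log(filtered); // [ 4, 8, 15, 1, 1, 2, 3, 5, 8, 13 ]

// forEach() runs the arrow function once per element.
names.forEach(name => console.log(`Hello, ${name}!`));

// every() asks: does the condition hold for ALL elements?
console.log(filtered.every(num => num < 10)); // false -- 15 and 13 are in there
console.log(filtered.every(num => num < 16)); // true
```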
True or false? False. Why is that? Well, I happen to know there's at least a 15 in there. If we increase the number to something like 16 — are all the numbers less than 16? They'd better be, because otherwise they wouldn't have matched the filter criteria. So that's the every method of an array. Similarly, we can look at some, which tells us whether at least one element of the array matches a condition. Again console.log, and let's use the fibonacci numbers with an arrow function: is at least one of the numbers greater than 50? True. Is at least one of them greater than 100? False — there are no items in the fibonacci sequence we have in our array that are greater than 100. So hopefully, first of all, you can see that there are some very useful helper methods on the Array built-in native, and second, these are more examples of arrow functions used inside those methods. Hopefully that's useful; let's keep going. See you in the next video, thanks.

You have to try really hard to force JavaScript to throw an exception with the code that you write, unless you simply typed the code in incorrectly. Now, I suppose some might consider the fact that JavaScript tries so hard to work with whatever crazy code you offer it a positive thing, but personally I wish JavaScript would throw errors more often. You should never be able to do something as absurd as what I'm about to do here — let me paste in a little code. This makes absolutely no sense: we're going to attempt to multiply seven times undefined divided by "panama". What's the answer to that? Well, JavaScript looks at it and says, "I'm not really sure that's going to come out to be a number, so I'm just going to return NaN — not a number." We can check for NaN and account for it in the logic of our application, but I kind of wish it would just throw an exception. I guess that's not how JavaScript is made to work; it tries to do whatever it can, perhaps because people come from many different programming backgrounds, or because, as a dynamic language, it figures it should accommodate these crazy situations.

But when JavaScript finally does encounter something it cannot work with, we call that an error — sometimes an exception, an exceptional situation, something it just can't handle — and the JavaScript runtime will simply quit at that point. It throws up its hands and says, "I can't do anything with this line of code, and if I can't do anything with this line, I can't do anything with the lines after it; I quit." So when it actually reaches an exception, it completely bails out on any additional code you might have written.
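A quick sketch of the some() calls and the nonsense-math result described above (the fibonacci values are illustrative):

```javascript
const fibonacci = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89];

// some() asks: does the condition hold for AT LEAST ONE element?
console.log(fibonacci.some(num => num > 50));  // true  -- 55 and 89 qualify
console.log(fibonacci.some(num => num > 100)); // false

// Absurd math doesn't throw -- JavaScript just hands back NaN,
// which you can detect with Number.isNaN().
const a = 7 * undefined / 'panama';
console.log(a);               // NaN
console.log(Number.isNaN(a)); // true
```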
When this happens, and when you can identify where in your code these issues are likely to occur, you can and should build some safeguards to ensure they never happen. In this case we might write several lines of code before attempting that first line: if those values were contained in variables, we might make sure a variable is not undefined and that its data type is number and not string or something else, so the calculation could produce a real number to assign to the variable. If we were working with objects, we might want to make sure the property actually exists on the object passed into our function, so we'd ask, "do you have a property with this name, and does it have a value? Okay, we can work with that." So there are safeguards you can build around your code to bolster it — to make it more resilient to the possibility that its inputs were bad and that it might throw or raise an exception.

Other times these things are completely out of your control, and you still need to write your code in a defensive manner. For example, you might request data — maybe JSON data — from a web server that hosts a web API, and depending on what you're requesting and whether the server is functioning correctly at that moment, you may or may not receive the data you expect, which could cause your code to throw an exception. Here again you should code defensively and account for the possibility of an exception, because calling into another resource across the world is a highly risky proposition. An exception and an error — I use those terms interchangeably; in my mind they're the same thing. Whenever a problem arises and an exception is raised by JavaScript, the information about it is boxed into one of those built-in natives we learned about several videos ago — an object created by the Error function (capital E) — and that gives you an opportunity to inspect the exception, or error object, look at the error message, and handle it gracefully. We'll talk about that in this lesson.

You can safeguard the code you suspect, or know, is prone to throwing exceptions by wrapping it in a JavaScript construct called try...catch. Let me comment this out and create some examples, starting with one where I know I can cause an exception. I'll create a function called beforeTryCatch, where we trigger the problem and make no attempt to catch it. Inside, I'll say let this variable obj equal undefined, then act like obj is a real object that should have a property on it: console.log(obj.b). I know obj does not have a property b, and that should trigger an exception — one of the few cases where we can actually force one to happen. If it does, then the line after it should never execute; if the previous line of code throws an exception, you'll never see the next one. One aside: in my string literal I have to escape the single quote in the contraction "you'll," because JavaScript — and in this case Visual Studio Code — doesn't recognize it as an apostrophe; it treats it as the closing quote of my literal. Putting a backslash right in front of it escapes the character so it's treated as an apostrophe instead of a closing single quote.
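A minimal sketch of that beforeTryCatch function as described (the message text is illustrative):

```javascript
function beforeTryCatch() {
  let obj; // undefined -- but we act as if it were a real object
  console.log(obj.b); // throws: Cannot read properties of undefined (reading 'b')
  console.log('You\'ll never see this line.'); // never reached -- note the escaped apostrophe
}

beforeTryCatch(); // the uncaught exception stops execution right here
```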
At any rate, just an aside there. That last console.log should never get executed, because I'm expecting the line before it to throw an exception. So we'll call beforeTryCatch and execute it, and you can see: "cannot read property b of undefined." Perfect. Now let's introduce a different function called afterTryCatch to show how this works. I'll comment out the call to beforeTryCatch, copy those same lines, and wrap them in a try...catch statement. Inside the try I'll attempt to perform those same lines of code. I don't have high hopes for the final line ever running, but I suspect that when we reach the exception on the obj.b line — which we will — I can say, "I caught an exception that was thrown," and I can even inspect the error object; I know it has a message property, so we'll print that out. The key here is that this will not break my application: it can continue to execute, so I'll also console.log "my application is still running." Even when we encounter an exception, we can catch it, handle it, do something, and move on. Whoops, what am I doing wrong here? Oh, I need to actually call afterTryCatch. Now let's run it: "I caught an exception: cannot read property b of undefined," but my application is still running — it did not completely shut down. Perfect.

Now I can add another clause called finally, which runs regardless of whether the try makes it successfully all the way through or the catch has been invoked, so I'll console.log "this will happen no matter what." You'd usually use a finally block to clean up any resources you no longer need; I'm not sure how useful that is in JavaScript personally, but you might find a use for it. And you can see that we hit the catch, but the finally block also executed before continuing with the remainder of the application — a sketch of the whole pattern follows below.

There's a pretty effective software development strategy of throwing custom exceptions from your functions with the intent that those exceptions are caught by the caller. It's a form of communication: if the function succeeds, it should succeed quietly, but if it fails, it throws an exception that is handled by the caller, and the caller decides what to do next. So I'm going to comment out everything we've done so far, and we'll create one more example here.
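Before the new example, here's a minimal sketch of the try/catch/finally pattern just described:

```javascript
function afterTryCatch() {
  try {
    let obj; // still undefined
    console.log(obj.b); // this line throws
    console.log('You\'ll never see this line.');
  } catch (error) {
    // The thrown exception arrives boxed as an Error object with a message property.
    console.log(`I caught an exception: ${error.message}`);
  } finally {
    // Runs whether the try succeeded or the catch was invoked.
    console.log('This will happen no matter what.');
  }
}

afterTryCatch();
console.log('My application is still running.'); // execution continues normally
```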
This time, let's write a function called performCalculation that looks at an object we pass in and says: if that object does not have its own property b — I could compare obj.hasOwnProperty('b') to false, or use the shorthand of putting an exclamation mark right before the expression; that exclamation mark negates it, so if hasOwnProperty returns false, the whole expression is true — then we throw a new Error, giving it a message that describes the problem: "object missing property." Otherwise, if the object does have the property, we continue with the calculation, and I'll just return some value, say six, or whatever the real calculation using obj.b would be.

Now we'll call into that function from a higher-level function, performHigherLevelOperation. In it I declare an obj that I never set to anything, and a value initialized to zero. Then I wrap a try around value = performCalculation(obj). Since the object doesn't have what the function needs, I know it's going to throw; I catch the exception — I know it comes back to me as a boxed built-in native Error — and print it out. Then, to show how this could work: if value is still equal to zero, I know performCalculation didn't work, so I can run my contingency, perhaps some retry logic, whatever I need to do to make my application handle this exception gracefully and then continue with whatever logic makes sense after that. Let's see if this works. I'll call performHigherLevelOperation; I'm not sure exactly what to expect, but I don't want to see any exceptions pop up other than the one I'm throwing and printing myself. In this case I needed one small fix, and then it worked perfectly: we created an object, passed it in, it doesn't have property b, so we throw an error. Remember, this is a strategy: we do some checking, and if an exception happens we handle it; if not, we get the value back. If the value is still zero, we know we hit an exception — or maybe there's some other flag we could use to check whether we got the value we expected — and then in the catch, or perhaps in the finally block if that makes more sense, we can do some work and gracefully recover. The point is simply: "this function didn't get what it needs from you, the caller, so you're going to have to write some logic to figure out what to do next." That's all I'm trying to say there.

So this is a good start toward understanding that you have options when you think about how to safeguard your application against potential exceptions that could shut your code down completely. Ideally you'd think of all the ways your code could possibly fail and mitigate those issues up front, but after you've done a reasonable amount of work performing gated checks like the ones demonstrated here, you can ultimately wrap your code in a try...catch (or try...catch...finally) statement, and furthermore you can throw custom errors from one function to another as a means of communicating failure, allowing the caller to implement some contingency — maybe even retry logic — to ensure the application keeps performing correctly and can recover from exceptional situations.
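A condensed sketch of that throw-and-catch strategy. The function names follow the transcript, but the placeholder calculation and contingency message are assumptions:

```javascript
function performCalculation(obj) {
  // The ! negates the expression: "if obj does NOT have its own property b, throw."
  if (!obj.hasOwnProperty('b')) {
    throw new Error('object missing property b');
  }
  return obj.b + 6; // stand-in for the real calculation using obj.b
}

function performHigherLevelOperation() {
  const obj = {}; // created, but missing property b
  let value = 0;
  try {
    value = performCalculation(obj); // this call throws
  } catch (error) {
    console.log(`Caught: ${error.message}`); // the boxed Error from the callee
  }
  if (value === 0) {
    // The calculation never succeeded -- run contingency or retry logic here.
    console.log('Running contingency logic...');
  }
}

performHigherLevelOperation();
```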
All right, that's all we really wanted to say. We're doing great, almost done — we'll see you in the next video. Thanks.

Up to now I have avoided talking about JavaScript in the specific context of a web browser. I actually re-recorded this entire course from scratch earlier this month, because I originally started talking about JavaScript in a web browser from the very first video, and it became obvious that I was spending so much time fiddling with the HTML and explaining the DOM that I was struggling to talk about JavaScript by itself. So I took a different tack and re-recorded it. This time, as you know, I started with pure, unadulterated JavaScript, and now, here at the very end of the course, I'm talking about JavaScript and how it's used in a web browser, where there are some peculiarities I want to cover. I hope the approach makes sense and worked for you, and even if this isn't how you wanted to learn JavaScript, I hope you can understand and see the rationale behind it.

All right, enough of the preamble. As we start dipping our toe into JavaScript in the web browser, I want to talk about the amazing work a web browser does to turn a request — just typing in an address — into a page we're actually viewing on screen: how it begins to understand the HTML it has downloaded, along with other files like CSS and JavaScript, and all the things it has to consider before ultimately rendering a web page for the end user to see. It's really quite amazing. Let's start at the very beginning. I don't want to walk through the whole process of requesting and resolving an IP address and all that; speaking in broad terms, a request is made from the web browser to a server, and ultimately an HTML file — or rather, a collection of HTML — is downloaded to the browser, along with references to other resources like CSS files and JavaScript files, which begin to be downloaded as well. All of this happens roughly at the same time, along with everything else I'm about to describe. This isn't intended to be deeply technical; I'm really just paraphrasing the general order of events, because I'm not privy to what goes on inside a web browser and I haven't looked at the source code.

At any rate, while the browser is downloading all these resources — it has its HTML, now it's grabbing its CSS, now its JavaScript — it works asynchronously with whatever it already has in memory as it continues to pull resources down. While that's going on, the browser begins to construct an object-based representation of the HTML elements in the page, built out of a series of objects called nodes. It will create a node for a given paragraph, a node for that paragraph's id attribute, a node for its class attribute, a node for the text that belongs to the paragraph — you can see where I'm going with this. Everything gets its own little object instance, and ultimately the browser builds an object graph that represents all of those elements, their attributes, and their text values. Each element node can contain other element nodes as well, so a paragraph could conceivably contain div tags, or,
more likely, the other way around: a div tag contains a paragraph, a header, an unordered list which contains list items, and so on. That's just the nature of HTML, and the object model being constructed in memory has to account for all of those relationships, as well as the attributes and the text values of each element. The final result of all that work is a rich object model that represents the document and that can also be accessed programmatically — we'll talk about that in a little bit.

At some point the browser then considers all the styles it has downloaded, whether embedded in the HTML page itself or delivered through one or more CSS files, and it also has to consider the default styles for elements that are baked right into the web browser. It starts deciding which styles and values override which others, then applies those styles to the various nodes inside this large object graph, and once it has settled on the styles for each individual node, it calculates how much space each one will take up on the page, so it can eventually render them visually for the end user.

The next thing it does is parse through the JavaScript it has been downloading from various files and determine what needs to happen and when. Some code can be executed immediately; some code is attached to events of the various nodes in this object graph that represents the web document. We'll see this in a little bit, and it affects how we write our code and where we place it in the HTML document. Take this collection of nodes, viewed programmatically, plus the entire API of methods and properties we can use to change the nodes that represent our document — modify them, add new nodes, remove nodes, and so on — plus the web browser as a whole and all the functionality it provides, like the ability to manipulate the history, the console window, and any other debugging windows that might be available: all of that together is, in essence, something called the Document Object Model, or the DOM. We'll talk about the DOM in the remainder of this video and the next couple of videos.

Eventually, after taking all those things into consideration, the browser finally renders the page visually to the end user, but its work isn't done at that point. Now it's listening for the user's interaction with the various nodes in the document: the user might click, hover, mouse up, mouse down, or use the keyboard — they can interact with the elements on the page in all sorts of ways — and if the software developer, the web developer, has attached event handlers — functions that should be called in response to those interactions, those events — then the browser has to say, "we have these two functions to call because the user clicked on this button; go ahead and execute them now." When we create those associations, HTML actually gives us a couple of
ways to create them, but we're going to look at some programmatic-only ways to create those associations; we'll talk about that in the next video. At any rate, as developers we can also interact with other APIs that are exposed to us — for example, most web browsers expose the console window so we can write little messages out to it, like we've been doing up to this point.

As JavaScript developers we're primarily interested in the Document Object Model. Again, it contains an object graph that represents every element, the attributes of those elements, the text that might be associated with those elements, and the relationships between elements — one might contain another, or they might be siblings, and so on. Each of these objects is referred to as a node, as I said earlier, and I want to make a quick point: nodes inside the Document Object Model should not be confused with Node.js, the environment we've been using up to this point to execute our little JavaScript examples. They're completely different and have no relationship to each other. At the highest level you have the document node, which contains one or more element nodes; each of those can contain other element nodes, and each element node will probably have some attribute nodes associated with it and maybe a text node as well.

The DOM also includes a rich API — lots of functions we can call in order to access the various nodes, their attributes, their text, and so on. We can find a specific node, or a collection of nodes, that matches our criteria, and once we have a programmatic handle to a node or multiple nodes, the API gives us functions to modify their values: everything from changing a node's text to changing its attributes, like the class associated with that node. We can remove nodes and add nodes, all programmatically. The API also allows us, as I mentioned a moment ago, to associate our functions with events raised by the web browser — usually because the end user triggered them with a mouse-over, a click, whatever the case might be. And finally, the API provides helper functions for various tasks; one that comes to mind is network operations, like being able to call out to another web server to grab data or some other code that can be executed.

Finally, there are several ways, which we'll talk about in this video, to write your JavaScript in a web page or associate your JavaScript with a web page, and if you have professional aspirations, you should be aware that not all of these techniques are smiled upon — in fact, most of them are frowned upon, and there's one that's not. So you might see some examples like this. Here, in the page I created called dom-intro.html, I'll add a button that says "Click me," and on that button I'll add an onclick attribute — I believe we've done something like this already — and write a single line of JavaScript right there in the attribute. Using this technique I'm able to pop up an alert box in the browser, just to execute one simple line of code, something like the sketch below.
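A sketch of that inline-handler version, assuming a file called dom-intro.html and the alert text described next:

```html
<!-- dom-intro.html -- inline onclick handler, shown only to illustrate the technique -->
<!DOCTYPE html>
<html>
  <body>
    <button onclick="alert('The site says hi!');">Click me</button>
  </body>
</html>
```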
and let’s rebuild an explorer and from here i can double click and it should open up my default web browser i click click me and i get a message box an alert that pops up with the little message the site says hi okay and i can also do something a little bit different this might be a little bit more akin to what we’ve been doing consoled console.log and i could do hi in the console all right and let’s just refresh that page so i’ll hit refresh i’m going to hit f12 on the keyboard this will allow me to see the console tab and specifically i want to look at the logs when i click the click me button high in the console okay so hopefully that all makes sense now using this technique uh you’re only gonna be able to write one line of code at a time maybe two you might have to write some you know but there’s no doubt that just keep writing a bunch of statements here inside of this on click event right in line in the button is not a great idea so your other decision is or a choice is to actually add a script tag like so now for reasons that i’ll explain a little later typically you don’t see script tags added there you would probably want to put them at the bottom of the document and the reason is pretty simple that the uh that the web browser is going to look at the code line by line and if it encounters a script tag up here and we reference elements in the body those elements may not have been loaded yet into the document object model if we put the script at the very end we can ensure that any everything above this has been loaded already so we can reference the various uh elements in our html all right or the various nodes in our document object model to say in a more programmer friendly way okay so here’s what we can do instead i can actually create a function let’s call this just a click handler and um you know i could even just add message and here i can just do something like console.log um hi and then [Music] maybe dot dot and then maybe a message like that so now in the on click i can kind of wire this up and say hey call click handler and i’ll just say from the button click event all right so we’ll save all that and with any luck you can see where we make the call to the function we’ve created and then passing in a message which will should display in the console log let’s open up our web browser again let’s refresh this page f5 i’m gonna click the click me button and we get hide dot dot dot from the button click event all right so you might be wondering well wait a second you are calling a function before it is defined in your javascript isn’t that a problem no and this is something i wanted to talk about before but never really got a chance to and this applies whenever we’re executing all of our examples in node function declarations are hoisted to the very kind of top of the execution environment so the javascript compiler will go through and look for all the function declarations it’ll put them at the top it knows where they’re at now and then it will continue to execute any additional code so this is in essence added then to the top of the execution chain so when we by the time we get to the click event handler for this button javascript’s already very aware that this function exists all right so small point there but these techniques of using this on click equal and the script tag in this manner these are generally frowned upon professionally you probably want to do what’s called separate your concerns so your javascript may it might be more appropriate to keep it in its own file 
So let's remove all of those inline references. I'm going to say: don't write JavaScript in your HTML page. Some people might argue with that and say it's perfectly fine — it depends on how much JavaScript there is, on your professional aspirations, and on what the other programmers in your group are doing — but generally speaking, what you want to do is add your code to another file. In this case I'll create a file called dom-intro.js, and we're going to wire up the handler to the button's click event, but we're going to wire it up in our code. To start, we'll create an IIFE, an immediately invoked function expression: remember, we create a function, wrap it in parentheses, and tell it to execute immediately. Inside it we'll define our clickHandler function. Then, over in dom-intro.html, I'll give the button an id of my-button, and — I hope you don't mind — I'll delete all of the inline JavaScript out of the page; it's gone now.

Back in dom-intro.js, the first thing I need is a reference to my button: let myButton equal document.getElementById() with the id I gave it, my-button. Next, myButton.addEventListener(): I say which event I want to listen to — in this case the click event — and what I want called when it happens, clickHandler; and I figured I could pass in a message at this point too, "hi from my IIFE." Then back in dom-intro.html I add a script tag: script type="text/javascript" src="dom-intro.js". Save, load the page, refresh — and you can see I did something incorrect. We don't want to call the method when we wire it up; we just want to hand addEventListener a reference to clickHandler and say, "whenever the click event happens, execute this." With that in place, refresh, click "Click me," and there we go — except now we get "PointerEvent" logged, which is not exactly what I was looking for, because the browser passes the event object into the handler. Since it's important to me to pass in the message (which I honestly forgot about, sorry), I'll wrap the call inside a function expression: the listener becomes a small function whose body calls clickHandler('hi from the IIFE'). Save it, try again, and that is exactly what I wanted to happen.

So we had a lot to say in this video about the DOM and about how to attach your JavaScript to a web page while still accessing the various elements of the Document Object Model — using helper methods like document.getElementById, passing in an id, getting back a programmatic reference, and then using that reference to add (or even remove) event handlers; in this case I'm adding a function expression that makes a call into another function I created earlier. Hopefully all of that makes sense, and we can continue on and expand on this in the next lesson. We'll see you there, thanks.
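A condensed sketch of the external-file version just described, assuming the HTML contains `<button id="my-button">Click me</button>` and `<script type="text/javascript" src="dom-intro.js"></script>` at the bottom of the body:

```javascript
// dom-intro.js
(function () {
  // The handler we ultimately want to run.
  function clickHandler(message) {
    console.log('hi... ' + message);
  }

  // Grab a programmatic reference to the button node in the DOM.
  const myButton = document.getElementById('my-button');

  // Pass a function reference -- wrapping the call in a function expression
  // lets us supply our own message instead of logging the PointerEvent object.
  myButton.addEventListener('click', function () {
    clickHandler('from the IIFE');
  });
})(); // the surrounding IIFE runs as soon as the file loads
```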
In this video we're going to talk about working with the DOM — specifically, how to access DOM nodes, how to change attributes of those nodes, how to add nodes dynamically, and more. In the past I've created three or four lines of code and then we'd look at what those lines did; I wanted to change tactics for this example and show you something a little more interesting, a little more compelling. So I've already created dom-nodes.html, dom-nodes.css, and dom-nodes.js, and you can see that I've basically built a little playground. It does nothing of practical value — it's completely contrived — but it will show you how we can manipulate various DOM nodes and their attributes, and you might find the practical side of "how did I accomplish that" useful as you pick apart the program, which we'll walk through in just a moment.

It has a "Click me" button, a series of div tags each showing a color and the name of that color, and a number beneath that. I'll click "Click me" two or three times, and you can see several things happening at once. Every time the button is clicked, the selected color div changes — you can see the selection move because a thick bottom border is applied to that particular div. Furthermore, that color is applied to the number, and the number grows each time we click the button. What happens when we get to the end of the list of colors — here we're at pumpkin — and click once more? We start back over at the beginning of the list. I can just keep clicking, and the number keeps growing; I'm sizing the font in relative CSS units (rems), using the number of times the button has been clicked as the number of units, so things are changing and very dynamic. It's a large enough application to be interesting and small enough that I think you can pick it apart and understand what's going on — that's really the intent here.

Let's take a look at the source code itself, starting with the HTML. There's really not much interesting here: I'm pulling a font from Google Fonts and applying a stylesheet. The stylesheet itself isn't particularly interesting either, and I don't want to take too much time on it — it just makes everything look a little nicer than our previous examples. Here is the result container, the white area with everything else inside it: the button, the row of divs, and the number. You can see that each of these has an id applied: the "Click me" button is my button, the color div will contain a series of child divs (I'll talk more about that in a moment), and the result div is where we put the current number of clicks, which continues to grow and grow. At the bottom we have our script reference to dom-nodes.js — that's where all the magic happens. You can see that I've created an IIFE here (I can collapse it using the little plus/minus next to its first line of code), and inside it a few functions: one called incrementCounter, one called updateUI, and one called handleClick. You can also see that I've initialized a
variable called counter to the value of zero. Then I get a reference to the button and wire it up, like we learned in the previous video: every time the click event is raised for my button, the event listener — a function expression — executes both the incrementCounter function we saw at the very top and the updateUI function right below it, which is where a lot of the magic happens, so we'll look at that in more detail in just a moment. Then, at the bottom, I execute updateUI once as the page loads: because this is an immediately invoked function expression, that happens as soon as the file is loaded into our HTML by the web browser.

Okay, so let's go to the top. incrementCounter is very simple: it takes the current value of counter and increments it by one. updateUI is where the real work happens. First of all, we start with an array of color objects; each color object has a name — the name you saw printed near the top middle of each div tag — as well as the color value itself. I grabbed these from a website that lists colors; I'll give you the site in dom-nodes.css. The first thing we do is grab the result div: we use its id to get a reference to that element so we can work with it programmatically. This is the div that will contain the current number of clicks; not only will we display the incremented click count, we'll also change its size and its color.

Next, we set the innerText property of that result element — that's how we're able to put 1, 2, 3, 4 inside the div tag at the bottom of the white section of the page. So this is one way of getting a reference to a DOM node and changing one of its properties: whatever text was there, we override it with the current value of counter. Then we additionally access the style object of that element. The style object has a whole series of properties — I can hit the period on the keyboard and look through the IntelliSense — and we can change any of them; most are visual in nature. Here I just want to change the size of the font, taking the current value of counter and appending "em" to make the text larger each time, em being a unit of measure in CSS. That's how the number of clicks displayed in that div grows, and how its size grows, every time we click.

Next up, we need to determine the current color from our array of colors. We take the current value of counter and use the modulo (remainder) operator: if the button has been clicked six times and there are six elements in the colors array we defined as a const at the top, then the modulo is zero — zero remainder — so we'd access the first element of the array.
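A minimal sketch of that part of updateUI. The element id (result-div), the names resultDiv and colors, and the hex values are assumptions based on the description, not the author's exact source:

```javascript
// Assumed markup: <div id="result-div"></div>
const colors = [
  { name: 'alizarin', value: '#e74c3c' },
  { name: 'emerald',  value: '#2ecc71' },
  { name: 'pumpkin',  value: '#d35400' }
];

let counter = 3; // pretend the button has been clicked three times

const resultDiv = document.getElementById('result-div');

// Overwrite the text with the click count, grow the font, and color it,
// using the modulo trick to wrap around the colors array.
resultDiv.innerText = counter;
resultDiv.style.fontSize = counter + 'em';
resultDiv.style.color = colors[counter % colors.length].value;
```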
And that first color is alizarin, the coral one. So that's how I'm able to select each of the items: I take the current counter and use the remainder. To check the arithmetic: if the counter were two and there are six colors, 2 % 6 gives 2, so we'd grab the element at index 2 of the colors array; we then take that object's value property and assign it to the result element's style.color.

Moving on, the next step is to clear out all the existing child color divs. I basically tear down and recreate the list of colors: first I remove everything left over from the previous call to updateUI by setting the color div's innerHTML to an empty string, and then I rebuild it. I loop through each of the color objects and dynamically create a new div tag for each one: I create a div element, then create a text node containing the color's name — whichever object we're on, I grab its name and make a text node — and append that text node as a child of the div I just created. I style the div up, and at the very bottom I append it as a child to the larger, outermost color div. I do that six times, and if a given color is the currently selected one, I change that node's styling by adding a class, the selected class, which is what adds the bottom border — in fact, if I find "selected" in the CSS, there it is: just a bottom border of five pixels with no padding.

Throughout all of this I'm accessing the style object of the given div tag and setting its width, height, and other properties like float, padding-left, and padding-top. I could have created a CSS class and applied it using the same technique I used for "selected," but I chose to demonstrate that we can get at all of those style properties directly. Beyond the style object, there are other things we can do, like appending children: we have a div and we append something to it, and that div is itself appended as a child to another div, which is already a child of the body. Hopefully this helps you see how the whole process works: we can get at any DOM node, modify it, add new child nodes to it — creating them essentially out of thin air — and we could even move nodes around if we really wanted to, and so on. Obviously this example didn't call for that, but once we have a handle to an element we can do anything with it that we can conceive of. There are so many options that it didn't make sense to go through them as a laundry list; it's really a matter of imagining what you want to accomplish, getting a reference to the element you need to start with, deciding whether you need to create a new element and append it or remove elements that are currently children of an existing node, and choosing which attributes of that node you want to change — a condensed sketch of the rebuild step follows below.
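Here's that rebuild step as a sketch, again with an assumed container id (color-div) and an assumed colors array; the width, height, and padding values are placeholders rather than the author's actual CSS:

```javascript
// Assumed markup: <div id="color-div"></div>
const colors = [
  { name: 'alizarin', value: '#e74c3c' },
  { name: 'emerald',  value: '#2ecc71' },
  { name: 'pumpkin',  value: '#d35400' }
];

let counter = 3; // pretend the button has been clicked three times

const colorDiv = document.getElementById('color-div');
colorDiv.innerHTML = ''; // tear down whatever the previous updateUI call built

colors.forEach((color, index) => {
  // Build a brand-new div with a text node holding the color's name.
  const swatch = document.createElement('div');
  swatch.appendChild(document.createTextNode(color.name));

  // Style it directly through the style object (placeholder values).
  swatch.style.width = '100px';
  swatch.style.height = '60px';
  swatch.style.cssFloat = 'left';
  swatch.style.backgroundColor = color.value;

  // Mark the currently selected color with the CSS class that draws
  // the thick bottom border.
  if (index === counter % colors.length) {
    swatch.classList.add('selected');
  }

  // Attach the new node to the outer container -- now it's part of the DOM.
  colorDiv.appendChild(swatch);
});
```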
It took me several minutes to build this example, and it started really simply: can I just increment the current number of clicks? Once I got that working, I moved on to the next thing: can I create a bunch of colors and have them applied to the number? Can I create a bunch of divs and apply those colors to the divs? You just keep working at it little by little until you've tackled essentially the whole application, and that's how I built this. Hopefully that was helpful as a larger example we could dissect to better understand how to accomplish something we conceive of by working with nodes inside the DOM. So let's continue on — we're pretty much done — and wrap it up in the next video. See you there, thanks.

I just want to briefly congratulate you on finishing this course. That's quite an accomplishment, and that's awesome; I definitely respect anybody who puts in the time to learn a new technology or a new skill, so congratulations, and I wish you the absolute best. I sincerely hope this course was helpful to you in some way, that you came away with some confidence in JavaScript, and that you now have a solid foundation to build on. I'd strongly encourage you to keep pushing forward. In fact, modern development with JavaScript will require that you learn some of the most popular tools and libraries currently in vogue in the JavaScript software development community, as well as the build and deployment process for JavaScript applications. I hesitate to recommend specific libraries and frameworks, especially on the client side, because things change so rapidly in that space, but I think you'd be safe — at least if you're watching this within a couple of years of when I recorded it — getting started with something like Vue.js, or React.js by Facebook. If you're going into a corporate software development environment at a big company, you may want to look at AngularJS or Angular — I think the current version as I record this is version 5, and they aim to release a new version every six months. You'll probably also need to learn a little about packages in JavaScript, using npm or Yarn (another tool from Facebook), and a bit about webpack and Parcel. But again, honestly, I feel a little silly recommending anything, because a couple of years from now the JavaScript development community will likely have moved past some of these; still, you know enough now to follow along in those kinds of discussions and stay abreast of JavaScript's frequent and fickle library preferences du jour. On the server side, I highly recommend that you learn more about Node.js, and if you want to use Node to create websites and web APIs, you may want to learn another framework called Express.js, which sits on top of Node and makes it easy to build entire websites on the server side.

Quickly, I just want to give another plug to my own website, Developer University — let me type it in for you: http://www.devu.com. I'm learning new things every day, and when I do learn them, I try to
share them on my website, so definitely come check it out, and check back every so often. Finally, a quick thanks to Microsoft Virtual Academy — you guys are awesome — and a quick thanks to you, the viewer, for watching this and staying with it through the entire course. As we close, I just want to say that I sincerely and truly wish you the best. I hope you can leverage this course and do something really awesome, and if you do, let me know about it. Good luck.