
The DevOps Dojo

By: Johan Abildskov

About this audio content

The DevOps Dojo is an educational podcast focused on DevOps and making the world of building software a little better. Each episode covers a principle, practice or common DevOps fable. Join the Dojo to expand your software development horizons!

Copyright 2020 All rights reserved.
    Episodes
    • Digital Transformation with the Three Ways of DevOps
      Sep 25 2020
      The three ways of DevOps come from The Phoenix Project, a famous book in DevOps circles. This episode covers how to use the three ways to progress in your digital transformation initiatives.

      Sources

      https://www.businessinsider.com/how-changing-one-habit-quintupled-alcoas-income-2014-4?r=US&IR=T
      https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592
      https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations-ebook/dp/B01M9ASFQ3/ref=sr_1_1?crid=316RJMM06NH59&dchild=1&keywords=the+devops+handbook&qid=1600774333&s=books&sprefix=The+devops+h%2Cstripbooks-intl-ship%2C235&sr=1-1

      Transcript

      My first introduction to the principles behind DevOps came from reading The Phoenix Project by Gene Kim, Kevin Behr and George Spafford. In this seminal book, which blew my mind, we follow Bill as he transforms Parts Unlimited by salvaging The Phoenix Project, an IT project that went so wrong it could almost have been a project in the public sector. Through Bill's journey to DevOps, we discover and experience the Three Ways of DevOps. In this episode, I cover the three ways of DevOps and how they can be applied in a transformation. This is the DevOps Dojo #6, I am Johan Abildskov, join me in the dojo to learn.

      In the DevOps world, few books have had the impact of The Phoenix Project. If you have not read it yet, it has my wholehearted recommendation. It is tragically comic in its recognizability, and frustratingly true. In it, we experience the three ways of DevOps: the principles of flow, the principles of feedback, and the principles of continuous learning. While each of these areas supports the others, and there is some overlap, we can also use them as a rough roadmap towards DevOps capabilities. The First Way, Flow, addresses our ability to execute. The Second Way, Feedback, concerns our ability to build quality in and notice defects early. The Third Way, Continuous Learning, focuses on pushing our organizations to ever higher peaks through experimentation.

      The first way of DevOps is called the principles of flow. The foundational realization of the first way is that we need to consider the full flow from ideation until we provide value to the customer. This also clashes with the chronic conflict of DevOps: siloed Dev and Ops teams. It doesn't matter whether you feel you did your part or not, as long as we, the collective, are not providing value to end users. If you feel you are waiting a lot, try to pick up adjacent skills so you can help where needed. We also focus on not passing defects downstream, and on automating the delivery mechanisms so that we have a quick delivery pipeline. Using Kanban boards or similar to visualize how work flows through our organization can help make the intangible work we do visible. A small action with high leverage is WIP limits: simply limiting the number of concurrent tasks that can move through the system at any point in time can have a massive impact. Another valuable exercise is a Value Stream Map, where you look at the flow from the aha moment to the ka-ching moment. This can be a learning experience for all the members involved, as well as for the organization around them. Having looked at the full end-to-end flow and optimized it, we can move on to the second way of DevOps.
      The second way of DevOps is the principles of feedback. The first way of DevOps enables us to act on information, so the second way focuses on generating that information through feedback loops, and on shortening those feedback loops so that we can act on learnings while it is cheapest and has the highest impact. Activities in the second way can be shifting left on security by adding vulnerability scans to our pipelines, as sketched below. It can be decomposing our test suites so that we get the most valuable feedback as soon as possible. We can also invite QA, InfoSec and other specialist competences into our cycles early, to help architect for their requirements, making manual approvals and reviews less likely to reject a change. Design systems are a powerful way to shift left, as we can provide development teams with template projects, pipelines and deployments that adhere to the desired guidelines. This enables autonomous teams to be compliant by default.

      The second way is also about embedding knowledge where it is needed. This is a special case of shortening feedback loops. It can be subject matter expert knowledge embedded in full stack teams, but it can also be transparency into downstream processes, to better allow teams to predict the outcomes of review and compliance activities. A fantastic way of shifting left on code reviews, and of improving knowledge sharing in the team, is Mob Programming: solving problems together as a team on a single computer. We can even invite people external to the team to our sessions to help knowledge sharing, and to draw on architects or other key knowledge banks. Now that we have ...
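      A minimal sketch of such a shift-left pipeline step, assuming the open-source scanner Trivy is installed on the build agent; the image name myapp:latest is hypothetical, not from the episode:

        # Scan a container image for known vulnerabilities; the non-zero exit
        # code on HIGH/CRITICAL findings makes the pipeline fail early.
        trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest

      Failing the build at this step, rather than in a late manual review, is exactly the kind of shortened feedback loop the second way describes.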
      7 min
    • Site Reliability Engineering
      Jun 14 2020

      Site Reliability Engineering, or SRE, is the next big thing after DevOps. In this episode I cover what SRE is, some of its principles, and some of the challenges along the way.

      Sources

      https://www.oreilly.com/content/site-reliability-engineering-sre-a-simple-overview/

      7 min
    • Containers
      Jun 7 2020
      Containers are all the rage, and they contribute to all sorts of positive outcomes. In this episode, I cover the basics of containerization.

      Sources

      Containers will not fix your broken Culture
      docker.io

      Transcript

      Containers. If one single technology could represent the union of Dev and Ops, it would be containers. In 1995, Sun Microsystems told us that using Java we could write once and run anywhere. Containers are the modern, and arguably in this respect more successful, way to go about this portability. Brought to the mainstream by Docker, containers promise us the blessed land of immutability, portability and ease of use. Containers can serve as a breaker of silos, or as the handoff mechanism between traditional Dev and Ops. This is the DevOps Dojo Episode #4, I'm Johan Abildskov, join me in the dojo to learn.

      As with anything, containers came to solve problems in software development. The problems containers solve are around the deployment and operability of applications or services in traditional siloed Dev and Ops organizations. On the development side of things, deployment was, and is, most commonly postponed to the final stages of a project. Software is perhaps only run on the developer's own computer. This can lead to all sorts of problems. The architecture might not be compatible with the environments that we deploy the software into. We might not have covered security and operability issues, because we are still working in a sandbox environment. We have not gotten feedback from those who operate applications on how we can enable monitoring and lifecycle management of our applications. And thus, we might have created a lot of value, but be completely unable to deliver it.

      On the operations side of things, we struggle with things such as implicit dependencies. The application runs perfectly fine on the staging servers, or on the developer's PC, but when we receive it, it is broken. This could be because the versions of the operating systems don't match, because there are different versions of tooling, or even because of something as simple as an environment variable or a file being present. Different applications can also have different dependencies on operating systems and libraries. This makes it difficult to utilize hardware in a cost-efficient way. Operations commonly serve many teams, and there might be many different frameworks, languages, and delivery mechanisms. Some teams might come with a jar file and no instructions, while others bring thousands of lines of bash. In both camps, there can be problems with testing happening on something other than the thing we end up deploying.

      Containers can remedy most of these pains. As with physical containers, it does not matter what we stick into them; we will still be able to stack them high and ship them across the oceans. In the case of Docker, we create a so-called Dockerfile (sketched below) that describes what goes into our container. This typically starts at the operating system level, or from some framework like nodejs. Then we can add additional configuration and dependencies, install our application, and define how it is run and what it exposes. This means that we can update our infrastructure and our applications independently. It also means that we can update our applications independently of each other. If we want to move to a new PHP version, it doesn't have to be everyone at the same time, but rather product by product, fitting it into their respective timelines. This can of course lead to a diverse landscape of diverging versions, which is not a good thing.
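      As an illustration, a minimal Dockerfile for a nodejs service might look like the sketch below; the base image tag, port and file names are hypothetical, not taken from the episode:

        # Start from a framework-level base image.
        FROM node:14

        # Work in a dedicated directory inside the image.
        WORKDIR /app

        # Install dependencies first, so this layer can be cached.
        COPY package.json package-lock.json ./
        RUN npm install --production

        # Add the application source itself.
        COPY . .

        # Document the port the service exposes.
        EXPOSE 8080

        # Define how the application is run.
        CMD ["node", "server.js"]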
      With great power comes great responsibility. The Dockerfile can be treated like source code and versioned together with our application source. The Dockerfile is then compiled into a container image that can be run locally or distributed for deployment. This image can be shared through private or public registries. Because many people and organizations create and publish these container images, it has become easy to take tooling for a test run. We can run a single command (see the sketch below), and then we have a configured Jenkins, Jira or whatever instance running, that we can throw away when we are done with it. This leads to faster and safer experimentation.

      The beautiful thing is that this container image then becomes our build artifact, and we can test this image extensively, and deploy it to different environments to poke and prod it. And it is the same artifact that we test and deploy. The container that we run can be traced to an image, which can be traced to a Dockerfile at a specific Git SHA. That is a lot of traceability. Because we have now pushed some of the deployment responsibility to the developers, we have an easier time architecting for deployment. Our local environments look more like production environments, which should remove some surprises from the process of releasing our software, leading to better outcomes and happier employees. Some of you might think, isn't this just virtual ...
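      To make that concrete, here is a hedged sketch of the commands involved; jenkins/jenkins:lts is a real public image, while the myapp name and tagging scheme are hypothetical:

        # Compile the Dockerfile into an image, tagged with the current
        # Git commit so the artifact can be traced back to its source.
        docker build -t myapp:$(git rev-parse --short HEAD) .

        # Run a throwaway Jenkins instance with a single command,
        # mapping its web UI to localhost:8080; --rm cleans it up on exit.
        docker run --rm -p 8080:8080 jenkins/jenkins:lts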
      7 min