Shortcut to seniority

The term DevOps comes from Development and Operations, two teams that are brought together to ensure that customers are happy. The development team is responsible for implementing new features as fast as possible, while the operations team is responsible for making sure the software remains stable.
The operations team is reluctant to deploy new features too fast and too frequently, because each feature is a change and each change can destabilize the system, while the development team tries to push as much as possible in the least amount of time.
Therefore:
DevOps practices were designed to break down the division between the development and operations teams.
DevOps is a combination of development and operations, and its responsibilities include anything that needs to be done to get the code built, tested, deployed, and running in production. A DevOps representative is in charge of everything that happens to the application from the moment the code has been committed to the source code repository.
As a DevOps engineer, you should understand / know:
At the beginning of the Agile era, most teams were made up only of developers. At that point, the development and testing teams were separated, which was inefficient.
Before DevOps, this is how a typical scenario unfolded when the team was supposed to deliver a new feature:
The development team finishes the implementation of the feature and sends the release to the QA (testing) team.
The testers' goal is to find as many bugs as they can. When they finish, they report their findings to the development team.
The developer(s) blame the tester(s) for testing on the wrong environment or for checking invalid use cases, while the tester(s) blame the developer(s), claiming that the code is problematic.
Once the findings are resolved, the QA team sends the release to the Ops team.
The Ops team's goal is to limit the number of changes made to the system.
When things crash, they blame the developer(s).
The developer(s) say that the software was validated by the QA team.
... And so on.
DevOps encourages communication, collaboration, integration, and automation between developers, testing teams, and IT operations, in order to improve both the speed and the quality of the software.
DevOps focuses on establishing a collaborative culture and improving efficiency through automation with DevOps tools.
To achieve a successful DevOps culture, the development and operations teams must work together, share responsibility for maintaining the system that runs the software, and prepare the software to run on that system, with increased quality and delivery automation.
Many of these values are actually Agile values, as DevOps is essentially an extension of Agile.
DevOps adds the operations mindset to an Agile team, and its success is measured in terms of working software in the customer's hands.
DevOps tools cover configuration management, test and build systems, application deployment, version control, and monitoring. Continuous integration, continuous delivery, and continuous deployment require different tools. While all three practices can use the same tools, you will need more tools as you progress through the delivery chain.
Software deliverables are split into short development cycles (sprints), implemented, and then delivered to the Ops side as soon as possible. DevOps tools such as Git and SVN are used for versioning, and tools like Ant, Maven, or Gradle build the code into a binary that can be sent to the QA team for further testing.
Once the build is delivered to the QA team, they start testing it for bugs.
Automated tests are part of the continuous testing phase of the DevOps culture. These tests are created with the help of DevOps tools (such as Selenium, JUnit, etc.) and they are run to ensure that there are no flaws in functionality.
These tests, in combination with the manual testing performed by the QA team, ensure that the quality level of the software is high enough for it to be released.
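For illustration, here is a minimal sketch of the kind of automated check that could run in this phase, written with Python's unittest module; the apply_discount function is a hypothetical example of business logic under test:

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical business function, used only for illustration."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Automated checks that a CI server could run on every commit."""

    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```

In a real pipeline, a suite like this runs automatically on every build, so a failing check stops the release before it ever reaches the QA team.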
In this phase, the code that contains the latest additions is integrated with the existing code.
After the new code is merged with the existing code base, tools such as Jenkins ensure that there are no errors in the runtime environment.
Continuous delivery is an extension of continuous integration: it means we can provide scripts and other automation and testing mechanisms to be executed at this step, ensuring that we deliver according to best-practice standards.
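One such automation mechanism could be a small smoke-check script, run right after a release candidate is deployed to a staging environment. This is only a sketch; the health URL below is a placeholder and the exact checks depend on your application:

```python
import sys
import urllib.request

# Placeholder staging endpoint; a real pipeline would inject this value.
HEALTH_URL = "https://staging.example.com/health"


def smoke_check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the freshly deployed release answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError as error:
        print(f"Smoke check failed: {error}")
        return False


if __name__ == "__main__":
    # A non-zero exit code tells the delivery pipeline to stop the release.
    sys.exit(0 if smoke_check(HEALTH_URL) else 1)
```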
Continuous deployment is the most advanced evolution of continuous delivery.
In this phase, the code is deployed to the production environment.
Configuration management plays an important role here, ensuring that these tasks are executed quickly and frequently.
This phase aims to improve the quality of the software by monitoring its performance and scanning user activity for any improper behavior of the system.
Dedicated DevOps tools such as Splunk, ELK Stack, and Nagios will continuously monitor the application performance and highlight issues.
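At a much smaller scale than Splunk or the ELK Stack, the sketch below illustrates the underlying idea of continuous monitoring: periodically scan the application's output for error signals and raise an alert. The log path and threshold are placeholders:

```python
from pathlib import Path

# Placeholder values; real monitoring tools ship logs to a central system.
LOG_FILE = Path("/var/log/myapp/app.log")
ERROR_THRESHOLD = 10


def count_errors(log_file: Path) -> int:
    """Count lines that look like errors in the application log."""
    if not log_file.exists():
        return 0
    with log_file.open(encoding="utf-8", errors="replace") as handle:
        return sum(1 for line in handle if "ERROR" in line)


def main() -> None:
    errors = count_errors(LOG_FILE)
    if errors > ERROR_THRESHOLD:
        # A real setup would page someone or open an incident instead.
        print(f"ALERT: {errors} errors found in {LOG_FILE}")
    else:
        print(f"OK: {errors} errors found in {LOG_FILE}")


if __name__ == "__main__":
    main()
```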
A source code repository is the place where developers check in their code, and it is a major component of continuous integration. The repository manages the versions of the code that is checked in, so the project has access to the code itself, can track the changes made to files, and so on.
Popular source code repository tools are Git, Subversion, Cloudforce, Bitbucket and TFS.
The build server is an automation tool that compiles the code in the source code repository into an executable code base. Popular build servers are Jenkins, SonarQube, and Artifactory.
Virtual infrastructures are provided by cloud vendors that sell infrastructure as a service (IaaS) or platform as a service (PaaS). These infrastructures expose APIs that allow you to create new machines using configuration management tools.
Examples of virtual infrastructures are Amazon Web Services (AWS) and Microsoft Azure.
There’s also a possibility to have a private cloud (such as VMware’s vCloud), which enables you to run a cloud on top of the hardware in your data center.
In combination with automation tools, you can test your implementation very fast: automatically send the code to the cloud infrastructure, build the environment, and then run all the tests without any human intervention.
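As a sketch of what such an API call can look like, the snippet below uses AWS's boto3 library to start a single virtual machine. The region, AMI ID, and instance type are placeholders, and valid AWS credentials are assumed:

```python
import boto3

# Placeholder values; a real pipeline or configuration tool would supply them.
REGION = "eu-west-1"
AMI_ID = "ami-0123456789abcdef0"
INSTANCE_TYPE = "t3.micro"


def launch_test_machine() -> str:
    """Launch one EC2 instance that the pipeline can deploy and test against."""
    ec2 = boto3.resource("ec2", region_name=REGION)
    (instance,) = ec2.create_instances(
        ImageId=AMI_ID,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[
            {
                "ResourceType": "instance",
                "Tags": [{"Key": "purpose", "Value": "ci-test"}],
            }
        ],
    )
    instance.wait_until_running()
    return instance.id


if __name__ == "__main__":
    print(f"Started test machine: {launch_test_machine()}")
```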
Configuration management defines the configuration of a server or an environment. Popular configuration management tools are Puppet and Chef.
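Puppet and Chef use their own declarative languages, but the core idea (describe the desired state and apply changes only when the current state differs) can be sketched in a few lines of Python; the file path and content below are placeholders:

```python
from pathlib import Path

# Desired state: a config file with specific content (placeholder values).
CONFIG_PATH = Path("/etc/myapp/app.conf")
DESIRED_CONTENT = "max_connections=100\nlog_level=info\n"


def ensure_config(path: Path, content: str) -> bool:
    """Bring the file to the desired state; return True if a change was made."""
    if path.exists() and path.read_text() == content:
        return False  # Already in the desired state: do nothing (idempotent).
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content)
    return True


if __name__ == "__main__":
    changed = ensure_config(CONFIG_PATH, DESIRED_CONTENT)
    print("changed" if changed else "already up to date")
```

Running the script twice changes nothing the second time, which is exactly the idempotent behavior configuration management tools rely on.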
DevOps focuses on test automation within the build pipeline, to ensure that once you have a deployable build, you are also confident it is ready to be deployed. Popular test automation tools are Selenium and Watir.
A pipeline is the orchestration of the production steps, similar to a manufacturing assembly line.
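A toy sketch of that orchestration idea: each stage runs in order and the pipeline stops at the first failure. The stage functions are stand-ins for real build, test, and deploy steps:

```python
def build() -> bool:
    print("Compiling sources...")        # stand-in for Maven/Gradle
    return True


def test() -> bool:
    print("Running automated tests...")  # stand-in for JUnit/Selenium runs
    return True


def deploy() -> bool:
    print("Deploying to production...")  # stand-in for the deployment step
    return True


def run_pipeline() -> bool:
    """Run the stages in order, like stations on an assembly line."""
    stages = [("build", build), ("test", test), ("deploy", deploy)]
    for name, stage in stages:
        if not stage():
            print(f"Pipeline stopped: stage '{name}' failed")
            return False
    print("Pipeline finished successfully")
    return True


if __name__ == "__main__":
    run_pipeline()
```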
Package diagram is a view that displays coupled classes and their encapsulation into a group (similar to a namespace).
Activity diagram is a view that represents the flow from one activity to another, describing an operation within a system.
Reserve is a function that pre-allocates memory of a specific size, to accommodate new data.
Privilege escalation is the act of exploiting a bug in order to get administrator access.
Composition refers to a relationship between two classes (composite and component) in which one of them (the composite) contains an instance of the other (the component), creating a 'has a' relationship. The composite object has ownership over the component, meaning that the lifetime of the component object starts and ends at the same time as that of the composite object. In simple terms: a class contains an instance of another class.
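A minimal sketch of composition, using made-up Car and Engine classes: the Car creates and owns its Engine, so the Engine's lifetime is tied to that of the Car.

```python
class Engine:
    """Component: only exists as part of a Car."""

    def __init__(self, horsepower: int) -> None:
        self.horsepower = horsepower

    def start(self) -> str:
        return f"Engine with {self.horsepower} HP started"


class Car:
    """Composite: creates and owns its Engine ('has a' relationship)."""

    def __init__(self, horsepower: int) -> None:
        # The Engine is created by the Car and is not shared with anyone else,
        # so it lives and dies together with the Car that owns it.
        self._engine = Engine(horsepower)

    def start(self) -> str:
        return self._engine.start()


if __name__ == "__main__":
    car = Car(150)
    print(car.start())  # Engine with 150 HP started
```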