
Over the years, developers have picked up habits and tricks that help them build an environment where they can work more quickly and efficiently. Making mistakes and fixing errors slows down development and hinders progress. To combat this, developers should learn from the successes and mistakes of those who came before them. From your first code success to your first cloud-native deployment, here are some of the top tips and tricks that have helped me, and other developers, along the way.
Learn from others
Solving problems can be time-consuming and stressful when a release date is looming. Within a development team, you’ll quickly discover the diversity of other developers’ experiences by how they approach the problem at hand. By observing their various approaches, participating in “pair debugging” sessions, and asking your fellow developers to share their wisdom with you, you’ll learn far more than any onboarding can teach. Incorporate these practices into your day-to-day workflow, invite your colleagues to regular feedback chats, and share your own vision and roadmap ideas so you don’t reinvent the wheel.
Manual work is a bug: Automate everything
When you’ve spent time elbow-deep in manual debugging, you’ll probably wonder why the debug trace flag isn’t automated on every Git commit. Not long ago, such a request required a developer to open a change request ticket with their infrastructure team just to find out whether that policy was even available on the software configuration management (SCM) server. Today, with the rise of fully integrated, all-in-one platforms that can run a Git server alongside other DevOps workflows, establishing an automation policy has become far easier. Responsibility for it has shifted from pure infrastructure teams to developers and operators working closely together.
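To make that concrete, here is a minimal sketch of the idea as a Git pre-commit hook written in Python. The config file name and the debug/trace setting are illustrative assumptions; any script placed at .git/hooks/pre-commit and made executable runs before each commit and can abort it with a non-zero exit code.

```python
#!/usr/bin/env python3
"""Hypothetical Git pre-commit hook: keep debug tracing enabled on every commit.

The config file name and the [debug] trace flag are illustrative only.
"""
import configparser
import sys

CONFIG_FILE = "app.ini"           # assumed project config file
SECTION, FLAG = "debug", "trace"  # assumed setting controlling trace output

def main() -> int:
    config = configparser.ConfigParser()
    if not config.read(CONFIG_FILE):
        print(f"pre-commit: {CONFIG_FILE} not found, skipping trace check")
        return 0
    if config.getboolean(SECTION, FLAG, fallback=False):
        return 0
    print(f"pre-commit: {SECTION}.{FLAG} is disabled in {CONFIG_FILE}; "
          "enable it so every commit produces a debug trace.")
    return 1  # a non-zero exit code aborts the commit

if __name__ == "__main__":
    sys.exit(main())
```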
Git hooks have turned into merge request updates and integrated views of current builds. Continuous integration (CI) lets developers run unit tests automatically, ensuring every commit stays fully tested. CI also provides reporting and trending capabilities alongside project management burndown charts, which enables product and release managers to estimate and optimize their release-to-production workflows.
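A test step that CI runs on every commit can be as small as the following sketch. It assumes pytest is installed, and the JUnit-style report path is arbitrary; your CI server would invoke this script and collect the XML for its reporting and trend charts.

```python
"""Minimal sketch of a test step a CI job might run on every commit."""
import subprocess
import sys

def run_tests(report_path: str = "test-report.xml") -> int:
    # --junitxml produces a machine-readable report most CI servers can parse.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", f"--junitxml={report_path}"],
    )
    return result.returncode  # a non-zero exit fails the pipeline and blocks the merge

if __name__ == "__main__":
    sys.exit(run_tests())
```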
Continuous Deployment and Delivery
Starting out, you may hear a common saying: “Do not deploy to production automatically. Especially not on a Friday.” In my opinion, this hinders developer growth and adaptation. Certainly, there’s a deep fear of being the one to break production, or the one who gets paged at 3 o’clock in the morning to fix it. But these moments offer an opportunity for developers to learn more about their software and improve the overall quality of debugging and maintenance (they may still do everything they can to avoid being paged at night; after all, we’re only human).
Another tactic for learning how to handle a broken production environment is to run a chaos engineering simulation, forcefully breaking your own application and deployments. This lets everyone in your DevOps workflow adapt, learn, and stop repeating the same patterns of errors. The outcome is the confidence to deploy to production no matter the time or date.
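A minimal chaos experiment might look like the sketch below, which deletes one random pod so the team can practice detection and recovery. It assumes kubectl is configured against a disposable cluster and that the staging namespace is yours to break; never point it at production until the team is confident.

```python
"""Minimal chaos-experiment sketch: delete one random pod in a test namespace."""
import random
import subprocess
from typing import Optional

NAMESPACE = "staging"  # assumed disposable environment, never production

def pick_random_pod(namespace: str) -> Optional[str]:
    # Returns names like "pod/my-service-abc123", or None if the namespace is empty.
    names = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "name"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return random.choice(names) if names else None

def main() -> None:
    pod = pick_random_pod(NAMESPACE)
    if pod is None:
        print("no pods found, nothing to break")
        return
    print(f"chaos: deleting {pod} in {NAMESPACE}")
    subprocess.run(["kubectl", "delete", pod, "-n", NAMESPACE], check=True)

if __name__ == "__main__":
    main()
```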
Additionally, this workflow can significantly shorten time-to-market, and new features can be delivered in shorter iterations. Feature flag strategies let you direct new functionality at targeted feedback groups, giving you an advantage over competitors: you can announce new features that are immediately ready to use.
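Feature flags themselves can start very simply. The sketch below uses an in-memory dictionary as the flag store with invented flag and group names; a real setup would read from a flag service or configuration system, but the targeting logic is the same.

```python
"""Minimal feature-flag sketch: enable a new feature for a feedback group only."""

# Hypothetical flag store; in practice this comes from a flag service or config.
FEATURE_FLAGS = {
    "new_checkout": {"enabled": True, "allowed_groups": {"beta-testers"}},
}

def is_enabled(flag: str, user_groups: set) -> bool:
    config = FEATURE_FLAGS.get(flag)
    if not config or not config["enabled"]:
        return False
    # Roll the feature out only to the directed feedback group.
    return bool(config["allowed_groups"] & user_groups)

if __name__ == "__main__":
    print(is_enabled("new_checkout", {"beta-testers"}))  # True
    print(is_enabled("new_checkout", {"customers"}))     # False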
Monitoring and Application Performance
After your 3 a.m. wake-up call and chaos simulation, and during your post-mortem, you’ll want to look at how each Git commit affected performance, and by how much. You can do this through application performance monitoring, which helps developers understand whether their software meets performance standards by closely monitoring IT resources, and which can run during review without ever touching production environments.
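As a rough illustration, performance data can be tagged with the commit that produced it so results stay comparable across commits. In the sketch below the metric sink is just a print statement and the GIT_COMMIT environment variable is assumed to be injected by the CI pipeline; a real setup would export to an APM backend.

```python
"""Sketch of lightweight performance instrumentation tagged with the commit SHA."""
import functools
import os
import time

GIT_COMMIT = os.environ.get("GIT_COMMIT", "unknown")  # assumed to be set by CI

def timed(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            duration_ms = (time.perf_counter() - start) * 1000
            # A real system would export this to an APM or metrics backend.
            print(f"metric name={func.__name__} "
                  f"duration_ms={duration_ms:.1f} commit={GIT_COMMIT}")
    return wrapper

@timed
def handle_request():
    time.sleep(0.05)  # stand-in for real work

if __name__ == "__main__":
    handle_request()
```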
In addition to metrics, service level agreements (SLAs), and service level objectives (SLOs), correlating log events with traces from inside the application can provide further insight. This kind of observability challenges everyone on the development team to learn about the “unknown unknowns”: for example, events that influenced each other may have existed all along, but you didn’t yet understand the relationship between them.
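One simple way to make that correlation possible is to attach a trace ID to every log event. In the sketch below the ID is generated locally for illustration; in practice it would be propagated from the incoming request or your tracing system, so log lines can be matched against spans across services.

```python
"""Sketch of correlating log events with a shared trace ID."""
import logging
import uuid

logging.basicConfig(
    format="%(asctime)s %(levelname)s trace_id=%(trace_id)s %(message)s",
    level=logging.INFO,
)
logger = logging.getLogger("orders")

def process_order(order_id: str) -> None:
    trace_id = str(uuid.uuid4())      # in practice: propagated, not generated here
    extra = {"trace_id": trace_id}
    logger.info("order received id=%s", order_id, extra=extra)
    logger.info("payment charged id=%s", order_id, extra=extra)
    # Every event carries the same trace_id, so related events can be
    # correlated across services and matched to traces in an observability tool.

if __name__ == "__main__":
    process_order("A-1001")
```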
When your application is one microservice among many in a Kubernetes cluster, a sidecar monitoring container and additional observability integrations can help correlate data and prevent problems.
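As a rough sketch of how an application exposes data for such a sidecar or agent to scrape, the service below serves a plain-text /metrics endpoint with a single counter. The metric name and the Prometheus-like text format are illustrative assumptions, not a client library.

```python
"""Sketch of an app exposing a /metrics endpoint a monitoring sidecar could scrape."""
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # illustrative counter; a real app would use a metrics library

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUEST_COUNT
        if self.path == "/metrics":
            # Plain-text exposition of the counter for the scraper.
            body = f"app_requests_total {REQUEST_COUNT}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            REQUEST_COUNT += 1
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MetricsHandler).serve_forever()
```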
These four tips will help any developer make the journey from first code to cloud-native deployment. Once they’re mastered, we can look ahead to the future of the industry, including the growing complexity of DevSecOps, the importance of the CI/CD pipeline, and the role of machine learning in code patterns. For a successful cloud-native strategy in 2021, lean into application management, continuous delivery, and automation before expanding into more complex emerging technologies.