Software development used to be chaotic and messy. We have gone from websites that looked like the one on the left to the one on the right.
However, it’s not just the look & feel that’s changed. We’re now in a reactive, adaptable web universe where sites can not only respond to the screen size you’re viewing them on, but also deliver consistent performance regardless of concurrent users and serve multiple geographies.
This phenomenal shift in the software industry has been driven in part by the quality engineering standards that clever software engineers have established, standards we must strive to follow. Here are five things to keep in mind when thinking about increasing product quality.
Requirement gathering and analysis is generally where any software starts its journey, and this is where we should also start thinking about scalability.
Scalability - in simple terms - is a software’s ability to perform at the same latency under varying (mostly increasing) loads. Any system can scale either vertically or horizontally.
Vertical scaling, also called scaling up, means augmenting the capabilities of the hardware that’s running your application. This can include, but is not limited to, increasing the storage capacity, RAM, and processor speed/count. It is usually the first step toward making software perform better under increasing workloads.
Horizontal scaling, or scaling out, is done when we have dynamic applications that handle varying traffic. In this sort of scaling, we add more hardware to the existing resource pool instead of making the same hardware more powerful. For example, instead of increasing the storage from 16 GB to 32 GB, we can have two servers with 16 GB of storage each, sitting behind a load balancer that decides where to send the traffic.
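The load-balancer idea above can be sketched in a few lines. This is a minimal illustration, not a production balancer: the server names are made up, and real load balancers typically also track health and capacity rather than blindly rotating.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Dispatch each incoming request to the next server in the pool."""

    def __init__(self, servers):
        self._pool = cycle(servers)  # endless round-robin iterator

    def route(self, request):
        server = next(self._pool)
        return server, request

# Two hypothetical 16 GB servers behind the balancer.
lb = RoundRobinBalancer(["server-a", "server-b"])
targets = [lb.route(f"req-{i}")[0] for i in range(4)]
# Requests alternate: server-a, server-b, server-a, server-b
```

Round-robin is the simplest policy; least-connections or latency-aware routing are common refinements once servers stop being identical.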
Cloud providers these days offer solutions that can dynamically scale based on the real-time requirements of the applications. These solutions are enjoying a surge of popularity because of their highly elastic nature and out-of-the-box configurability.
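The elastic scaling that cloud providers offer usually follows a target-tracking rule: pick the instance count that would bring a metric back to a target. Below is a hedged sketch of that arithmetic; the 60% CPU target and the cap of 10 instances are illustrative numbers, not any provider's defaults.

```python
import math

def desired_instances(current, cpu_pct, target_pct=60, max_instances=10):
    """Target-tracking scale-out sketch: if average CPU is above the target,
    grow the fleet proportionally; if below, shrink it. Always keep at least
    one instance and never exceed the (hypothetical) maximum."""
    needed = math.ceil(current * cpu_pct / target_pct)
    return max(1, min(needed, max_instances))

# Two busy instances at 90% CPU -> scale out to three.
# Four idle instances at 30% CPU -> scale in to two.
```

Real autoscalers add cooldown periods and smoothing so the fleet doesn't oscillate, but the core decision is this proportion.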
Overall, when starting application development, it is essential to decide what kind of resource loads the app will be handling, and to choose the scalability approach accordingly.
Statelessness is defined as a program’s ability to not couple its computation with any intrinsic state. Simply put, a stateless application keeps state separate from the computation logic.
The computation only cares about the input it receives (which can be a state) and the output it needs to produce (again, a state).
A brilliant example of statelessness is the HTTP protocol: two requests over the same HTTP connection have no link to each other, and all state is managed via browser cookies, which are completely separate from the requests.
Each request can check for the cookies, and perform operations accordingly, but is not responsible for managing the cookies.
When state is not shared, and is instead made explicit through the communication between processes, the added advantage is that the state is visible whenever necessary (for debugging, perhaps).
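The cookie example above can be sketched as a stateless handler: the function keeps no state of its own, and the "browser" carries the visit counter between calls. The request/response shapes here are simplified stand-ins, not a real HTTP API.

```python
def handle_request(request):
    """Stateless: the output depends only on the input. The visit counter
    travels in the request's cookies, never in the handler itself."""
    visits = int(request.get("cookies", {}).get("visits", 0)) + 1
    return {
        "body": f"Visit number {visits}",
        "set_cookies": {"visits": str(visits)},
    }

# The caller (acting as the browser) carries the state between requests.
first = handle_request({"cookies": {}})
second = handle_request({"cookies": first["set_cookies"]})
# second["body"] is "Visit number 2"
```

Because the handler holds nothing between calls, any copy of it behind a load balancer can serve any request, which is exactly what makes stateless services easy to scale horizontally.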
(Nearly) Atomic Responsibilities
Expanding on the above: decoupling not only the computation from the state, but every single responsibility from the others, and encapsulating each within an atomic unit, makes for much more easily maintainable software.
For example, a service that’s responsible for handling user interaction should not try to also process the information received and handle database interactions. It should only care about handling user interaction - and handling that in the best possible way.
This principle is better known as the Single Responsibility Principle, and it is the first of the five SOLID principles, which define the barebones guidelines for good software design.
The reasoning behind this principle is pretty straightforward: if one service cares only about one job, then when it (inevitably) breaks, only one job is hampered. Also, since we know which job is broken - and which service is responsible for it - we can debug and fix it without much guesswork.
One must take care, though, not to get too carried away with this principle - and to document enough - to avoid a microservice explosion.
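The split described above can be sketched in miniature. The class names below are invented for illustration: one unit only validates input, one only computes the result, and one only persists records (an in-memory list standing in for a database).

```python
class UserInterface:
    """Only parses and validates user input."""
    def parse(self, raw):
        name = raw.strip()
        if not name:
            raise ValueError("empty input")
        return name

class Greeter:
    """Only computes the business result."""
    def greet(self, name):
        return f"Hello, {name}!"

class AuditLog:
    """Only persists records; here, a list stands in for a real database."""
    def __init__(self):
        self.entries = []
    def record(self, message):
        self.entries.append(message)

# Each unit can be tested, debugged, and replaced independently.
ui, greeter, log = UserInterface(), Greeter(), AuditLog()
message = greeter.greet(ui.parse("  Ada "))
log.record(message)
```

If the database layer breaks, greetings still compute; if validation rules change, nothing else needs to be touched - which is the debugging payoff the principle promises.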
When developing solutions, we often get into a zone of quick development, where little attention is paid to design in favor of getting the job done. This leads to solutions that are unnecessarily complicated - and these solutions lead to multiple problems in the future.
It is better to think in clear, simple terms to find concise, standardized answers. Solutions that are not simple are not solutions.
This principle is equally relevant in real life situations as it is in UI design and algorithms. When a solution is simple, it is easy to understand, to replicate, to debug and to maintain.
The Single Responsibility Principle is, in effect, a very useful extension of the KISS principle.
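A small, contrived illustration of the same point in code: both functions below answer the same question, but the simple one states its intent directly and leaves far less room for bugs.

```python
# Complicated: hand-rolled nested loops with index bookkeeping.
def has_duplicates_complex(items):
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False

# Simple: a set collapses duplicates, so compare sizes.
def has_duplicates(items):
    return len(set(items)) != len(items)
```

The simple version is easier to understand, replicate, debug, and maintain - and, as a bonus, it is also faster.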
We must acknowledge the fact that no software is infallible. Log4j was an immensely popular library, so ubiquitous that by the time the infamous Log4j vulnerability came to light, virtually every Java-based or Java-dependent application was exposed to the threat.
Apart from the vulnerability, though, Log4j was an extremely convenient and useful logging library, and nearly every Java developer swore by it. The point being: since every piece of software out there is prone to failure, it is pointless to obsess over the design so much that you have no work to show for it.
Prototyping and failing fast is a concept that has been proven time and again to deliver efficient work within stipulated timelines. When one is quick to fail, one has ample time and opportunity to learn from the mistakes and broaden one's view.