Over a year, the energy consumed by an average email inbox (typically around 10,000 emails) is equivalent to driving a car 212 meters. Multiplied across global email usage, that consumption generates carbon dioxide emissions equivalent to adding 7 million cars to the roads.
The information and communication technology sector produces around 1.4% of overall global emissions, an estimated 1.6 billion tons of greenhouse gases, which makes each internet user responsible for roughly 400 kg of carbon dioxide a year.
The global digital carbon footprint could be reduced by 80% if the electricity powering it shifted from fossil fuels to renewable sources. Tech giants like Google and Meta are taking decisive steps in this direction, committing substantial financial resources toward carbon removal as part of a broader industry collaboration.
However, environmental responsibility for the resources we use can’t be delegated solely to big tech companies; it must be shared. Every company, regardless of size, should be aware of its carbon footprint and make the adjustments needed to improve its environmental sustainability.
Here are some approaches to making cloud native applications more environmentally sustainable through optimizations in Kubernetes runtimes. We will explore methods to shift resource usage toward renewable energy sources and to eliminate so-called cloud zombies.
A first step to a greener cloud native experience
One of the most challenging aspects of green software is understanding where to start. Before delving into more complex optimizations, a good starting point is to raise awareness of the hidden “zombies” within our infrastructure. By zombies, I refer to workloads that do not perform any useful task, wasting both environmental and financial resources.
Workloads used for testing and development, which are typically needed only during working hours, are a common example. A standard 40-hour workweek leaves about 128 of the week’s 168 hours unused, a ratio of roughly 3-to-1 in favor of nonworking hours, so these workloads sit idle most of the time and clearly need to be addressed.
One effective solution is kube-green, an open source tool developed by Davide Bianchi, senior technical leader at Mia-Platform. Kube-green manages Kubernetes cluster resizing to optimize IT infrastructure energy consumption, reducing CO2 emissions by around 30% on average. This tool acts as a Kubernetes controller that defines a custom resource definition (CRD) named SleepInfo, which allows you to pause and restart pods within a specific namespace. With kube-green, you can scale down the number of deployments to zero and limit cron jobs to run only during working hours.
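To make this concrete, here is a minimal SleepInfo manifest sketched from kube-green’s documented v1alpha1 API; the namespace name and schedule values are placeholders to adapt to your own environment and working hours.

```yaml
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: dev-environment   # hypothetical dev/test namespace
spec:
  weekdays: "1-5"         # Monday through Friday
  sleepAt: "20:00"        # scale deployments to zero replicas in the evening
  wakeUpAt: "08:00"       # restore the original replica counts in the morning
  timeZone: "Europe/Rome"
  suspendCronJobs: true   # also suspend cron jobs while the namespace sleeps
```

Once applied with kubectl, kube-green puts the namespace to sleep each evening and wakes it up each morning, so development workloads stop consuming resources outside working hours.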
Kube-green has been part of the scheduling and orchestration section of the Cloud Native Computing Foundation (CNCF) landscape since 2022. The number of kube-green adopters is continuously growing, with users reporting significant benefits — not only in environmental impact but also in cloud cost management, with an average savings of about 30%.
Carbon Intensity
Carbon intensity is the amount of carbon dioxide emitted per kilowatt-hour of energy consumed in the regions where our cloud service provider makes resources available to us. Being aware of it allows us to schedule our workloads so that they primarily use energy from renewable sources. The basic concept is simple: When energy production is skewed toward renewable sources, we run more workloads; when it is not, we run fewer.
To align workloads with the cloud service provider’s energy sources, we have two strategies: temporal shifting, which moves major workloads to the times of day when the grid’s carbon intensity in our cluster’s region is lowest, and spatial shifting, which uses multiregion federation mechanisms to move workloads to physical locations where the grid’s carbon intensity is lower.
The choice between the two approaches depends on the type of workload being considered. For latency-sensitive computations, such as digital payments, which can’t tolerate being served from distant regions, temporal shifting is the better fit. For computations that are not particularly time-sensitive, such as machine learning model training, the spatial approach can also be adopted. For instance, Google has implemented spatial shifting for managing multimedia files on platforms like Google Drive and YouTube. Another important consideration is the presence of policies that require the use of a specific region: In this case, temporal shifting is the only available option.
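As a minimal sketch of temporal shifting, assuming a deferrable batch job and an arbitrary example threshold, the snippet below delays the job until the grid’s carbon intensity drops below that threshold or a deadline expires; currentIntensity is a hypothetical stand-in for a real data source such as the services described next.

```go
package main

import (
	"fmt"
	"time"
)

// currentIntensity is a hypothetical stand-in for a real carbon intensity
// lookup (for example, a call to WattTime or Electricity Maps). It returns
// grams of CO2 per kWh for the cluster's region.
func currentIntensity() float64 {
	return 350.0 // placeholder value
}

// runWhenGreen polls the grid's carbon intensity and starts the job once it
// falls below the threshold. After the deadline the job runs regardless:
// temporal shifting defers work, it doesn't cancel it.
func runWhenGreen(threshold float64, deadline time.Time, job func()) {
	for time.Now().Before(deadline) {
		if intensity := currentIntensity(); intensity < threshold {
			fmt.Printf("intensity %.0f gCO2/kWh is below the threshold, running job\n", intensity)
			job()
			return
		}
		time.Sleep(15 * time.Minute) // re-check periodically
	}
	fmt.Println("deadline reached, running job anyway")
	job()
}

func main() {
	runWhenGreen(200, time.Now().Add(8*time.Hour), func() {
		fmt.Println("starting machine learning training batch...")
	})
}
```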
Regardless of the choice between temporal shifting and spatial shifting, a fundamental requirement is access to data on the carbon intensity of the grid in our regions. Two services provide such data: WattTime and Electricity Maps. The difference between them is that WattTime provides marginal carbon intensity (the emissions of the generator that would serve the next unit of demand), while Electricity Maps provides average carbon intensity (the emissions of the current generation mix as a whole). Consequently, WattTime is preferred for optimizing immediate impacts, while Electricity Maps is more suitable for long-term optimizations.
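As an illustration, the sketch below queries Electricity Maps for the latest average carbon intensity of a grid zone. The endpoint path, auth-token header, and carbonIntensity response field follow the publicly documented v3 API, but treat them as assumptions to verify; the zone code and token are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// latestIntensity fetches the most recent average carbon intensity
// (gCO2eq/kWh) for a grid zone from the Electricity Maps API.
func latestIntensity(zone, token string) (float64, error) {
	url := "https://api.electricitymap.org/v3/carbon-intensity/latest?zone=" + zone
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("auth-token", token) // API key header per the v3 docs

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var body struct {
		CarbonIntensity float64 `json:"carbonIntensity"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return 0, err
	}
	return body.CarbonIntensity, nil
}

func main() {
	intensity, err := latestIntensity("IT-NO", "YOUR_API_TOKEN") // placeholder zone and token
	if err != nil {
		panic(err)
	}
	fmt.Printf("current carbon intensity: %.0f gCO2eq/kWh\n", intensity)
}
```

A scheduler could feed this value into a gate like runWhenGreen above, or consume it through a ready-made library such as the one described below.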
The Green Web Foundation has simplified access to this type of data by providing grid-intensity-go, a Go library designed to be integrated into Kubernetes and other schedulers. This allows the use of carbon intensity values in decisions on where and when to run jobs.
Conclusion
As we continue to adopt innovative technologies, ensuring the environmental sustainability of cloud computing is essential for long-term success at the forefront of the industry. As the demand for digital services grows exponentially, so do the energy consumption and carbon footprint of the data centers that power them. Adopting green practices and prioritizing renewable energy sources in cloud infrastructure is therefore necessary to mitigate the environmental impact and pave the way for a sustainable digital future.
This approach not only addresses ecological concerns but also brings significant improvements in FinOps management. By optimizing resource allocation, we drive efficiency and reduce waste, contributing to a healthier planet while still pushing the boundaries of technological innovation.