The data-driven approach, in which decisions are based on data analysis, is becoming a priority in managing a business and its individual functions. Companies collect large amounts of information from hundreds of sources to develop their digital services and analytics systems. This goes hand in hand with the continuous growth of the IT infrastructure needed to store the data and process it quickly. With hardware shortages and rising prices, IT costs can climb significantly. Aleksandr Vinogradov, Head of the Tarantool platform, explains which technologies can help you keep the total cost of infrastructure ownership (TCO) under control.
The amount of data in the world is growing rapidly: according to an IDC forecast, the volume of data created and captured worldwide will reach 175 zettabytes by 2025, compared to only 40 zettabytes in 2020. Most of it is generated and collected by businesses.
Companies collect data about their customers, partners, employees, and the operation of their internal systems. For example, a bank customer's profile may contain not only text (full name, passport number) but also scans of personal documents. Companies with millions of customers, thousands of employees, and hundreds of partners store petabytes of data.
On top of static data, you need to keep track of transactions and other actions of each client. For instance, online stores build their recommender systems around a client’s previous purchases. Therefore, information volumes are constantly growing. Each new megabyte that takes up space on the server means growing expenses for the company.
But large businesses need more than a single server, or even a single data center, to store their data. Reliability requirements are strict, and the risk of data loss directly affects business reputation. Most often, companies keep backups of their data in multiple data centers, which drives storage and processing costs up sharply.
IT infrastructure maintenance and development costs are also constantly growing. The situation is made worse by the limited market supply of server and network equipment, higher hardware prices, and extended delivery times. The shortage in the server market stems from suspended imports of products manufactured by foreign companies such as Dell, Hewlett Packard Enterprise, IBM, and Cisco, as well as Taiwan's TSMC refusing to produce processors for Russia. Companies have to look for alternative ways to develop their infrastructure in order to keep working under the new conditions.
IT acceleration and data compression
You can slow down your IT infrastructure's physical expansion by making better use of existing capacity. One such tool is the in-memory platform: it stores and processes data in RAM rather than on the hard drive, freeing up server disk resources.
In-memory platforms provide horizontal scalability for your IT infrastructure. They can also combine different types of data from different repositories: data about clients, their actions in sales channels, and much more. As a result, digital services such as online stores or mobile applications query a single data space instead of collecting information from various databases in two stages (cache + hard drive). This significantly speeds up online services: while a traditional database can process up to 10,000 queries per second, an in-memory platform can handle up to 1 million queries in the same time.
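The difference between the two-stage lookup and a single in-memory space can be sketched as follows. This is a minimal illustration, not a real platform API: the class names, keys, and sample record are all invented for the example.

```python
# Sketch: two-stage lookup (cache + database) vs. a single in-memory store.
# All names here are illustrative, not the API of any real platform.

class TwoStageStore:
    """Traditional setup: check a cache first, fall back to the database."""
    def __init__(self):
        self.cache = {}                                   # fast, partial copy
        self.database = {"user:1": {"name": "Alice", "orders": 12}}  # on disk

    def get(self, key):
        if key in self.cache:                 # stage 1: cache hit
            return self.cache[key]
        value = self.database.get(key)        # stage 2: slow disk lookup
        if value is not None:
            self.cache[key] = value           # populate cache for next time
        return value

class InMemoryStore:
    """In-memory platform: one space holds the authoritative data in RAM."""
    def __init__(self):
        self.space = {"user:1": {"name": "Alice", "orders": 12}}

    def get(self, key):
        return self.space.get(key)            # single lookup, no cache/disk split

print(TwoStageStore().get("user:1") == InMemoryStore().get("user:1"))  # True
```

Both stores return the same record; the point is that the in-memory version serves it in one step, which is where the speedup comes from.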
Another way to make better use of storage devices is data compression. Data can be compressed both at the storage-system level and in software. For instance, the compression feature of the Tarantool platform can save up to 15% of server space.
But there is one thing to keep in mind: compressing and decompressing data on access consumes processor resources, which can affect performance. That is why it is best to compress data that is rarely accessed.
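The CPU-for-storage trade-off can be demonstrated with the standard `zlib` module as a stand-in for a platform's built-in compression (the actual savings, like the ~15% figure above, depend heavily on the data):

```python
import zlib

# Sketch: trading CPU time for storage space with compression.
# The sample record is deliberately repetitive, so it compresses well;
# real customer data will yield a different ratio.
record = ("customer profile with repetitive fields; " * 50).encode("utf-8")

compressed = zlib.compress(record, level=6)
saved = 1 - len(compressed) / len(record)
print(f"original: {len(record)} B, compressed: {len(compressed)} B, "
      f"saved: {saved:.0%}")

# Reading the data back requires decompression, i.e. extra CPU work --
# which is why rarely accessed ("cold") data is the best candidate.
assert zlib.decompress(compressed) == record
```

Every read of compressed data pays the decompression cost, so the hotter the data, the less attractive compression becomes.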
Hybrid infrastructure
A recent trend, hybrid IT is a model that combines local resources with public clouds. Today, the creation of such infrastructures goes hand in hand with companies migrating from global clouds to Russian providers' platforms. Experts at VK Cloud Solutions estimate that the number of Russian public cloud users may grow by as much as 70-80% by the beginning of summer.
Large companies with higher security requirements must keep critical infrastructure segments on their own servers, so a complete transition to a public cloud is impossible for them. But there are several scenarios in which a hybrid infrastructure can reduce dependency on hardware.
— The hybrid IT consumption model lets you turn part of your capital expenditure into operating expenditure. Instead of making one-time investments in new servers, you pay a monthly cloud subscription fee sized to what the company actually needs at any given time.
— A hybrid infrastructure can be part of a business continuity strategy. For example, non-critical data can be moved to a public cloud. If serious problems arise, you can release the cloud resources while keeping all vital data on local servers.
— It’s cheaper to test hypotheses in the cloud when developing new IT solutions. It isn’t practical to purchase equipment for every idea while it’s still unclear whether it will work or whether the project will be shut down. A cloud works best for experiments: computing capacity is rented only for the duration of development and testing. You can later move the IT product to local infrastructure, or scale it up and keep running it in the cloud.
— A public cloud can be used as an environment for developing solutions. On top of computing resources (IaaS, infrastructure as a service), the cloud offers tools for developing IT products (PaaS, platform as a service). They come configured and ready to use, and the platforms are administered by the cloud provider.
— Moving digital services with volatile load to the cloud has been a trend of the last couple of years. Companies host online stores, mobile apps, and video services in the cloud to scale rapidly during peak demand (Black Friday, the holiday season, etc.). A public cloud also supports autoscaling: resources are added automatically when predefined service load thresholds are reached, and released automatically afterwards.
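The threshold-based autoscaling described in the last point can be sketched as a simple policy function. The thresholds, step size, and fleet limits below are illustrative assumptions; real providers expose them as configurable policy settings.

```python
# Sketch of threshold-based autoscaling logic (illustrative parameters).

def scale(instances, cpu_load, min_n=2, max_n=10,
          scale_up_at=0.75, scale_down_at=0.25):
    """Return the new instance count for the observed average CPU load."""
    if cpu_load > scale_up_at and instances < max_n:
        return instances + 1          # tap extra resources on peak load
    if cpu_load < scale_down_at and instances > min_n:
        return instances - 1          # release them when demand drops
    return instances                  # within the target band: no change

print(scale(4, 0.90))  # 5 -- traffic spike, add an instance
print(scale(4, 0.10))  # 3 -- quiet period, shrink the fleet
print(scale(4, 0.50))  # 4 -- steady load, keep as-is
```

The `min_n` floor keeps the service available at all times, while the `max_n` ceiling caps the monthly bill even during extreme peaks.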
Currently, maintaining and developing the data storage and processing infrastructure is a key task for IT departments. It is further complicated by the speed at which data accumulates and by equipment delivery disruptions. Companies will have to change how they build IT infrastructures to meet business needs while optimizing IT costs.