Large data volumes make it hard for organisations to retrieve the specific information they need
Knowing the value of data being gathered before storing it can have a significant impact
DATA growth has been driving change around storage and data lifecycle management within the industry for years.
Analysts estimate that the amount of data produced globally more than doubles every two years – 1.8 trillion gigabytes in 500 quadrillion files, according to the 2011 IDC Digital Universe Study.
In Asia Pacific, 72% of organisations rate the exponential growth and increasing complexity of data as one of their top data management challenges, according to the 2014 CommVault/IDC Smart Data Management in the Third Platform Era report.
Such figures have driven major industry changes over the past decade – larger-capacity storage platforms and the rise of big data, for example – all born of the need to store and manage these growing pools of data.
At the same time, the big volumes of data amassed by organisations make it increasingly difficult to retrieve the specific information they might need.
Knowing the value of data that is being gathered before storing it can have a significant impact on storage utilisation, recovery objectives, and their underlying costs for organisations.
Data that matters
As the amount of enterprise data continues to grow exponentially, so does the storage capacity required to manage it.
This is largely because most companies do not have a data management plan: they tend to capture and store all the data they own, even when it is redundant.
Traditionally, high storage costs forced companies to be more careful about how and where they stored their data. Today, with capacity more reasonably priced, the tendency to store everything without differentiating what is really important creates challenges around where to store different kinds of data and how to retrieve that data when needed.
The ability to search electronically stored data and return accurate results instantly is critical for organisations, both for daily restore operations and for urgent situations, such as when specific data is needed to satisfy an audit or respond to litigation.
In addition to internal organisational compliance, enterprises must also contend with continuously evolving external regulatory requirements.
Regardless of industry or sector, any corporate information generated within an organisation may be subject to regulatory or litigation requests. The risks of non-compliance are significant, and can lead to legal, financial and reputational damage.
As a result, organisations have no choice but to develop solid plans for the reliable retention of enterprise-controlled information.
Managing data efficiently
Developing and implementing the right data retention policy is a necessity for both internal data governance and legal compliance.
Yet not all data can be treated the same way. Some data needs to be retained for many years, some is required for just a few days, and some may not need to be stored at all.
Therefore, when setting up processes, it is important to identify the organisation's most valuable data and prioritise storage management resources appropriately.
Data classification is the first critical step in placing the right data into the appropriate type of storage tiers.
With this in mind, organisations can effectively determine what route to take, whether building on-premise solutions, moving to the cloud, or opting for a combination of the two.
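As a minimal sketch of the classification step described above, the logic can be expressed as a simple policy lookup. The categories, retention periods and tier names here are hypothetical examples for illustration, not drawn from the article or any specific product:

```python
# Hypothetical retention policy: each data category maps to a retention
# period and a target storage tier. Values are illustrative only.
RETENTION_POLICY = {
    "financial_record": {"retain_days": 7 * 365, "tier": "archive"},
    "project_file":     {"retain_days": 2 * 365, "tier": "standard"},
    "temp_export":      {"retain_days": 30,      "tier": "standard"},
    "cache":            {"retain_days": 0,       "tier": None},  # never stored
}

def classify(category: str, age_days: int) -> str:
    """Return the storage decision for an item of a given category and age."""
    policy = RETENTION_POLICY.get(category)
    if policy is None or policy["tier"] is None:
        return "do_not_store"   # unknown or explicitly non-retained data
    if age_days > policy["retain_days"]:
        return "expire"         # retention period elapsed
    return policy["tier"]       # store on the designated tier
```

Even a coarse scheme like this makes the later decisions (on-premise, cloud, or hybrid) tractable, because each category now has an explicit retention period and storage destination.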
Organisations must be able to make information identifiable and available at any time to satisfy internal and external requests, or face financial and legal consequences.
It is more important now than ever for companies to have a well thought-out data lifecycle management strategy. It is critical for organisations to understand the business value of data before defining the storage strategy.
Successful organisations partner with their IT departments to develop a tiered storage approach. This enables IT to break down the organisation’s storage requirements into digestible and manageable pieces.
The objective is to match the data's relative value to a particular tier, placing more recent or valuable data on faster-performing storage, while relegating older, less critical or infrequently accessed data to less expensive storage.
There is an opportunity for organisations to save on storage by reserving the fastest storage for data that is actively used, and using less expensive platforms, such as the cloud, for archival or backup data.
The result should be a mixture of good corporate policy, properly sized and tiered physical and cloud storage, and a comprehensive data management system to tie all of it together in an automated and auditable solution.
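The tier-matching rule described above, placing data on storage according to how recently it is used, can be sketched with a simple age-based mapping. The tier names and age thresholds here are assumptions for illustration; real policies would be tuned to the organisation's recovery objectives:

```python
import datetime

# Illustrative tier thresholds (days since last access); not from the article.
TIERS = [
    (30, "fast_ssd"),        # hot data: accessed within the last month
    (365, "capacity_disk"),  # warm data: accessed within the last year
]
COLD_TIER = "cloud_archive"  # everything older goes to cheap archival storage

def assign_tier(last_access: datetime.date, today: datetime.date) -> str:
    """Map a file's last-access date to a storage tier by age."""
    age = (today - last_access).days
    for max_age, tier in TIERS:
        if age <= max_age:
            return tier
    return COLD_TIER
```

Automating a rule like this is what lets the data management system move data between tiers continuously and auditably, rather than relying on manual housekeeping.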
Mark Bentkower is director of Enterprise Solutions Asia Pacific, CommVault Systems.