Oracle says 2015 is the Year of Big Data
Oracle has revealed seven Big Data predictions for the year 2015.
"We believe 2015 will be the year of Big Data. Data is the new form of capital. Today, we have an index that talks about the value of a brand. Going forward, we will see an index that talks about the value of data. Businesses will start talking about data capital because having the right information about people, places and things will be critical for an enterprise to build the competitive edge and succeed,” said Sundar Ram, Vice-President, Technology Sales Consulting Oracle Corporation, Asia-Pacific.
Corporate boardrooms will talk about data capital, not big data. Data is as necessary for creating new products, services and ways of working as financial capital. For CEOs, this means securing access to, and increasing the use of, data capital by digitizing and datafying key activities with customers, suppliers and partners before rivals do. For CIOs, this means providing data liquidity: the ability to get the data the firm wants into the shape it needs with minimal time, cost and risk.
Big data management will grow up. Hadoop and NoSQL will graduate from mostly experimental pilots to standard components of enterprise data management, taking their place alongside relational databases. Over the course of the year, early majority firms will settle on the best roles for each of these foundational components.
Companies will demand a SQL for all seasons. SQL is not just a technology standard. It is a language based on 100 years of hard thinking about how to think straight about data. Applications, analysts, and algorithms rely on it daily to run everything from fraud analyses to freight forwarding. Companies will demand that SQL works with all big data, not just data in a Hadoop, NoSQL (Oh, the irony!), or relational silo.
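To make the "SQL for all seasons" idea concrete, here is an illustrative sketch (not from the article): the appeal is that one standard SQL statement can run against whichever engine holds the data, whether a Hive table over Hadoop, a SQL layer on a NoSQL store, or a relational database. Python's built-in sqlite3 stands in for that engine below, and the table and column names are hypothetical.

```python
import sqlite3

# sqlite3 stands in for whichever SQL-speaking engine hosts the data;
# the query itself is plain ANSI SQL and would be unchanged on other backends.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER, region TEXT, weight_kg REAL)")
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [(1, "APAC", 120.0), (2, "EMEA", 80.5), (3, "APAC", 42.25)],
)

# One query, many possible backends: total freight weight per region.
rows = conn.execute(
    "SELECT region, SUM(weight_kg) FROM shipments GROUP BY REGION ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 162.25), ('EMEA', 80.5)]
```

The point of the sketch is not sqlite3 itself but that the application code and the query would stay the same if the `shipments` data moved from a relational silo into Hadoop or a NoSQL store fronted by a SQL engine.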
Just-in-time transformation will transform ETL. New in-memory streaming technologies change the rate at which we can act on data, prompting a re-examination of extract, transform, and load (ETL) activities. Data scientists will increasingly opt for real-time data replication tools over the batch-oriented ones that have been the norm for getting data into Hadoop. They'll also take advantage of distributed in-memory processing to make data transformation fast enough to support interactive exploration, creating new data combinations on the fly.
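The shift described above, from transforming data in nightly batches to transforming each record as it arrives, can be sketched with a simple generator. This is a hedged illustration only; the record fields (`customer`, `amount`, `fx_rate`) are hypothetical, and real deployments would use a streaming or replication platform rather than a Python loop.

```python
from typing import Iterable, Iterator

def stream_transform(records: Iterable[dict]) -> Iterator[dict]:
    """Transform each record as it arrives ("just in time"), rather than
    accumulating a batch and transforming it later. Fields are hypothetical."""
    for rec in records:
        yield {
            "customer": rec["customer"].strip().upper(),
            # Normalize to USD on the fly; default rate 1.0 if none supplied.
            "amount_usd": round(rec["amount"] * rec.get("fx_rate", 1.0), 2),
        }

incoming = [
    {"customer": " acme ", "amount": 100.0, "fx_rate": 1.1},
    {"customer": "globex", "amount": 250.0},
]
for row in stream_transform(incoming):
    print(row)
```

Because the generator yields each cleaned record immediately, downstream exploration can start as soon as the first record lands, which is the property that makes interactive, on-the-fly data combination possible.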
Self-service discovery and visualization tools will come to big data. New data discovery and visualization tools will help people with expertise in the business, but not in technology, use big data in daily decisions. Much of this data will come from outside the firm and, therefore, beyond the careful curation of enterprise data policies. To simplify this complexity, these new technologies will combine consumer-grade user experience with sophisticated algorithmic classification, analysis and enrichment under the hood.
Security and governance will increase big data innovation. Many large firms have found their big data pilots shut down by compliance officers concerned about legal or regulatory violations. This is particularly an issue when creating new data combinations that include customer data. In a twist, firms will find big data experimentation easier to open up when the data involved is more locked down. This means extending modern security practices such as data masking and redaction to the full big data environment, in addition to the must-haves of access, authorization and auditing.
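A minimal sketch of the data-masking idea, assuming a hypothetical customer record: sensitive fields are redacted before the data is released into a big data sandbox, so experimentation can proceed without exposing the underlying values. Real platforms apply masking at the engine level, but the principle is the same.

```python
def mask_record(record: dict, sensitive: set) -> dict:
    """Return a copy of the record with sensitive string fields masked.
    Field names and the keep-last-four policy are illustrative choices."""
    masked = {}
    for key, value in record.items():
        if key in sensitive and isinstance(value, str):
            # Redact everything except the last four characters.
            masked[key] = "*" * max(len(value) - 4, 0) + value[-4:]
        else:
            masked[key] = value
    return masked

customer = {"name": "Jane Doe", "card_number": "4111111111111111", "region": "APAC"}
print(mask_record(customer, {"card_number"}))
# {'name': 'Jane Doe', 'card_number': '************1111', 'region': 'APAC'}
```

The analyst still sees enough structure to join, group, and count records, while the compliance-sensitive value never leaves the governed environment, which is the "more locked down, easier to open up" trade the article describes.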
Production workloads will blend cloud and on-premises capabilities. Once companies see enterprise security and governance extended to high-performance cloud environments, they'll start to shift workloads around, as needed.