In this special guest feature, Eldad Chai, CEO & Co-Founder, Satori, discusses how companies can better manage data access by adopting a need-to-know approach that isn’t bogged down by manual processes, custom scripts and disparate tooling. A single integrated approach – known as DataSecOps – will get them there without friction. Satori is the developer of the first DataSecOps platform — a universal data access platform for cloud-based data stores and infrastructure, with multiple out-of-the-box integrations with the industry’s leading data stores, such as Snowflake, Amazon Redshift, Amazon Athena, Amazon Aurora and Azure SQL. Prior to founding Satori, Eldad was the Senior Vice President of Product Management and a member of the senior executive team at Imperva.
Despite the prevalence of data breaches and mishandled information, the knee-jerk reaction to tightly lock up all of the data is unrealistic for today’s business needs. Data isn’t meant to be put away. It needs to be accessible to all stakeholders – data scientists, analysts, engineers, IT and business leaders – to better understand customers and ultimately optimize revenue.
Yet democratizing data comes with risk. There’s a critical need to understand where sensitive data (like PII, PHI and financial data) is stored, who is accessing it and why. Most companies manage data access in one of two ways: the default-to-know (or open-to-all) approach, or the need-to-know approach. Default-to-know allows open access to valuable data, facilitating rapid analysis and insights, but it carries immense risk and fails to comply with consumer data privacy regulations. Need-to-know is the ideal approach, but it traditionally requires manual processing of each access request, which slows down decision making and seriously inhibits innovation.
Companies can better manage data access by adopting a need-to-know approach that isn’t bogged down by manual processes, custom scripts and disparate tooling. A single integrated approach – known as DataSecOps – will get them there without friction.
A New Take on the Traditional Approaches
Providing increased access to data means greater risk. But traditional approaches suffer from bottlenecks, reliance on manual processes and lack of governance.
The default-to-know, or open-to-all, approach is typically seen in startups whose customer and employee bases have rapidly expanded. The breakneck pace of growth lends itself to openly sharing data, driving innovation as data scientists, analysts and engineers use the data for better insights. However, most organizations don’t have the resources in place to properly monitor what data is accessed, by whom and for what reason. This elevates the risk of a data breach, as sensitive PII, PHI and financial data is open to all employees regardless of their role or responsibility.
The better approach, need-to-know, grants access based on what a user needs for their job role (e.g., financial analyst) and area of responsibility (e.g., the North America or US geography). However, it traditionally involves manual data access requests via support tickets or emails to IT or database administrators, which are routed to the data owners for approval. This process can take days from initial request to granted access.
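The role-and-responsibility model described above can be sketched as a simple entitlement lookup. This is an illustrative example only, not Satori's implementation; the roles, regions and dataset names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    role: str        # e.g. "financial_analyst"
    region: str      # e.g. "NA" for North America

# Hypothetical entitlement table: (role, region) -> datasets that pair may read.
# In a need-to-know model, anything not explicitly granted is denied.
ACCESS_POLICY = {
    ("financial_analyst", "NA"): {"na_revenue", "na_invoices"},
    ("financial_analyst", "EU"): {"eu_revenue"},
    ("data_scientist", "NA"): {"na_revenue", "clickstream"},
}

def can_access(user: User, dataset: str) -> bool:
    """Grant access only when the user's role and region entitle them to the dataset."""
    return dataset in ACCESS_POLICY.get((user.role, user.region), set())

analyst = User("dana", "financial_analyst", "NA")
print(can_access(analyst, "na_revenue"))   # True
print(can_access(analyst, "eu_revenue"))   # False
```

The friction the article describes comes from maintaining a table like this by hand through tickets and emails; DataSecOps aims to automate the grant and audit steps around it.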
Default-to-know or open-to-all is risky, with the possibility of unsuspecting employees exposing sensitive data (e.g., the Robinhood breach, in which customer support personnel were tricked into sharing records for over five million consumers with hackers). And the default practice of using manual tools and processes to monitor and grant need-to-know access drastically slows down innovation and growth.
Further, neither approach supports compliance with consumer data privacy laws, such as the California Consumer Privacy Act (CCPA), Virginia Consumer Data Protection Act (CDPA), Colorado Privacy Act (CPA), the EU’s GDPR or China’s PIPL. Default-to-know does not track who accesses sensitive information, and need-to-know requires too much manual reconciliation and report creation from data engineering and compliance teams.
Today’s innovation-driven, cloud-connected climate calls for a new take on data access. The ideal configuration is need-to-know with streamlined access that supports full automation. That is why successful, high-growth companies are shifting to DataSecOps to facilitate need-to-know access.
DataSecOps: The Modern Way to Implement Need-to-Know
High-growth companies that want secure and timely data access need a better way to implement need-to-know access. A DataSecOps approach makes it possible to move from open-to-all to need-to-know without slowing down innovation. It provides a single, integrated platform to streamline and automate data access, security and compliance.
DataSecOps, or Data Security Operations, demands that organizations treat security as an inherent part of data operations, not something that’s added as an afterthought. Automatically integrating security at every phase of the data lifecycle and centralizing data governance prevents adverse effects such as project delays and compliance risks.
DataSecOps ensures complete visibility and control over data flows from the security perspective, and provides a seamless experience for gaining access to data. Instead of hindering data democratization, DataSecOps fosters it.
A successful DataSecOps approach should be a shared responsibility between all stakeholders – data engineering and platform, data science, analytics, security and compliance. The approach includes the following five capabilities:
- Understands data. It’s critical to understand all data, including the most sensitive data. Continuous data discovery and classification monitors every database query and result, classifies the data in-motion, keeps a universal audit of data access, and builds a current data inventory.
- Protects sensitive data. Dynamic masking of sensitive data at query run-time can be based on preferred security policies and identities, data locations and data types, and supports compliance for GDPR, CCPA and other regulatory requirements.
- Combines security and access. Granular security policies are integrated into access control policies, regardless of the data platform being used or how the data is consumed. This may include dynamic masking, row-level security, Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC), and can be applied across different data platforms without the need for engineering resources.
- Supports self-service access. Self-service access facilitates data access requests and approvals without any added code or data flow modifications, so analysts, data scientists and engineers can access the data they need quickly and securely without manual access requests via IT service tickets or emails.
- Supports compliance and governance. Comprehensive reporting and monitoring provide a real-time audit trail and visibility into the usage of all data so that companies meet regulatory compliance requirements.
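To make the dynamic-masking capability above concrete, here is a minimal sketch of masking sensitive values in a query result at run-time, based on column classification and the requesting user's clearances. The column names, data-type labels and clearance sets are hypothetical, not any vendor's actual policy model:

```python
# Hypothetical classification produced by continuous data discovery:
# column name -> sensitivity class.
SENSITIVE_COLUMNS = {"email": "PII", "ssn": "PII", "diagnosis": "PHI"}

def mask_value(value: str, data_type: str) -> str:
    """Redact a sensitive value; non-sensitive values pass through unchanged."""
    if data_type == "PII":
        return "***"
    if data_type == "PHI":
        return "<redacted>"
    return value

def mask_row(row: dict, user_clearances: set) -> dict:
    """Apply dynamic masking to one result row based on the user's clearances."""
    masked = {}
    for col, value in row.items():
        data_type = SENSITIVE_COLUMNS.get(col)
        if data_type and data_type not in user_clearances:
            masked[col] = mask_value(value, data_type)
        else:
            masked[col] = value
    return masked

row = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
print(mask_row(row, user_clearances=set()))
# {'name': 'Ada', 'email': '***', 'ssn': '***'}
```

Because the masking decision happens at query time rather than by rewriting stored data, the same table can serve users with different clearances without duplicating or modifying the underlying data flow.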
The Power of Secure, Accessible Data
Organizations no longer need to trade a risky open-to-all approach for a slow, manual need-to-know approach. They can implement an efficient and secure need-to-know access approach that fosters innovation and growth – via a DataSecOps platform.
A DataSecOps platform offers timely and secure need-to-know data access by ‘invisibly’ and automatically embedding access controls, security and governance right into data operations.
With DataSecOps, companies will make more informed decisions, enhance compliance, reduce risk and keep data highly accessible. And less time spent on ad-hoc access and security controls means companies can focus on what’s most important – connecting with customers, building better business models and improving operations.