15 May 2023
Securing data-in-use, the data moving inside applications, presents unique challenges. Traditional security measures for data-in-motion and data-at-rest are insufficient because APIs often transport and combine fragments of data from those sources.
In our increasingly interconnected world, data security has emerged as a paramount concern. Traditionally, we've addressed the need to secure data-in-motion—data that is being transmitted between applications and end users, and data-at-rest—data stored in databases and other storage systems. But what about the data moving inside of an application, often between the APIs that constitute it? This is referred to as data-in-use, and it presents a unique set of challenges to secure.
Data-in-motion has been successfully secured using technologies such as Virtual Private Networks (VPNs), Cloud Access Security Brokers (CASBs), and Secure Access Service Edge (SASE). VPNs create encrypted tunnels between users and the networks they access, protecting data from interception or tampering in transit. CASBs provide visibility into cloud-based applications, enforce policies, and detect and respond to threats. SASE combines networking and security services into a single cloud-based platform to provide secure access from any location.
On the other hand, data-at-rest is secured using fine-grained access controls, next-generation Data Loss Prevention (DLP), and data classification technologies. Fine-grained access controls limit who can access data based on user roles and privileges, while next-generation DLP systems prevent unauthorized data exposure by continuously monitoring and protecting data across an organization. Data classification, in turn, involves categorizing data so that it can be easily and efficiently protected, ensuring sensitive data gets the highest protection level.
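To make the later contrast with data-in-use concrete, here is a minimal sketch of the kind of fine-grained, role-based check a data-at-rest control applies. The roles, actions, and policy table are purely illustrative, not drawn from any particular product.

```python
# Minimal sketch of fine-grained access control over data-at-rest.
# Roles, resources, and the policy table are illustrative only.

from dataclasses import dataclass

# Policy: which roles may perform which actions on which resource classes.
POLICY = {
    ("analyst", "read", "reports"): True,
    ("analyst", "write", "reports"): False,
    ("admin", "read", "reports"): True,
    ("admin", "write", "reports"): True,
}

@dataclass
class User:
    name: str
    role: str

def is_allowed(user: User, action: str, resource_class: str) -> bool:
    """Return True if the user's role permits the action on this resource class."""
    return POLICY.get((user.role, action, resource_class), False)

if __name__ == "__main__":
    alice = User("alice", "analyst")
    print(is_allowed(alice, "read", "reports"))   # True
    print(is_allowed(alice, "write", "reports"))  # False
```

Checks like this work because the resource being guarded is a known, stable object with clear boundaries, which is exactly what breaks down with data-in-use.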
Data-in-use is different. It refers to the active data that APIs or data microservices extract from both data-in-motion and data-at-rest to perform an application's functions. This data is seldom complete; APIs typically read and transport specific records from databases, not entire tables. They move byte-ranges from documents, not the complete documents, and combine data fragments.
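To illustrate that fragmentation, the sketch below shows a hypothetical handler that assembles a response from a single database row and a byte range of a stored document. The in-memory stores stand in for a real database and object store, and none of the names come from an actual system.

```python
# Hypothetical handler illustrating data-in-use: the API reads one record and one
# byte range, then combines the fragments. The stores below are stand-ins for a
# real database and object store.

CUSTOMER_TABLE = {
    "cust-42": {"name": "Acme Corp", "plan": "enterprise"},
}

DOCUMENT_STORE = {
    "contract-7.pdf": b"%PDF-1.7 ... full contract bytes ...",
}

def get_record(customer_id: str) -> dict:
    """Read a single record, not the whole table."""
    return CUSTOMER_TABLE[customer_id]

def get_byte_range(doc_id: str, start: int, end: int) -> bytes:
    """Read a byte range of a document, not the complete document."""
    return DOCUMENT_STORE[doc_id][start:end]

def build_summary(customer_id: str, doc_id: str) -> dict:
    """Combine fragments from two sources into one in-flight response."""
    record = get_record(customer_id)
    excerpt = get_byte_range(doc_id, 0, 16)
    return {"customer": record["name"], "plan": record["plan"], "excerpt": excerpt}

if __name__ == "__main__":
    print(build_summary("cust-42", "contract-7.pdf"))
```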
This fragmented and modular nature of data-in-use makes it difficult to track and secure with the same techniques used for data-at-rest. Solutions designed for data-at-rest security work with structured and predictable data sets. In contrast, data-in-use is fluid, moving in and out of an application's components with uncertain data alignment and no clear record boundaries.
A system intending to secure data in APIs could potentially ingest every API spec to understand the data structures, permissions, and data flows. However, collecting these specifications would be burdensome for customers and still fail to provide complete security.
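A rough sketch of what spec ingestion would yield, using a hypothetical OpenAPI fragment: even with every schema collected, the spec reveals the shape of the data but says nothing about who owns the values flowing through each field.

```python
# Sketch of ingesting an API spec to map data structures. The spec fragment is
# hypothetical; real specs would be loaded from YAML/JSON files supplied by teams.

OPENAPI_FRAGMENT = {
    "paths": {
        "/customers/{id}": {
            "get": {
                "responses": {
                    "200": {
                        "content": {
                            "application/json": {
                                "schema": {
                                    "properties": {
                                        "name": {"type": "string"},
                                        "plan": {"type": "string"},
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

def list_response_fields(spec: dict) -> dict:
    """Map each path and method to the field names its 200 response exposes."""
    fields = {}
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            schema = (
                op.get("responses", {})
                .get("200", {})
                .get("content", {})
                .get("application/json", {})
                .get("schema", {})
            )
            fields[f"{method.upper()} {path}"] = list(schema.get("properties", {}))
    return fields

if __name__ == "__main__":
    # Knows the shape of the data, but not its owner or its permissions.
    print(list_response_fields(OPENAPI_FRAGMENT))
```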
Next-generation DLP products can classify the data but cannot determine its permissions and ownership, critical aspects for data-in-use security. For instance, in a multitenant application, it's less about knowing that two customers' data is private and more about ensuring Customer A's data is not accessible by Customer B.
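For example, here is a minimal tenant-ownership check of the kind that matters in that scenario; classification alone would mark both records as private and never flag the cross-tenant read. The record store and IDs are hypothetical.

```python
# Sketch of a cross-tenant check: the question is not "is this data sensitive?"
# but "does the requesting tenant own it?". Records and IDs are illustrative.

RECORDS = {
    "invoice-1001": {"tenant_id": "customer-a", "amount": 1200},
    "invoice-2002": {"tenant_id": "customer-b", "amount": 830},
}

class CrossTenantAccess(Exception):
    pass

def fetch_invoice(requesting_tenant: str, invoice_id: str) -> dict:
    """Refuse to return a record owned by a different tenant."""
    record = RECORDS[invoice_id]
    if record["tenant_id"] != requesting_tenant:
        raise CrossTenantAccess(
            f"{requesting_tenant} requested data owned by {record['tenant_id']}"
        )
    return record

if __name__ == "__main__":
    print(fetch_invoice("customer-a", "invoice-1001"))    # allowed
    try:
        fetch_invoice("customer-b", "invoice-1001")       # blocked
    except CrossTenantAccess as err:
        print("blocked:", err)
```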
Enter Caber CA/CO, a platform designed specifically to secure data-in-use. Rather than merely classifying data, Caber identifies the ownership and permissions that belong to the data as it moves within an application, addressing the unique challenges of data-in-use security.
Caber provides continuous observability, meaning it monitors data-in-use in real time, tracking its movement and usage. This approach enables quick detection and response to any anomalies or security incidents.
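In general terms (this is a generic sketch, not a description of Caber's internals), that kind of observability amounts to emitting an event for every access to a data fragment so its movement can be reconstructed end to end. The event fields below are assumptions for illustration.

```python
# Sketch of data-in-use observability: record who touched which data fragment,
# where it came from, and where it went. Event fields are illustrative only.

import json
import time

EVENT_LOG: list[dict] = []

def record_access(principal: str, source: str, fragment: str, destination: str) -> None:
    """Append one access event so data movement can be traced end to end."""
    EVENT_LOG.append({
        "ts": time.time(),
        "principal": principal,
        "source": source,
        "fragment": fragment,
        "destination": destination,
    })

if __name__ == "__main__":
    record_access("billing-api", "customers-db", "cust-42/name", "invoice-service")
    record_access("invoice-service", "docstore", "contract-7.pdf[0:16]", "billing-api")
    print(json.dumps(EVENT_LOG, indent=2))
```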
Moreover, Caber applies continuous authorization. It validates permissions and data ownership at each stage of data movement and usage, ensuring that only the right entities have access to the right data at the right time.
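Again as a generic sketch rather than Caber's implementation, per-hop validation can be thought of as re-checking, at every service boundary, that the caller is entitled to the specific fragment it is handling. The ownership and entitlement maps here are illustrative stand-ins for whatever authoritative source a real system would consult.

```python
# Sketch of continuous authorization: every hop re-checks that the caller is
# entitled to the specific data it is handling. The maps below are illustrative.

OWNERSHIP = {
    "cust-42/name": "customer-a",
    "cust-99/name": "customer-b",
}

ENTITLEMENTS = {
    "customer-a": {"billing-api", "invoice-service"},
    "customer-b": {"billing-api"},
}

def authorize(service: str, fragment: str) -> bool:
    """Check, at this hop, that the service acts for the tenant owning the fragment."""
    owner = OWNERSHIP.get(fragment)
    return owner is not None and service in ENTITLEMENTS.get(owner, set())

def handle_hop(service: str, fragment: str) -> str:
    if not authorize(service, fragment):
        return f"DENY  {service} -> {fragment}"
    return f"ALLOW {service} -> {fragment}"

if __name__ == "__main__":
    print(handle_hop("invoice-service", "cust-42/name"))  # ALLOW
    print(handle_hop("invoice-service", "cust-99/name"))  # DENY
```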
In a world where APIs are the building blocks of modern applications, securing data-in-use is critical. Caber's unique approach fills this security gap, providing comprehensive observability of how data-in-use moves so you can develop policies to control it, and continuous authorization against those policies so you can verify they are working.