Why the California Consumer Privacy Act, why now?
California’s CCPA came into force on January 1st, 2020, and became fully enforceable, penalties and all, on July 1st, 2020.
The act has been endearingly referred to as “GDPR Lite”: a more laid-back version of the GDPR that could be passed through California’s legislative process with minimal protest from stakeholders. The CCPA applies against a backdrop of diverse businesses processing personal data.
In anticipation of getting a piece of the new “sharing economy,” countless businesses have cropped up across California which capitalise on the opportunity to cut out the middle-men of their industry.
Business models in the sharing economy bring goods and services directly to consumers, without the stress and liability of managing those goods or services themselves. Think Airbnb, known as “the world’s largest hotel, which owns no property.” Or Uber, the largest ride-sharing service in the world, which owns no cars and (so far) employs no drivers.
Companies participating in the sharing economy rely on data-driven methodologies to maintain healthy profit margins. Facebook can afford to provide you with a “free” service because they profit from the data you generate while engaging with their services. FB’s overall value increases as you post content. California’s Silicon Valley culture has sent a resounding message – data is the gift that keeps on giving. With a sea of new start-ups relying on data to bolster fragile business models, it’s no wonder that the first US state to enact its own privacy legislation is California. The privacy risk consumers face has increased exponentially.
59% of companies believe that data and data analytics are vital to their organisation
29% of businesses surveyed believed that “big data” allowed them to generate new revenue from existing products or services
The dangers of the digital economy
While most digital consumers are wary the moment they input a credit card or phone number online, they may not be aware of the technicalities of data processing, understand nuances of the legal framework, or be familiar with technical terms like “shadow profiling.”
A shadow profile is made up of data harvested from a multitude of sources, collected as you engage with websites, apps and other digital services. Data is combined by linking matching data points. Accurate estimates can then be made about who you are – your gender, health, interests, socio-economic status, and even your criminal background may be accurately determined.
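The linking step described above is what data practitioners call record linkage. A minimal sketch in Python, with entirely hypothetical field names and data, of how records from separate sources can be joined on a shared data point to assemble a single profile:

```python
# Minimal record-linkage sketch: merge records from separate sources
# into one "shadow profile" by joining on a shared data point
# (here, a hashed email address). All fields are illustrative.

def build_shadow_profiles(sources):
    """Combine records from many sources, keyed on a common identifier."""
    profiles = {}
    for source in sources:
        for record in source:
            key = record["email_hash"]  # the shared linking point
            profile = profiles.setdefault(key, {})
            # Fold every non-key field from this record into the profile.
            profile.update({k: v for k, v in record.items() if k != "email_hash"})
    return profiles

# Two unrelated services, one shared identifier:
shopping = [{"email_hash": "a1b2", "purchases": ["prenatal vitamins"]}]
fitness = [{"email_hash": "a1b2", "avg_daily_steps": 9200}]

profiles = build_shadow_profiles([shopping, fitness])
# One identifier now links shopping habits to health signals,
# enabling the kinds of inferences described above.
```

Real-world linkage is far fuzzier (probabilistic matching on names, device IDs, IP addresses), but the principle is the same: each additional matched data point sharpens the estimate of who you are.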
We are also surrounded by a multitude of sensors day-to-day that may be collecting data. Your phone, personal assistant, discounted TV nabbed in the Black Friday sale, and even your refrigerator or vacuum cleaner may contain sensors.
The loss of personal data can have devastating consequences, including, as California’s DOJ remarks, “financial fraud, identity theft, unnecessary costs to personal time and finances… reputational damage, emotional stress, and even potential physical harm.” The DOJ also noted a pressing need for enhanced consumer privacy rights, as “neither state nor federal law have kept pace with these [technological] developments in ways that enable consumers to exert control over the collection, use, and protection of their personal information.”
One of the biggest concerns for consumers is how to prevent data from being sold to and used in unpredictable ways by third, fourth, and fifth parties, as it’s sold on ad infinitum. Chances are, if you are not paying a subscription for your software-heavy product, your data is being used to fill the profit-gaps. The CCPA is intended to provide traceability for this very reason.
In a statement of reasons for the CCPA, the California legislature notes that “although California has been a leader in privacy protection, the law has not kept pace with rapid technological developments and the proliferation of personal information that fuels the internet economy. As a result, consumers are largely unable to control or even understand the collection and use of their personal information by a myriad of businesses and organizations.”
Isn’t the CCPA just another GDPR? Think again...
The intention of the CCPA echoes that of the GDPR. The GDPR states in its preamble that “[technological] developments require a strong and more coherent data protection framework in the Union...natural persons should have control of their own personal data.”
How do the rights granted to consumers in the GDPR compare to those under the CCPA? On the face of it, the rights granted to California’s consumers mirror those enumerated in the GDPR. The CCPA includes the right to be informed about what data is being collected, the right to be informed about how and why your personal data is being used and sold, the right to request deletion of your data, the right to request that companies do not sell personal data, and protections against discrimination for exercising these rights, among others. You can read the full act here.
There are other similarities. The European Data Protection Supervisor (EDPS) has remarked that access to a service must not be made conditional upon the individual being forced to ‘consent’ to being tracked. Both the GDPR and CCPA explicitly prohibit discriminatory practices against consumers who choose to exercise their privacy rights.
However, being compliant with the GDPR does not mean you are automatically compliant with the CCPA.
Prior to July of this year, businesses will need to prepare on the ground by, for example, reviewing their site designs and updating their records of data processing. Consumers who expect the CCPA to protect them to the same degree as the GDPR will need to temper their expectations.
The CCPA is much narrower in scope than the GDPR, excluding certain types of data such as health information, information related to vehicles, and information related to creditworthiness.
Additionally, the CCPA applies only to businesses that meet certain thresholds: annual gross revenue above $25 million; handling the personal information of 50,000 or more consumers, households, or devices; or deriving 50% or more of revenue from selling consumers’ personal information. “Selling” data doesn’t just cover cash in exchange for data, but is wide enough to cover other processes deriving value from data.
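Because the thresholds are alternatives, meeting any one of them brings a California business within scope. As a rough illustration only (not legal advice), the test can be sketched as a simple disjunction; the function name and figures below are this article’s paraphrase of the commonly cited statutory thresholds:

```python
# Hypothetical sketch of the CCPA applicability test: a for-profit
# business doing business in California is covered if it meets ANY
# one of the three thresholds. Illustrative only; not legal advice.

def ccpa_applies(annual_gross_revenue: float,
                 consumers_households_devices: int,
                 share_of_revenue_from_selling_pi: float) -> bool:
    return (
        annual_gross_revenue > 25_000_000            # threshold 1: revenue
        or consumers_households_devices >= 50_000    # threshold 2: volume of data
        or share_of_revenue_from_selling_pi >= 0.50  # threshold 3: data-sales share
    )

# A small ad-tech firm: modest revenue, small user base, but half
# its income comes from selling personal information -> covered.
print(ccpa_applies(2_000_000, 10_000, 0.50))
```

Note how the third threshold catches exactly the data-dependent business models discussed earlier, regardless of company size.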
Unlike the GDPR, which is more free-form in its requirements as to how businesses establish opt-in consent (e.g. opt-ins can be a tick-box, or an “I agree” button), the CCPA requires that covered websites display a link or logo reading “Do Not Sell My Personal Information.” This link may look jarring to consumers, as it makes it obvious that websites are selling consumer data.
The culture of privacy; is data privacy the same in the USA as in Europe?
The EU and USA have markedly different privacy cultures. Experts argue that the EU relies more heavily on written laws to set privacy standards, the legislature pushing specific risk-management rules down onto businesses, who come to view privacy compliance as legal jargon and an administrative burden.
By contrast, and “particularly in the United States, there is a surprisingly deep chasm between privacy law in the books and privacy practice on the ground…the focus is on a reputation-based approach.” In the US, privacy is more closely related to corporate responsibility than litigious risk-management. Regulators push the onus of responsibility onto businesses to determine and drive the appropriate privacy protections within their organizations.
This can be further demonstrated via the “data as a commodity” argument, which is gaining steam in the USA: the idea that data should be viewed as an asset you own and license others to use. Although prohibited in earlier versions of the act, the CCPA now allows financial incentives or price differentials to compensate for profit gaps. If a consumer doesn’t want to consent to having their data used, they can simply pay to avoid compromising their data.
How American business empires were built on consumer data and loose terms of privacy
The payment model is currently being marketed as a potential legal “solution” for businesses that have built fragile business models relying mostly on data to generate profit. A great example of a business with such a model is Google. Google offers a free service directly to the consumer – you can Google something, or use services like Gmail, Google Maps, and Google Docs without a monetary charge. The business model is fragile because it relies on data: without data, there is no business.
Google can afford to stay on the forefront of technological development, have brick-and-mortar locations globally employing thousands of people, and provide consumers with top-tier services and ongoing support by using the data they collect from you. In 2018, 85% of Google’s revenue came from ads. Let’s look at targeted ads as an example.
Targeted ads work by collecting, analyzing, and processing consumer data. This data processing is used to generate ads that are relevant to you. Data may be disclosed or sold to third parties. If the legal privacy landscape changes significantly, prohibiting Google from using and selling personal data as it can be done today, the business model collapses.
A great example of what could happen if the tide were to turn in favor of stronger privacy regulation comes from the recent Senate hearings with Facebook, following the Cambridge Analytica data scandal.
During the hearings, Senator Grassley asked Mark Zuckerberg, “Are you actually considering having Facebook users pay for you not to use the information?” He replied, “I think what Sheryl [Sandberg, Facebook’s COO] was saying was that, in order to not run ads at all, we would still need some sort of business model.”
Mark Zuckerberg stated that Facebook’s business model would fall apart if the company were prohibited from running ads. He implied that the burden of making up the lost profits could fall to consumers, perhaps through Facebook adopting a “data as a commodity” approach.
What snail beer can teach us about the ethical obligations of data processing
I can imagine Grassley admired the chutzpah – no other industry would have the tenacity to argue such a point. On the face of it, it seems like a reasonable business solution. But let’s put it in the context of consumer protection law and food standards, via one of the best-known Scots law cases – Donoghue v Stevenson.
Mrs. Donoghue was treated to a ginger beer by her friend at the Wellmeadow Café in Paisley, one afternoon in 1928. Much to the horror of everyone involved, a lone decomposed snail was found at the bottom of her beer, eventually resulting in Mrs. Donoghue’s hospitalization for gastroenteritis and shock. The case was fundamentally about negligence, as calculable harm was caused to Mrs. Donoghue. Donoghue v Stevenson set off a chain reaction that eventually resulted in the modern concept of a “duty of care” toward consumers.
Now let’s say the “eau d’escargot” ginger beer represents Facebook processing data negligently, such as selling data to third parties like Cambridge Analytica without appropriate controls. Mrs. Donoghue represents the consumer, harmed by the detrimental data processing.
The equivalent would be if the drinks manufacturer, Mr. “Facebook” Stevenson, had built his entire business model on the continuous sale of snail beer, maintaining a predilection for not telling his customers about the snails. Imagine if Stevenson testified in court that “snails” were his thing, and not only was he not prepared to stop selling snail beer, but henceforth intended to charge any customers who didn’t want a snail in their ginger beer extra for the privilege.
After all, how else was he going to pay to keep the Café doors open after all of this negative press?
CCPA: A step in the right direction, but not a complete solution
Proposals for a payment model have raised a number of equity concerns for the DOJ, as “low-income groups may be more likely to give up their personal information in exchange for services while high-income groups will pay the service fee to protect their data.” In effect, big data is arguing for “technofeudalism.” To continue with Facebook as the provocative example…
Facebook represents the crown: the technology creator and controller. Since Facebook determines access to, use of, and the conditions of service for its platform, it also represents the nobility in this example.
Facebook users upload their own content to the platform, a process that New York Times bestselling author Nicholas Carr cleverly named “digital sharecropping.” Creating content increases the value of Facebook over time, generating profits for the company that consumers will never see. Ultimately, consumers pay for access to the platform with their data (and maybe, in the future, with cash or check too). The consumer represents the peasantry, and always gets the short end of the stick.
If differential pricing models become an industry standard, the CCPA could unintentionally contribute to a ‘privacy class system,’ where higher socio-economic groups are able to pay to protect their personal information and disadvantaged groups have no choice but to allow their data to be used.
Still, these services are so ingrained in our education system, jobs, and daily lives that any move to regulate them has to navigate the complex short- and long-term consequences diligently. Some of these technologies are so pervasive that not having access to them puts consumers at a measurable social or financial disadvantage.
While the CCPA is a step toward greater transparency and moves some power back into the hands of the consumer, there is still work to be done.
Remember, terms and conditions may change.