OPINION: The Whole Social Media System Is Corrupt. Here’s How Legislators Can Fix It.
The conversation around the problems with social media platforms is complex and multi-dimensional, touching on wildly different subject areas including free speech, economic theory, Section 230 of the 1996 Communications Decency Act (CDA), and antitrust law and enforcement. Further, the many new legislative proposals try to address Big Tech as a whole, when in reality there is Big Tech and, within it, a sub-industry called social media. The latter involves a far more complex set of issues: free speech, transparency, algorithms that drive engagement, and, most importantly, the platforms’ protections from liability under Section 230. Thus far, no proposed legislation fully addresses all of these issues (see a list of proposed legislation here). Missing from the barrage of partisan and bipartisan proposals is a very important discussion: how social media platforms manipulate, stifle and often ban free speech, and how they allow third-party actors of good and bad intent to interact with the platforms, without any meaningful restraint, to contaminate, distort, amplify, prejudice and malign free and open debate among their users. Here’s what legislators can do to fix it.
First and foremost, there is fundamentally a lack of consumer/user choice and a total lack of transparency among social media platforms. While superficially similar, social media platforms differ decidedly from older internet communications platforms. Today’s platforms are supercharged by algorithms, connectivity and scale in ways earlier platforms were not when Section 230 of the Communications Decency Act of 1996 was enacted. Further, each platform has its own unique algorithms that bias what is presented to users based on user demographics, preferences and a wide range of other factors the user has little to no control over. This was a choice made by the top management of each platform company: build these algorithms into the software, supposedly to improve the consumer experience, and collect data on users’ behaviors and profiles to help advertisers better reach their target audiences. The result, in the pursuit of advertising revenue, was to leave users/consumers in the dark. Consumers know what happens when they turn on their television or streaming service, or when they call or text a friend. On social media, the consumer has little understanding of and limited choice in how to manage their experience, and no way to know how algorithms affect what they view or drive their engagement.
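To make this concrete, here is a deliberately simplified, hypothetical sketch of an engagement-driven feed ranker. The field names and weights are invented for illustration; real platform ranking systems are vastly more elaborate and entirely proprietary, but the shape of the incentive is the same.

```python
# Hypothetical sketch of an engagement-ranking feed. Fields and weights
# are invented for illustration only.

def engagement_score(post, user):
    """Estimate how likely `user` is to engage with `post`."""
    score = 0.0
    # Boost topics the user has engaged with before.
    score += 2.0 * len(set(post["topics"]) & set(user["interests"]))
    # Posts that already provoke reactions get amplified further.
    score += 0.5 * post["likes"] + 1.0 * post["replies"]
    # Demographic targeting supplied by advertisers.
    if post.get("target_demographic") == user.get("demographic"):
        score += 3.0
    return score

def build_feed(posts, user, limit=20):
    """Show the posts most likely to drive engagement, not the newest."""
    return sorted(posts, key=lambda p: engagement_score(p, user),
                  reverse=True)[:limit]
```

The user sees the output of `build_feed` and nothing of the scoring behind it, which is precisely the transparency gap described above.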
Second, the platforms are not equipped – and never will be, in any time frame that matters – to moderate speech, no matter how many resources each applies to the problem. It is, as many have labeled it, a “whack-a-mole” problem. Artificial intelligence (AI) and machine-learning-driven algorithms are simply not that smart. The science of AI has many years of development ahead before we can rely on it – if ever – and researchers are still seeking a breakthrough in natural language processing that would enable accurate predictions of context and intent. Simply put, as long as algorithms are driving any type of engagement on the platforms, the platforms are enabling, and thus culpable in, any speech that occurs on them. Do they deserve protection under Section 230 as currently written, especially when their design intent puts them in an entirely different category than the communications companies Section 230 was created for? Certainly not, as the authors of the Protecting Americans from Dangerous Algorithms Act have discerned: their bill removes Section 230 protections when a platform’s algorithms are a contributing factor in the spread of speech that incites violence or is related to criminal activity, or when free speech, as protected by the Constitution, is prohibited.
Third, and most fundamental, this “design intent” is what needs special attention. Social media platforms’ raison d’être is to capture and exploit user information and data: to algorithmically charge speech, amass more engagement, and resell user behavior and demographic statistics to advertisers and others. To do this, social media platforms have implemented highly flexible, easy-to-use APIs (Application Programming Interfaces). These APIs enable third-party developer applications to retrieve data and automate engagement activities. It is the APIs – and what the platforms allow third-party developers to do within their systems – that leave social media networks open to manipulation, abuse and hacking. They are also shockingly harmful to privacy, data security, user control and choice.
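As a hypothetical illustration of what these APIs permit – the base URL, endpoints and response fields below are invented, though real platform APIs expose very similar read and write calls – a few lines of code are all a third-party application needs to retrieve user data and automate engagement:

```python
# Hypothetical sketch of a third-party app using a platform API.
# Endpoints and fields are invented for illustration.
import requests

API = "https://api.example-platform.com/v2"   # hypothetical base URL
TOKEN = "BEARER_TOKEN_FROM_THE_PLATFORM"      # see the next section
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Read: pull another user's recent posts and their engagement data.
posts = requests.get(f"{API}/users/someuser/posts", headers=HEADERS).json()

# Write: programmatically "engage" exactly as a human user would.
for post in posts["data"]:
    requests.post(f"{API}/posts/{post['id']}/like", headers=HEADERS)
```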
To address this, legislators would be wise to examine how social media platforms leave the barn door open to hackers and other miscreants who exert outsize influence through misinformation and criminal activity. The most glaring example is the uniformly applied and authorized use of “bearer” tokens, a feature of OAuth 2.0. OAuth 2.0 is a delegated authorization protocol, not an authentication protocol, meaning no measures are taken to ensure that a token is associated with the actual user. Authentication would require a more complex form of access and would be more secure although less flexible – a trade-off that seems more than reasonable given the scale and seriousness of the problem. Using a bearer token to grant API access allows any third-party developer to exchange a user ID and password for a “token”. That token gives the developer access to the same features, data and information that any real user with an account would have – including data about other users. Hence the third-party developer, and any application they build, can gather data on tweets, posts, replies and likes by user, and can engage programmatically just as a user would: liking, posting, retweeting and using other engagement features. This is like Verizon or AT&T letting anyone eavesdrop on any caller or tap into any stream of conversations – something you would never dream of happening without a court order. This attribute alone should raise alarms about the power of the platforms to act like an unrestricted federal government in the context of Fourth Amendment rights. Even with user consent, shouldn’t the user have a choice as to whether their data can be used by others? And if I tweet or post something, shouldn’t I have a choice to make it private, send it just to my followers or a specific group, or opt in to a broadcast message? Privacy, transparency and consumer choice are certainly features that seem essential to include in new legislation such as the Platform Accountability and Consumer Transparency Act (the PACT Act).
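To show how little the “bearer” of a token is vetted, here is a minimal sketch of an OAuth 2.0 client-credentials exchange of the kind defined in RFC 6749: credentials in, bearer token out. The token endpoint URL is hypothetical; the key point is that nothing in the exchange authenticates a human being.

```python
# Minimal sketch of OAuth 2.0's client-credentials ("app-only") flow.
# The endpoint is hypothetical; the shape follows RFC 6749.
import requests

resp = requests.post(
    "https://api.example-platform.com/oauth2/token",  # hypothetical
    auth=("MY_APP_KEY", "MY_APP_SECRET"),             # app credentials only
    data={"grant_type": "client_credentials"},
)
token = resp.json()["access_token"]

# Whoever holds this token now has the corresponding API access; the
# platform never checks who the "bearer" actually is.
api_resp = requests.get(
    "https://api.example-platform.com/v2/posts/search?q=election",
    headers={"Authorization": f"Bearer {token}"},
)
```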
But wait, there is more. The actions of third-party developers, and any code they write to interact with a platform, are subject to “throttling” or “rate limiting” rules and guidelines. These are flimsy at best, because they can be easily circumvented by carefully pacing your code to stay within the time and access limits beyond which the platform will shut a developer’s application down. Moreover, the more “bearer” tokens a developer holds, the faster the developer can process actions in parallel – creating fake engagements, gathering tweets, posting likes and retweets – while remaining undetected by API throttling. Each social media platform has a different formula for rate limiting, but all are easily thwarted. It should therefore come as no surprise that “troll farms” exploded on social media in the last two election cycles and continue to operate. Users are the losers in this fraud-magnet scheme: your personal reactions to posts, likes and shares are directly shaped by the rapid, easily proliferated, automated interactions of “bot” or “fake” accounts.
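How easy is staying under a throttle? Assuming a published limit of 900 requests per 15-minute window – a figure used here purely as an assumption – a developer needs only a one-line delay:

```python
# Sketch of how trivially a client can stay inside a published rate
# limit. The 900-requests-per-15-minutes window is an assumption.
import time

WINDOW_SECONDS = 15 * 60
MAX_REQUESTS = 900
DELAY = WINDOW_SECONDS / MAX_REQUESTS  # one second between calls

def paced_requests(actions):
    """Execute actions slowly enough to never trip the throttle."""
    for act in actions:
        act()
        time.sleep(DELAY)  # stay comfortably under the limit
```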
What makes this glaring design flaw even worse is that there is no impediment at all to developers – of good or bad intent – acquiring ever more “bearer” tokens. As I said earlier, the more tokens they have, the faster they can process data and the more harm they can do. Current throttling and rate-limiting algorithms do not stop a developer from gathering data on tens of millions of users in under ten seconds if programmed to do so. Believe it! Using their own social media handles and accounts, developers can create simple, engaging applications to capture more social media credentials – an application that tracks the activities of your followers, for example. Or they can purchase tokens on the dark web, where such databases are available in abundance.
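And here, in sketch form, is why more tokens mean more throughput: each token carries its own rate-limit budget, so work fanned out across a pool of tokens scales almost linearly. The endpoint is hypothetical, as above.

```python
# Sketch of fanning work out across a pool of bearer tokens, each with
# its own rate-limit budget. Endpoint is hypothetical.
import itertools
from concurrent.futures import ThreadPoolExecutor
import requests

TOKENS = ["token_1", "token_2", "token_3"]  # each has its own limit
token_cycle = itertools.cycle(TOKENS)

def fetch_profile(user_id):
    token = next(token_cycle)  # round-robin across the pool
    return requests.get(
        f"https://api.example-platform.com/v2/users/{user_id}",
        headers={"Authorization": f"Bearer {token}"},
    ).json()

# Three tokens, three workers: roughly triple the harvesting rate.
with ThreadPoolExecutor(max_workers=len(TOKENS)) as pool:
    profiles = list(pool.map(fetch_profile, range(100)))
```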
I must note here that while I find the design intent and flimsy APIs of social media platforms thoughtless and irresponsible at best, in fairness to the platforms, current internet protocols and standards leave gaping holes for hackers – allowing anyone, among other things, to buy IP addresses that make programmatic API calls appear to come from many different places. This certainly exacerbates all of the problems cited above. However, those protocols and standards are an international standards issue, not one that each social media platform can resolve on its own – although the platforms could be a powerful lobbying group for such enhancements. How developers may use a platform’s API is, by contrast, something the platform companies control completely. As an aside, on my wish list is that standards groups revisit the long-fought “Protocol Wars”, which involved egos more than good engineering. Perhaps technologists and others could put down their arms for the good of all humankind? Please?
By changing the security protocols for API access and limiting access to features that are vulnerable to abuse, all social media platforms could eliminate a substantial amount of misinformation, fake engagement and more – almost immediately – allowing free speech to flourish naturally, as it should. Legislators, please take note: addressing API functionality and security is key to thwarting info wars and platform manipulation by third parties, including foreign actors trying to interfere with free and real debate. Without doing so, not much will change for the better, and it would be a tragic loss of an opportunity to end the vast majority of the “problems” with social media.