
 

Notice of Exempt Solicitation

 

NAME OF REGISTRANT: Meta Platforms, Inc.

NAME OF PERSON RELYING ON EXEMPTION: Mercy Investment Services

ADDRESS OF PERSON RELYING ON EXEMPTION: 2039 N Geyer Rd, Frontenac, MO 63131

 

Written materials are submitted pursuant to Rule 14a-6(g)(1) promulgated under the Securities Exchange Act of 1934.

 

This is not a solicitation of authority to vote your proxy.

Please DO NOT send us your proxy card as it will not be accepted.

 

We are writing to urge Meta Platforms, Inc. (“Meta” or the “Company”) shareholders to VOTE FOR PROPOSAL 10 (a shareholder proposal requesting a human rights impact assessment of the Artificial Intelligence (AI) systems driving the Company’s targeted advertising) on the Company’s 2024 proxy.

 

The Shareholder Proposal:

Shareholders direct the board of directors of Meta Platforms, Inc. to publish an independent third-party Human Rights Impact Assessment (HRIA), examining the actual and potential human rights impacts of Facebook’s use of the artificial intelligence systems that drive its targeted advertising policies and practices throughout its business operations. This HRIA should be conducted at reasonable cost; omit proprietary and confidential information, as well as information relevant to litigation or enforcement actions; and be published on the Company’s website by June 1, 2025.

 

Introduction:

Facebook’s business model relies almost entirely on ads: nearly 98% of Facebook’s global revenue is generated from advertising, which amounted to over $130 billion in 20231.

 

Algorithms powered by AI systems are deployed to enable the delivery of targeted advertisements. These AI systems rely on the harvesting of vast quantities of personal data and on algorithmic inferences, which can result in discrimination, the amplification of harmful content and hate speech, and the manipulation of public debate and the democratic process by malicious state and non-state actors2. Algorithms determine the ads that users see3, which enables the exploitation of individuals’ vulnerabilities, perpetuates existing and systemic discrimination4 and marginalization, and can lead to the exclusion of certain groups of people, such as women and older people in the case of job ads5. The data used to target such ads include the personal and behavioral data of Facebook users, which further increases the risk of user privacy violations committed, deliberately or inadvertently, by Meta. In 2019, the U.S. Federal Trade Commission imposed a $5 billion fine on Meta (then Facebook) for such privacy violations, and just last year the European Union imposed a $1.3 billion privacy fine on the Company, the largest such fine in the EU to date.

 

 

_____________________________

1 https://www.statista.com/statistics/277229/facebooks-annual-revenue-and-net-income/#:~:text=In%202022%2C%20Meta%20Platforms%20generated,to%20114%20billion%20U.S.%20dollars.

2 https://cyberdefensereview.army.mil/Portals/6/Documents/2021_winter_cdr/04_CDR_V6N1_Dawson.pdf

3 https://en.panoptykon.org/algorithms-of-trauma

4 https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html

5 https://www.globalwitness.org/en/campaigns/digital-threats/how-facebooks-ad-targeting-may-be-in-breach-of-uk-equality-and-data-protection-laws/

 

   
 

 

Over the last few years, digital advertising has continued to be closely examined by regulators, data privacy and consumer advocates, and customers. Headlines like “Digital Ads Collapse”6 highlight concerns surrounding the practice, such as an increasingly crowded marketplace. By investing in true human rights due diligence processes, beginning with an HRIA, Meta could use its current dominant position to lead the way in centering human rights within its business, which would not only set it apart from its competitors but also set a higher standard for the social media industry.

 

Given the paramount importance of targeted ads to Meta’s business, in addition to the well-documented human rights risks, an HRIA is necessary for the Company and its stakeholders to understand the risks associated with Meta’s targeted ads business model7. Meta endorses the UN Guiding Principles on Business and Human Rights (UNGPs), the authoritative global standard on the role of businesses in ensuring respect for human rights in their own operations and through their business relationships. The UNGPs explicitly state that companies must conduct human rights due diligence to identify and address salient risks and adverse impacts connected with their products and services, particularly if the scale and scope of the impacts are likely to be large. Human rights due diligence processes, including HRIAs, provide the best available evidence that such impacts are subject to structured and ongoing review. Such an approach should certainly be applied to the Company’s single source of revenue.

 

Regulatory, legal, and financial risks persist:

The most transformative legislation to date has come into effect in the European Union (EU). The Digital Services Act (DSA) imposes new obligations on companies operating in the EU, including banning or limiting certain user-targeting practices and requiring that some internal data be shared with regulators and associated researchers. Currently, this transparency and accountability stops at the borders of the EU; however, we know this to be a global problem. Given that, under the DSA, Meta has already set up data collection and reporting infrastructure to provide detailed reporting to EU regulators, it should be even easier for the Company to conduct a global HRIA on these practices. This would allow the Company to assess the feasibility of applying the strong provisions it adheres to in the EU on a wider scale.

 

 

_____________________________

6 https://www.cnbc.com/2022/10/24/facebook-google-face-skeptical-wall-street-this-week-amid-ad-collapse.html

7 https://www.amnesty.org/en/latest/campaigns/2022/07/metas-human-rights-report-ignores-the-real-threat-the-company-poses-to-human-rights-worldwide/ OR https://rankingdigitalrights.org/wp-content/uploads/2019/02/Human-Rights-Risk-Scenarios-targeted-advertising.pdf

 

   
 

 

Very recently, on April 17, 2024, the EU privacy watchdog, the European Data Protection Board (EDPB), stated that Meta and other very large online platforms8 should give users an option to use their services for free without targeted advertising.9 This alone could cause a monumental shift in how consumers interact with ads. There is action in the United States as well: on the same day the EDPB released its statement, the American Privacy Rights Act (APRA) was brought back before a House committee, with much optimism.10 This proposed legislation is designed to establish the first comprehensive federal data privacy law in the United States.

 

Legal risk is also apparent. In the past few years, Meta has been sued by the National Fair Housing Alliance, the American Civil Liberties Union, the Communications Workers of America, the U.S. Department of Housing and Urban Development, and others for human rights and civil rights violations specifically resulting from Facebook’s targeted advertising and delivery systems, which were discriminatory and enabled discrimination by advertisers11. Meta is also facing a long-standing lawsuit alleging that it deceived advertisers by overstating the “potential reach” of its ads and charging inflated premiums for ad placements12, as well as separate legal action over the Company’s purported collusion with fellow advertising giant Google to divide up the ad market13.

 

Additionally, public trust in AI companies is rapidly declining globally14. Over the past five years, trust in AI companies has dropped from 61% to 53%, with the U.S. experiencing a steep 15 percentage point drop (from 50% to 35%). This erosion of trust spans political lines, with low levels of trust among Democrats (38%), independents (25%), and Republicans (24%). Moreover, the technology sector is losing its position as the most trusted industry, now leading in only half of the countries studied by Edelman, compared to 90% eight years ago. Meta should recognize it has substantial work to do to rebuild trust. A true independent assessment of its human rights risks is a crucial step in that process.

 

The Company’s current approach isn’t working:

 

Targeted ads have been the subject of much controversy. Frances Haugen revealed that Meta had long known that targeted ads are detrimental to mental health and body image and contribute to political polarization15. Meta now faces a lawsuit from investors for allegedly violating federal securities laws by making inaccurate statements about the harm its products, funded through targeted advertisements, can cause16.

 

 

_____________________________

8 https://ec.europa.eu/commission/presscorner/detail/en/ip_23_2413

9 https://www.reuters.com/technology/meta-should-give-users-free-option-without-targeted-ads-eu-privacy-watchdog-says-2024-04-17/

10 https://www.theverge.com/2024/4/17/24133323/american-privacy-rights-act-house-lawmakers-legislative-hearing

11 https://www.aclu.org/press-releases/californias-court-of-appeals-rules-that-meta-cant-evade-liability-in-case-claiming-facebooks-ad-tools-violate-users-civil-rights

12 https://www.cnn.com/2022/03/30/tech/facebook-advertisers-class-action/index.html

13 https://www.theguardian.com/technology/2022/jan/14/facebook-google-lawsuit-advertising-deal

14 https://www.edelman.com/insights/technology-industry-watch-out-innovation-risk#:~:text=Trust%20in%20AI%20Companies%20Declining,50%20percent%20to%2035%20percent.

15 https://www.washingtonpost.com/technology/2021/10/03/facebook-whistleblower-frances-haugen-revealed/

16 https://www.cnbc.com/2021/11/15/ohio-ag-accuses-facebook-of-securities-fraud-for-misleading-investors.html

 

   
 

 

Even now, Meta continues to mislead the public on its use of targeted ads. In July 2021, the Company stated that “we’ll only allow advertisers to target ads to people under 18 (or older in certain countries) based on their age, gender and location”. However, it was discovered that, outside of these stated parameters, Meta was still using the vast amount of data it collects about young people to determine which children are most likely to be vulnerable to a given ad, opening the Company to allegations of human rights violations17. Additionally, Meta does not publish data on alleged violations of the policies it does have, making it impossible to know whether those policies are effective18.

 

Meta shows signs of understanding that there are real risks associated with microtargeting. In November 2021, the Company announced that advertisers would no longer be able to use four existing targeting parameters19. This came on top of the decision earlier that year to disallow the targeting of anyone under the age of 1820. Norway’s decision in July 2023 to ban Meta’s targeted behavioral advertising and fine the tech giant $100,000 per day resulted in Meta changing how it processes data for behavioral advertising for people in the EU, EEA and Switzerland, now requiring the explicit consent of users.21 With these moves, Meta is reacting to bad press, bans and fines by regulators, and specific occurrences by nibbling around the edges of the problem instead of looking at the root cause: the overarching targeted ads business model that puts profits over people. Meta’s reactive approach creates unacceptable volatility that leaves shareholders and other stakeholders on edge, waiting for the next ad-driven scandal. True public accountability requires nothing short of a thorough impact assessment.

 

Recently, more evidence emerged to suggest that Meta does not have this problem under control. In March 2022, a bipartisan group of lawmakers sent a letter to the Company demanding an explanation of how the state-controlled China Global Television Network had been able to place ads on Facebook featuring newscasts pushing pro-Russia talking points about the ongoing attacks on Ukraine22. In the absence of ongoing risk management safeguards, such as regular and independent HRIAs, to manage the multitude of real and potential harms linked to Meta’s advertising business model, we are concerned that advertisers will continue misusing Meta’s systems, that Meta will fail to filter infringing content, and that the Company will over-collect personal user data, all of which will remain mainstays of the news cycle.

 

 

_____________________________

17 https://techcrunch.com/2021/11/16/facebook-accused-of-still-targeting-teens-with-ads/

18 https://rankingdigitalrights.org/index2020/companies/Facebook

19 https://www.facebook.com/business/news/removing-certain-ad-targeting-options-and-expanding-our-ad-controls

20 https://www.reuters.com/technology/facebook-will-restrict-ad-targeting-under-18s-2021-07-27/

21 https://www.wired.com/story/meta-behavioral-ads-eu-norway/

22 https://www.axios.com/lawmakers-press-meta-china-ad-policy-cfb33f04-afbf-4b1e-af51-36bbd0b00daa.html

 

   
 

 

And the risk to investors is already clear and present: in January 2023, the Company faced a nearly €400 million fine for breaking EU data rules around targeted advertisements.23 As the article states, “It means Meta will potentially have to change the way a key part of its business works.” This is not a theoretical risk but a fundamental danger to the Company itself, and one that is preventable.

 

The statement in opposition is insufficient:

The Company’s statement in opposition references the salient risk analysis conducted last year. While this is a useful exercise, Meta included exactly zero recommendations for future action and only listed steps it has already taken to “address the potential risk.” This indicates that Meta considers the risk fully addressed and the issue no longer a concern, which is extremely troubling considering, again, that the issue is the Company’s use of AI systems in targeted advertising, which comprises its entire revenue stream.

 

Furthermore, this was an internal assessment with no transparency around how it was conducted. In fact, the report reads more like a piece of marketing material for the Company than a serious examination of its real and potential human rights risks. We believe that when examining something as central as the Company’s entire revenue stream, outside leadership and third-party perspectives are necessary so that the findings are not limited by previously held biases or leanings.

 

On paper, Meta appears to embrace the value of HRIAs as tools that enable the protection of its users and improve the quality of the Company’s services. In the last three years, third parties commissioned by Meta have released abbreviated human rights assessments of major undertakings such as the Oversight Board24 and Meta’s plans to expand end-to-end encryption to more products25. Previous assessments covered Meta’s operations in Myanmar, Cambodia, Sri Lanka, Indonesia26, and others. In describing these assessments, the Company stated:

 

“We’re committed to understanding the role our platforms play offline and how Facebook’s products and policies can evolve to create better outcomes. Engaging independent experts and evaluating our work through the lens of global human rights principles is key to achieving this goal.”

 

That is a laudable commitment, but one that rings hollow when Meta does not apply it to the core of its business. If Meta deemed the topics of the existing assessments important enough to warrant comprehensive reports, then we believe its targeted advertising system clearly meets the threshold of priority that should trigger a similar evaluation.

 

 

_____________________________

23 https://www.bbc.com/news/technology-64153383

24 https://www.bsr.org/en/our-insights/blog-view/a-human-rights-review-of-the-facebook-oversight-board

25 https://about.fb.com/news/2022/04/expanding-end-to-end-encryption-protects-fundamental-human-rights/

26 https://about.fb.com/news/2020/05/human-rights-work-in-asia/

 

   
 

 

Conclusion:

Meta Platforms, Inc., which includes Facebook, Instagram, and WhatsApp, has one of the largest corporate footprints of any entity in the world. The Company’s actions have an impact on both society and the global economy, with more than 3.6 billion monthly active users; a market cap larger than the GDP of over 150 countries; and over $65.4 billion in cash, cash equivalents and marketable securities as of December 31, 2023.

 

The assessment we are asking for would likely cost less than 0.01% of Meta’s cash on hand (0.01% of the $65.4 billion noted above is roughly $6.5 million), the equivalent of a rounding error.

 

This nearly unmatched reach and influence requires an equally unmatched commitment to preserving and respecting human rights across all parts of the business model. That business model relies on a single source of revenue: advertising. Targeted advertising, given concerns around the fairness, accountability, and transparency of the underlying algorithmic systems, has been heavily scrutinized for its adverse impacts on human rights and is the target of significant regulation. This is a material risk to investors. A robust HRIA would enable Meta to better identify, address, mitigate, and prevent the adverse human rights impacts that expose the Company to regulatory, legal, and financial risks.

 

For these reasons, we ask you to vote FOR Proposal 10.

 

 

This is not a solicitation of authority to vote your proxy.

Please DO NOT send us your proxy card as it will not be accepted.