
Progress made but more work to do, say investors

The global investor initiative that came together following the Christchurch terrorist attack to engage with social media companies on preventing the spread of objectionable material has concluded, saying progress has been made but that a continuous focus on the evolution of preventative safeguards is still needed.

In response to the 15 March 2019 attacks, New Zealand’s government-owned investors, supported by 105 global investors representing approximately NZ$13.5 trillion, began a collaborative effort to engage the world’s three largest social media companies (Facebook, Alphabet and Twitter) to strengthen controls to prevent the livestreaming and dissemination of objectionable content.

“Our thoughts remain with the victims of that horrific act. In response to the violence and its amplification across social media, our initiative came together to pressure the social media companies to increase their focus on preventing objectionable content from making it onto their platforms and being unwillingly seen by innocent users,” says NZ Super Fund Senior Investment Strategist, Responsible Investment, Katie Beith.

“By speaking out, we added an investor voice to the pressure these companies are under to take accountability and to reform. We see the successful management of these issues as critical to the long-term success of the companies. We do not expect companies to pursue profit at all costs to society, and we expect them to carry out their duty of care with absolute resolve. While we have seen some good progress - the platforms have all moved to strengthen controls to prevent the livestreaming and distribution of objectionable content - the goalposts will keep moving, and the companies need to remain focused on managing this.

“Initiative members held meetings with key executives to express our concerns about how content risk is managed by the respective companies. We remain frustrated and disappointed, however, that all three companies have continually declined our requests to meet with Board members to discuss our issues of concern.

“An important achievement of the initiative came in late 2020, when Facebook informed us that it had strengthened the Audit and Risk Oversight Committee charter to explicitly include a focus on the sharing of content that violates its policies. It included a commitment not just to monitor and mitigate such abuse, but also to prevent it.

“This notable improvement is directly attributable to this engagement and represents a real strengthening of governance and accountability for the Board on this issue. It puts the company on the front foot in working towards prevention of the issue rather than just fire-fighting inherent problems.”

To conclude the engagement, independent external research was commissioned to help the group of investors assess whether the changes made by the companies are appropriate for the scale of the problem. The research also examined the different types of emerging regulation on this issue.

The research was undertaken by Brainbox Institute, a New Zealand-based consultancy specialising in the analysis of issues at the intersection of technology, politics, law and policy.

Brainbox found that the measures put in place by the platforms are likely to be highly effective in mitigating the scale at which objectionable content of a similar type to the Christchurch terror attack can be disseminated online. It also noted, however, that the platforms are unlikely to be able to entirely prevent a similar incident in the future.

Brainbox’s research identified robust legislative mechanisms that investors can look for, and advocate for, as regulation in this area emerges. In its view, increasing the transparency and auditability of content moderation systems is the key area that will drive improvement.

“While we take some reassurance from Brainbox’s findings, the companies must continue to find ways to prevent objectionable content from making it onto their platforms and being viewed by users,” says Beith.

The most effective means of dealing with a live crisis and the sharing of related objectionable content comes from cross-platform, collaborative efforts. In the specific case of violent, terror-related online content, the use of a shared hash database administered by the Global Internet Forum to Counter Terrorism (GIFCT) and the development of a ‘Crisis Response Protocol’ as part of the Christchurch Call are key mechanisms for suppressing objectionable content that stems from a live crisis.

These measures provide a procedure for platforms to rapidly coordinate, identify and classify objectionable content, and add information to a shared database so duplicate uploads can be quickly identified. Nevertheless, they are not foolproof and do have some limitations.
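To illustrate the idea at a high level, the sketch below models how a shared hash database lets platforms flag duplicate uploads. It is a simplified, hypothetical example: the function names are invented, and real systems such as GIFCT’s use perceptual hashes of images and video so that re-encoded or slightly altered copies still match, whereas this sketch uses a plain cryptographic hash that only catches byte-identical copies.

```python
import hashlib

# Hashes of known objectionable content, contributed by participating platforms.
shared_hash_db = set()

def register_objectionable(content: bytes) -> str:
    """A platform classifies content as objectionable and shares its hash."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_db.add(digest)
    return digest

def is_known_duplicate(upload: bytes) -> bool:
    """At upload time, any platform checks new content against the shared database."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_db

# Once one platform registers a file, identical re-uploads are flagged everywhere.
register_objectionable(b"offending video bytes")
assert is_known_duplicate(b"offending video bytes")
assert not is_known_duplicate(b"unrelated video bytes")
```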

“The issue of content moderation is becoming one of the defining legal and socio-political issues of our time. It deserves its own body of specialist expertise spanning academia, law and policy. We urge the companies to open up their platforms to independent scrutiny of policies and related decisions and actions. This will, in due course, drive effectiveness, improvement and accountability.

“We believe the companies are only at the start of their journey. They must keep this issue elevated as a core focus of the Executive and Board, with considerable resourcing and open and honest reporting on progress between boards and investors.”



Social Media Collaborative Investor Initiative Conclusion Note

Brainbox's full report