As technology evolves, so do the methods by which abusers exploit it. Recognizing that the advocacy community was lagging in its understanding of technology, Cindy Southworth established the National Network to End Domestic Violence’s Safety Net Project in 2000. The aim was clear: provide comprehensive training on using technology to assist victims and hold abusers accountable. Today, the project’s online resources include toolkits offering practical guidance, such as creating strong passwords and choosing security questions that an intimate partner could not easily guess. As director Audace Garnett notes, “When you’re in a relationship with someone, they may know your mother’s maiden name.”
Big Tech Safeguards
To extend her reach, Southworth has also advised major tech companies on improving safety for users who have experienced intimate partner violence. In 2020, she became head of women’s safety at Facebook (now Meta), drawn in part by its early measures against intimate image abuse; the platform introduced one of the first “sextortion” policies back in 2012. Today, she focuses on an approach known as “reactive hashing,” which assigns a unique “digital fingerprint” to each reported nonconsensual image. The system streamlines reporting for survivors: flagging one instance prevents duplicates from circulating.
Another concern is “cyberflashing,” the sending of unsolicited explicit photos. Meta has countered this on Instagram by preventing accounts from sending images unless the two users follow each other. Critics, however, argue that Meta’s approach is more reactive than proactive. Despite the company’s stated policy of removing content that breaches its bullying or violence rules, recent policy changes have allowed greater latitude for abusive speech. As reported by CNN, users may now refer to women as “household objects” and post transphobic or homophobic comments that were previously prohibited.
The duality of technology presents a unique challenge: tools designed for good can just as easily facilitate harm. A location-tracking feature that helps one person stay safe can pose a serious risk to someone being pursued by a stalker. When asked what preventive measures tech companies should adopt against technology-assisted abuse, experts often express frustration. Abusers have, for instance, repurposed parental controls designed to safeguard children in order to monitor adult partners. Companies face a dilemma: those features are essential for child safety, so they cannot simply be eliminated without serious consequences.
Audace Garnett emphasizes the importance of embedding safety considerations into product design from the outset, though she notes this is hard to retrofit into many longstanding products. Some computer scientists point to Apple as a leader in security measures: its closed ecosystem blocks many unauthorized third-party applications, and deliberate friction, such as on-screen notifications, can alert users when their privacy may be compromised. Even these measures, however, cannot guarantee foolproof security.
Over the past ten years, leading US tech companies such as Google, Meta, Airbnb, Apple, and Amazon have assembled safety advisory boards to tackle digital abuse, and their strategies vary widely. At Uber, for example, board members have flagged potential “blind spots,” influencing the creation of customizable safety tools. One significant outcome is Uber’s PIN verification feature, which requires a rider to give the driver a unique code before the trip can begin, confirming that the rider is in the right vehicle.
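The logic behind such a PIN check is straightforward to sketch: the rider’s app displays a short one-time code, and the trip cannot start until the driver enters a matching code. The sketch below is a hypothetical illustration of that idea, not Uber’s actual implementation; the function names are invented for the example.

```python
import hmac
import secrets

def issue_pin() -> str:
    """Generate a 4-digit one-time PIN, shown only in the rider's app."""
    return f"{secrets.randbelow(10_000):04d}"

def verify_pin(expected: str, entered: str) -> bool:
    """The trip may start only if the code the driver enters matches.
    hmac.compare_digest avoids timing side channels in the comparison."""
    return hmac.compare_digest(expected, entered)
```

Because the PIN is generated fresh per trip and known only to the rider, a match confirms the two parties were actually paired by the platform before the ride begins.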
Apple’s strategy has included thorough guidance in the form of a comprehensive “Personal Safety User Guide.” It includes a section titled “I want to escape or am considering leaving a relationship that doesn’t feel safe,” with information on blocking unwanted contacts, gathering evidence, and receiving alerts about unwanted tracking.
Unfortunately, determined and creative abusers often find ways around these safeguards. Elizabeth, who prefers to remain anonymous, recently discovered an AirTag her ex-partner had hidden in a wheel well of her car. Wrapped in duct tape to muffle its alert chime, the device shows how far some individuals will go to exploit technology. In response to growing reports of unwanted tracking via AirTags, Apple has added features that let users locate an unknown device by making it play a sound, but as Elizabeth points out, no protection is foolproof against malicious intent.
Laws Play Catch-Up
As tech companies grapple with preventing technology-facilitated abuse, law enforcement’s responses vary significantly. Lisa Fontes, a psychologist and expert in coercive control, describes troubling encounters she has observed. One victim reported that police dismissed her case, saying, “You shouldn’t have given him the picture.” Others who approached law enforcement about hidden “nanny cams” planted by their abusers were told, “You can’t prove he bought it,” or “There’s nothing we can do.” This inconsistency in law enforcement’s response underscores an urgent need for the legal system to catch up with the realities of technology-enabled abuse and better support those affected.