I also want you to realize that anything bad that you see on the platform is a symptom of Mark Zuckerberg's unwillingness to rate-limit or sufficiently moderate the platform. Logically speaking, one would think that Meta would want you to have a high-quality Facebook experience, pruning content that might be incendiary, spammy, scammy or unhelpful, or at the very least ensuring that what you see comes primarily from people within your own network. But when your only concern is growth, content moderation is more of an emergency measure.
And to be clear, this is part of Meta's cultural DNA. In an interview with journalist Jeff Horwitz for his book Broken Code, Facebook's former VP of Ads and Partnerships Brian Boland said that "building things is way more fun than making things secure and safe…[and] until there's a regulatory or press fire, you don't deal with it."
Horwitz also writes that Meta engineers' greatest frustration was that the company "perpetually [needed] something to fail — often fucking spectacularly — to drive interest in fixing it." His book describes Meta's approach to moderation as "having a light touch," treating it as "a moral virtue": the company "wasn't failing to supervise what users did — it was neutral."
As I've briefly explained, the logic here is that the more stuff there is on Facebook or Instagram, the more likely you are to run into something you'll interact with, even if said interaction is genuinely bad. Horwitz notes that in April 2016, Meta analyzed Facebook's most successful political groups, finding that a third of them "routinely featured content that was racist and conspiracy-minded," with their growth heavily driven by Facebook's "Groups You Should Join" and "Discover" features, algorithmic tools that Facebook used to recommend content. The researcher behind the analysis added that "sixty-four percent of all extremist group joins are due to our recommendation tools."
When the researcher took their concerns to Facebook’s “Protect and Care” team, they were told that there was nothing the team could do as “the accounts creating the content were real people, and Facebook intentionally had no rules mandating truth, balance or good faith.”
Meta, at its core, is a rot economy empire, entirely engineered to grow metrics and revenue at the expense of everything else. In practice, this means allowing almost any activity that might "grow" the platform, even if it means groups that balloon by tens or hundreds of thousands of people a day, or letting people friend 50 or more people in a single day. It means allowing almost any content other than that which it's legally required to police, like mutilation and child pornography, even if the content it allows in makes the platform significantly worse.
https://www.wheresyoured.at/were-watching-facebook-die/
Ed Zitron's Where's Your Ed At
We're Watching Facebook Die
Like this newsletter? You should listen to the Better Offline episode! In the first quarter of 2024, Meta made $36.45 billion - $12.37 billion of which was pure profit. Though the company no longer reports daily active users, it now uses…
the Guardian
US state department falsified report absolving Israel on Gaza aid – ex-official
Stacy Gilbert, who quit post as senior adviser on Tuesday, says report went against consensus of experts
the Guardian
Revealed: how a US far-right group is influencing anti-gay policies in Africa
US anti-pornography campaigning group has advised, promoted and endorsed anti-LGBTQ+ activists and politicians in Uganda
US slows plans to retire coal-fired plants as power demand from AI surges
Israeli forces have created invisible "kill zones" near their operations in Gaza where soldiers are under orders to fire on anyone who is not Israeli military personnel, according to an explosive new report in Haaretz, a prominent Israeli newspaper.
“This kind of indiscriminate killing is illegal and falls far short of any gold standard for civilian harm,” argued Brianna Rosen, a senior fellow at Just Security.
The news casts significant doubt on Israel’s accounting of the number of Hamas militants killed during its operations in Gaza. Israel claims that more than one in four of the over 32,000 Gazans killed since October were members of Hamas, but the rules for making such a designation are loose. “In practice, a terrorist is anyone the IDF has killed in the areas in which its forces operate,” a reserve officer who served in Gaza told Haaretz.
“When a person's status is in doubt, international humanitarian law requires combatants to presume that that person is a civilian,” Ramming Chappell told RS. “This new reporting from Haaretz seems to confirm yet again that the Israeli military is not taking sufficient measures to protect civilians.”
https://responsiblestatecraft.org/us-weapons-israel-2667646466/
Responsible Statecraft
US ships Israel more bombs amid 'kill zone' revelations
A bombshell new report should force Biden to reassess Israel’s human rights compliance, experts say