The impact of Amazon, Google, Apple and Facebook on our values

I read David Priest's article, "Amazon and Google are making us worse people," and was shaken by how strongly I agreed with the following:

But something else gets lost in the shuffle of this trick. Despite the common misconception that our values always direct our behavior, the opposite is often true. What we do, we come to value.

The problem with Amazon and Google and all the other tech giants restricting our choices goes beyond the material. These companies are actually training us — intentionally or not — for hours each day to act primarily on impulse, convenience and short-term economic self-interest, even when our more deeply held values are at odds with that.

After all, high percentages of people value buying local products (it’s why I drove to two stores before ordering batteries from Amazon). We value privacy (which is why so many struggle to understand the subtle ways we’re losing it all the time). We value choice (which is why so many use ad-blockers and scroll past the promotional content on so many platforms).

But when we’re consistently confronted with the choice between convenience and inconvenience, a lower price tag and a higher one, it’s natural to respond to our immediate, material concerns. And each time we do that, we have to make a second choice: Do we care that we’re making decisions at odds with our values, or do we just refashion our values to align with our behavior?

Changing Attitudes by Changing Behavior

Although it might not have surprised you to hear that we can often predict people’s behaviors if we know their thoughts and their feelings about the attitude object, you might be surprised to find that our actions also have an influence on our thoughts and feelings.

https://opentextbc.ca/socialpsychology/chapter/changing-attitudes-by-changing-behavior/

fMRI and Machine Learning to Detect Suicidal Ideation

This is fascinating and important research: the ability to distinguish actual self-harm from the mere thought of it. There are clearly huge philosophical issues around this type of fMRI application (think Minority Report). Hopefully, we as a society can build a sound moral compass for leveraging such tools to save lives.

H/T: https://machinelearnings.co/ for surfacing the piece below:

“Researchers at Carnegie Mellon and the University of Pittsburgh analyzed how suicidal individuals think and feel differently about life and death, by looking at patterns of how their brains light up in an fMRI machine. Then they trained a machine learning algorithm to isolate those signals… The computational classifier was able to pick out the suicidal ideators with more than 90 percent accuracy. Furthermore, it was able to distinguish people who had actually attempted self-harm from those who had only thought about it.” – Megan Molteni, science writer, WIRED
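
To make the quoted result a little more concrete, here is a minimal sketch of the kind of classification setup it describes: one feature vector per participant, a simple classifier, and leave-one-out cross-validation to estimate accuracy. Everything below is illustrative: the synthetic numbers stand in for fMRI-derived activation signatures, and Gaussian Naive Bayes is just one plausible classifier choice, not necessarily the study's exact pipeline.

```python
# Minimal sketch of classifying "ideators" vs. controls from per-person
# feature vectors. The data here is synthetic; in the real study the
# features would come from fMRI activation patterns, and the sample
# sizes and preprocessing are hypothetical choices for illustration.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical cohort: 17 participants per group, 30 features each
# (e.g., voxel-cluster responses to concepts like "death" or "trouble").
n_per_group, n_features = 17, 30
ideators = rng.normal(loc=0.3, scale=1.0, size=(n_per_group, n_features))
controls = rng.normal(loc=-0.3, scale=1.0, size=(n_per_group, n_features))

X = np.vstack([ideators, controls])
y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = ideator

# Leave-one-out cross-validation: train on all participants but one,
# predict the held-out person, and average accuracy across folds —
# analogous in spirit to "picking out the suicidal ideators".
scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```

The modeling itself is fairly standard; the hard and ethically fraught part is the data, which is exactly where the Minority Report concerns above come in.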