Moral metrics: Are corporate algorithms becoming our new moral authorities?
- Written by Beth DuFault, Assistant Professor of Marketing, University of Portland
Scores help give us a sense of how we're doing – but they're not always neutral. Dilok Klaisataporn/iStock via Getty Images Plus

You check your credit score before applying for an apartment. Your fitness watch tells you whether you slept well enough. A workplace dashboard measures your productivity. Parents can buy devices that track their baby's breathing and heart rate while they sleep.
Increasingly, numbers tell us how we are doing.
These systems promise something appealing: clear feedback about whether we are behaving well. They appear objective, neutral and data-driven. But they also signal a deeper cultural shift, as algorithms define what counts as virtuous behavior.
In other words, we are living in a world where metrics are being translated into moral judgments. As a researcher who has long studied how markets and technologies shape moral responsibility, I've seen how these metrics quietly reshape how people understand themselves and how other people judge them.
Defining the good life
For generations, religious congregations structured everyday life for many people, offering templates for identity and for what a “worthy” life should look like.
As societies grow more diverse, however, and as fewer people affiliate with formal religious groups, faiths’ moral influence on society is waning. With their authority no longer taken for granted, some religious groups market themselves almost like brands: lifestyle choices that one can choose to follow or ignore.
People start to assemble their own sense of right or wrong from a patchwork of sources – and increasingly, that involves for-profit scores, rankings and dashboards.
Credit scoring offers a clear example of how this works. A credit score seems like an objective measure of financial worthiness.
But the actions required to optimize a score define what worthy financial behavior looks like in U.S. society today. It's not just about paying bills on time. Achieving an optimal credit score most often involves having at least one credit card; keeping a low debt-to-credit ratio, which might involve requesting credit limit increases instead of paying down debt; not canceling any credit cards, so that average account length is maximized; and having the "right" credit mix, which often includes a consumer loan. Today, a consumer with no credit cards – something that at one time might have seemed financially virtuous – doesn't develop the kind of "file" that is readily rewarded with a high score, and they might not be able to obtain credit to buy a house or a car.
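To see how such a system rewards a particular pattern of behavior, consider a toy scoring heuristic. The factors mirror those described above, but every weight, threshold and the 300–850 range here are illustrative assumptions; real scoring models are proprietary and far more complex.

```python
# Toy credit-score heuristic. The factors follow the article's description;
# the specific weights and thresholds are illustrative assumptions only.

def toy_credit_score(on_time_rate, utilization, avg_account_age_years,
                     num_card_accounts, has_installment_loan):
    """Return a score in a 300-850 range from five simplified inputs."""
    score = 300.0
    score += 300 * on_time_rate                          # payment history
    if num_card_accounts >= 1:
        score += 25                                      # at least one card
        score += 150 * max(0.0, 1.0 - utilization / 0.3) # low debt-to-credit ratio
    score += 50 * min(avg_account_age_years, 10) / 10    # long account history
    score += 25 if has_installment_loan else 0           # "right" credit mix
    return round(min(score, 850))

# A consumer with no cards and no loans builds a "thin file": even a
# perfect bill-paying record caps out far below the top of the range.
print(toy_credit_score(1.0, 0.0, 0, 0, False))  # thin file -> 600
print(toy_credit_score(1.0, 0.1, 8, 3, True))   # optimized profile -> 790
```

Note what the heuristic rewards: not thrift, but maintaining open credit lines and carrying the "right" products. The consumer who avoids credit entirely scores worse than one who borrows strategically.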
In our work on consumer credit scoring, consumer culture researcher John Schouten and I found that people often incorporate their credit scores into their sense of identity and narrative about their life, interpreting scores as reflections of their character and morality. A high score feels like a sign of virtue. A low score can trigger feelings of shame or failure and a determination to be better.
One consumer described discovering her credit score for the first time as finding out what kind of person she actually was. Another, working to rebuild his score after a medical debt caused a cascade of defaults, related that he checked it every morning to see if he was someone people could trust again.
Moral mirrors
Credit scoring is only one example. Health apps convert exercise, sleep and heart rate into performance indicators. Workplace platforms turn everyday tasks into dashboards, rankings and streaks. Reputation systems rate drivers, sellers and freelancers, often with a single number that stands in for trustworthiness.
Even parenting, one of the most emotional human roles, is touched by this logic. Wearable infant monitors translate babies’ breathing, oxygen levels and sleep patterns into charts, alerts and “insights.” These technologies are marketed as tools for reassurance, but in a 2026 paper, my co-authors and I found that these tools also nudge expectations.
Parents describe feeling that if a device exists that can watch a baby’s breathing all night, then a truly responsible caregiver must use it. “All the parents in our social group have one breathing monitor or another,” one dad said. “My boss has one. If I could prevent something horrible by spending a little money and watching the monitors, and I didn’t, what kind of parent would I be?”
Apps don't just record behavior; they shape it. Oscar Wong/Moment via Getty Images

The emotional weight of that shift is striking. One mother said that she felt guilty on the nights she forgot to charge the device – not because anything had gone wrong, but because she had failed to be watchful in the way the market now defines good parenting. Another said simply, "If something happened and I didn't have it on, I don't know how I could live with myself." The monitor had become less a tool than a test.
Measurement can be genuinely useful. When scores appear precise and impersonal, they can feel more solid than the messy, subjective judgments we make in everyday life. But as historian Jerry Muller lays out in “The Tyranny of Metrics,” scoring systems subtly embed assumptions about what responsible behavior looks like, then reflect those assumptions back to us as if they were simple facts. A high credit score begins to look like proof of moral worthiness. A steady stream of productive hours on a work dashboard looks like evidence of commitment.
As these metrics spread, they start to stitch together a new, data-driven sense of what it means to be a good person. This shows up in ordinary decisions: choosing a loan because it will help your score; taking your phone on a run so it “counts” toward your fitness goals; waking in the night to check a baby only because the app suggests you should. The line between caring for others and optimizing for a number becomes easy to blur.
Into the void
For centuries, religious traditions, philosophers and moral communities have wrestled with what it means to live a good and virtuous life. Algorithmic scoring systems do not claim to answer those questions, but as traditional forms of moral authority weaken among many Americans, I would argue that algorithmic systems are moving into the void.
They do not claim to answer questions about the soul, but they do offer something that can feel almost as reassuring: clear indicators of whether you’re on the right track. A high score, a green check mark, a completed streak – these are small, everyday reassurances that we are, in some sense, measuring up.
The deeper question is how comfortable society is letting these systems become our go-to mirrors for moral self-assessment. Instinctively looking to a number to tell whether someone is doing well as a borrower, worker, patient or parent risks forgetting that numbers can only capture a thin slice of what it means to be a good human being.
Many of these scoring systems are built by for-profit companies with a specific interest in the outcome. They are not designed simply to measure behavior; they are designed to shape it, nudging consumers to continuously improve their scores in ways that make them more valuable, more legible and more profitable to the companies doing the measuring. The goal is not necessarily for you to flourish; it’s for your behavior to benefit corporations.
The next time you check your rating or a ranking and feel a small surge of pride or unease, it may be worth pausing to ask: Whose idea of “good” am I seeing reflected there, and is it really the one I want to live by?
Beth DuFault does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

