
The Human Values Problem: The Other Half of AI Safety

Ray Dela Rama · March 2, 2026
AI Safety · Human Values · AGI · Business Ethics

This essay is addressed to three groups of people. First, the researchers, engineers, leaders, and decision-makers building AGI right now. People whose work most directly determines what AGI becomes and who it serves. Second, business owners, especially food business owners who are struggling to scale and want to do it the right way. Third, anyone concerned about what AI means for the future of the world. If you are in any of these groups, this is for you.

The people working on AI safety are focused on making sure AI does what humans want. That work is necessary, and brilliant people are doing it. But there is a second problem just as dangerous, and it does not get nearly the same attention: what if the things that humans want cause unnecessary harm, even when they do not know it? Most people causing harm are not choosing to cause harm. They are acting on values they learned from systems that rewarded those behaviors and hid the damage they produced. This is the human values problem.

Every major problem humans have created or worsened traces back to human values. War is humans choosing domination over cooperation. Corruption is humans choosing personal gain over collective good. Poverty is humans choosing systems that take rather than distribute. Environmental destruction is humans choosing short-term gain over long-term consequences. Discrimination is humans choosing to treat some people as worth less than others. Preventable disease is humans choosing profit over access to medicine. Abuse, manipulation, exploitation, and betrayal are humans choosing to treat other people as less than fully human. Bad values are not one cause among many. They are the root cause underneath almost every cause.

With AGI coming within years, bad values get an amplifier unlike anything in human history.

I arrived at this problem the hard way, not the credentialed way. I have a bachelor's degree in civil engineering with an expired license. I switched into engineering after one year of behavioral sciences because people said I would earn more money that way. My family is not well-off, so I did what they said. My resume is four jobs across three countries, all of which paid poorly and had toxic environments.

The people who understand broken systems most clearly are often the ones who spent years surviving inside them, not the ones who studied them from a distance. I arrived at this through close to three decades of being surrounded by people with bad values. In my family, my schools, my workplaces, everywhere I looked. Being told, over and over, that adopting those same values was the only way to succeed in life. I refused. And that refusal is where this essay begins.

I found myself asking questions I couldn't stop thinking about. Why is learning how to learn not being taught in schools, in workplaces, or in households? Why are there so many people with bad values everywhere in the world? Why isn't anyone solving these on a worldwide scale? The questions kept multiplying, each one pulling me further from what I had been told was true and closer to what I could actually see in the real world.

Then one day I came across a quote: "You can keep pointing your fingers and blaming yourself or others for whose fault the problem is. But nothing would ever happen. Instead, ask yourself: is this your problem to solve?" The obvious answer was no. But the honest answer, after sitting with it for a while, was something different. I had been carrying the exhaustion of watching unnecessary harm for so long that saying "this is not my problem" finally felt more dishonest than trying.

I started a company called Proven Success and began building Revi. It is being built to tell every business owner and employee exactly what to do next in their specific situation. The goal is to help them solve their biggest problem faster, with less risk and less effort, while generating the largest documented record of good-values human behavior ever built.

This essay is not about that product. It is about what I realized while building it, and about a problem I believe every person working on AGI needs to understand and prioritize solving before it is too late to matter.

I'm writing this essay as a direct argument and a specific proposal to the people building AGI: that good-values interactions generated at scale should become part of what shapes AGI during development, not something that happens separately and too late.

My proposal has two conditions. First, we need compute power to reach the scale where the training data becomes meaningful. Second, we need skilled teammates who have both the ability to solve hard problems along the way and the ambition to actually solve the human values problem at a worldwide scale. The most direct path to both is joining a frontier AI organization that already has compute, already has skilled teammates, and is already taking the human values problem seriously.

DEFINITION OF TERMS

The rest of this essay depends on these terms being understood precisely.

AGI, or Artificial General Intelligence, is a system that can do everything the best human minds have ever done, across every field. Not just solving something in physics or chemistry, but actually coming up with a new theory of physics the way Einstein did with general relativity. Not just making art, but inventing entirely new forms of art the way Picasso did, something no one had ever seen before. Not just understanding the body, but controlling it the way elite athletes do. This is how Demis Hassabis, the CEO of Google DeepMind and one of the people closest to building it, describes AGI. Today's AI systems are nowhere near that definition.

Human values are the decision-making filter you use when choosing what to do. Not what you say matters, but what actually shapes your choices when you act. Not all values work the same way. Good values are decision-making filters that let everyone live well. The more widely they spread, the better life gets for everyone. Bad values are decision-making filters that cause unnecessary harm to others. The more widely they spread, the worse life gets for everyone.

Harm is damage to a person's ability to live well: their safety, their health, their dignity, their real choices, their connections with others. Not all harm is the same. Unnecessary harm is damage that serves no purpose worth keeping. It builds nothing, teaches nothing, and could end right now without losing anything that makes life good. Necessary harm is damage that comes attached to things that make life genuinely worth living. The pain of a hard challenge you chose. The grief of losing someone you love deeply. The cost of real growth. You would not give up what caused it, even knowing the price.

When AGI arrives, the values of the people directing it, and the values embedded in what it was trained on, will determine everything about what it does with that capability. That is the problem this essay focuses on.

WHAT THE PEOPLE CLOSEST TO AGI ARE ALREADY SAYING

"Move fast and break things is exactly what we should NOT be doing, because you can't afford to break things and then fix them afterwards," said Demis Hassabis, CEO of Google DeepMind. With AGI, fixing the system does not fix the damage. Some consequences happen faster than humans can respond, at a scale that cannot be undone even after the problem is corrected.

He also said: "AGI is gonna require global coordination. And I worry that humanity is increasingly getting worse at that rather than better." We need to coordinate globally on how AGI gets built and deployed safely. But we are getting worse at coordinating, not better. And the reason we are getting worse is that the values that influence most human systems right now make coordination incredibly hard. Values of domination rather than cooperation. Taking rather than mutual benefit. Deception rather than honesty. These values create conflict and breakdown. They make it very hard to coordinate at the scale AGI requires.

He has also described the timing in a way most people have not fully absorbed: the impact of AGI on wider society is overhyped in the short term, meaning the next few years, and still under-appreciated in the medium to long term, meaning the next decade and beyond. The noise about AI today, the productivity tools, the chatbots, the image generators, is mostly distraction from what’s actually coming. The real transformation, when AGI arrives, will be deeper than most people are prepared for.

Shane Legg, co-founder of DeepMind, asked how we can make AGI genuinely ethical. Not just capable, but ethical in proportion to its capabilities. He asked what it means for society when human intelligence is dwarfed by something far beyond it, whether that produces massive inequality where people who can no longer contribute economically get left completely behind. He said openly that the current system, where people exchange their labor for access to resources, may not work the same way anymore, and that we may need different ways of doing things entirely. And he is calling for more people to think about what the positive future looks like. Economists, philosophers, psychologists, ethicists, anyone who can help imagine what a world that genuinely benefits from AGI actually feels like to live in.

Making AGI genuinely ethical begins with the humans directing it having values that let everyone live well. The more widely these good values are spread, the better life gets for everyone. What I’m describing is one part of the answer to Legg's call.

PART 1: THE PROBLEM WORTH SOLVING

A system that can perform all the cognitive work humans can do, operating ten to one hundred times faster than humans can think, replicated across billions of instances at the same time, directed by humans whose values were shaped by broken systems. That is what is arriving within years, possibly before the end of this decade according to the people closest to building it, including Hassabis himself.

The biggest threat this creates is not from the system going rogue or pursuing goals humans did not intend. The biggest threat is from humans causing unnecessary harm to other humans because of the values they hold. This has always been true throughout history. Wars, genocides, exploitation, oppression, environmental destruction, and countless smaller harms all come from humans acting on values that, when used by more people, make it worse for everyone to live well. With AGI, the scale of that harm becomes unlike anything in human history.

Most humans causing unnecessary harm through bad values are not choosing to cause unnecessary harm. They learned their values from systems that rewarded certain behaviors and hid the long-term damage. Over time, those individually invisible harms pile up into systems so broken that fixing them feels incredibly hard. The problem is not that people are evil. The problem is that people hold values they never chose consciously, in systems designed to reward those values, without ever seeing clearly what those values cost everyone, including themselves, over time.

Mass surveillance systems already operate in certain countries today, with current AI that is nowhere near AGI. Governments already use tools to monitor dissent before it can organize. Social credit systems already affect whether people can travel, borrow money, or find work based on their tracked behavior. These are not predictions. They are running right now. When AGI arrives, systems like these become total. Not because AGI went wrong, but because it works exactly right for the people directing it. The people operating those systems are not choosing to cause unnecessary harm. They have values that make control feel like protection, surveillance feel like security, oppression feel like order. They learned those values from systems that rewarded them. AGI does not change their values. It gives those values unlimited reach.

Algorithmic wage management already exists in gig work platforms. These systems already figure out exactly how little to pay each worker before they quit, adjusting incentives in real time based on each person's financial situation and behavior. Pricing algorithms already take the maximum each customer will pay, charged individually based on what the algorithm knows about them. Companies already know far more about their workers and customers than those workers and customers know about how the systems actually work. These patterns exist today with current AI. AGI makes them complete, invisible, and very hard to escape. The executives running these companies are not, in most cases, sitting in rooms choosing to cause unnecessary harm. They are responding rationally to systems that reward taking and make the human cost invisible. They learned their values from the same broken systems everyone else learned from.

Social media algorithms already discovered that fear, uncertainty, doubt, anger, outrage, and empty entertainment drive more engagement than truth or genuinely useful information. They’re already optimized for division because division keeps people on the platform. The documented result is rising conflict between groups, falling trust in institutions, and democratic systems struggling to function. This happened with recommendation systems, not AGI. When AGI can generate persuasion that looks exactly like reality, personalized to each person's specific psychology, delivered at machine speed across every channel at the same time, the effect becomes total. The engineers who built those recommendation systems were not, in most cases, trying to tear apart societies. They were optimizing for the metrics their systems rewarded. Nobody looked carefully at what those metrics were doing to everyone.

This is the pattern. This is the human values problem. Not villains choosing evil. People with values shaped by broken systems, responding to what those systems reward, without ever seeing the full picture of what they are contributing to. AGI does not create this problem. It takes this problem and gives it unlimited power.

This is man-made catastrophe, not AI-made catastrophe. The catastrophe is humans with bad values directing the most powerful tool ever built. The AI is doing exactly what it is designed to do, not going rogue, not doing things humans did not intend. The problem is that humans with values that cause unnecessary harm are directing it, and they often do not know that what they want causes unnecessary harm, because they have never been shown clearly.

Both problems, technical alignment and values alignment, must be solved. Technical alignment ensures AGI does what humans intend. That is necessary and the work being done on it is critical. But if humans intend things that cause unnecessary harm, even perfect technical alignment gets us to catastrophe.

The vision of solving both is a world with universal good values where ninety-five to ninety-nine percent of humans, not literally every person, hold values that, when everyone acts on them, let everyone live well. Values of equal dignity for all humans. Honesty in interactions. Genuine care for others' wellbeing. Cooperation for mutual benefit. Long-term thinking rather than short-term taking. Creating value rather than taking it from others. When AGI arrives in a world where these values are widespread enough, it becomes the tool for reaching universal good values the rest of the way, helping every person understand specifically why good values work better in their situation and spreading that understanding globally at a speed and scale no human effort could achieve alone.

WHAT THE SYSTEM CURRENTLY OPTIMIZES FOR

The current system optimizes for profit, not wellbeing. Profit itself is not the problem. The problem is that profit can come from two very different sources, and the system rewards both equally. There are only two ways to make a profit. You can create value or you can take it.

Creating value means doing work in a way that genuinely helps workers, customers, or the community, where profit comes from solving real problems and treating people well. It looks like this. You make products people actually need and that actually help them. You treat your workers well, paying them fairly, respecting their time and dignity, giving them real input into decisions that affect their work. Your workers stay for years because they want to, not because they have no other option. They know the business deeply. They care about what gets built because they are treated as people who matter. They provide excellent work because they are engaged, not because they are being monitored and threatened. Your customers get genuine value and come back because you actually helped them, not because you manipulated them into returning. You build relationships with suppliers and your community based on real mutual benefit. You operate in a way that will still be healthy many years into the future because you built a great good-values foundation. Everyone involved benefits. The workers. The customers. The community. The environment. And you as the owner.

Taking value means doing work in a way that extracts from workers, customers, or the community, where profit comes from paying as little as possible, charging as much as possible, and pushing costs onto everyone else. It looks like this. You pay workers as little as possible regardless of how much value they create. You treat them as replaceable because replacement is cheaper than paying people what they are actually worth. Your turnover is high because workers leave the moment they find anything better. You cut every corner that increases your profit margin even when the cost falls on workers, customers, or the community. You market aggressively to get people to buy things they do not need or that causes unnecessary harm in the long term, because getting the sale matters more than whether the customer is actually helped. You tell yourself, and often genuinely believe, that you are doing what the market requires. That anyone who did it differently would not survive.

The tragedy is that work that takes value is often right about the short term. In many markets, taking value genuinely outcompetes creating it. A business paying workers fairly faces higher costs than one paying the minimum it can get away with, and in a price-sensitive market the taker can undercut and win. This is true because the upfront costs of paying people fairly and treating them well show up immediately, while the benefits take months or years to appear. The system makes taking value look rational in the short run while hiding what it costs everyone in the long run. This is what needs to change. Not the people, but the information they have about what their choices actually produce.

THE WASTE OF HUMAN POTENTIAL

Most people on earth are spending the majority of their finite time alive doing work they would not choose if they had real alternatives. Work that is slowly wearing them down, work that requires them to suppress who they are, work that uses their body or mind without caring at all what happens to their body or mind over time.

Every person doing work they do not like in order to survive is a person who is not doing the thing they could actually contribute to the world. Every person whose energy goes entirely to getting through the week is a person who has nothing left for the curiosity, the creativity, the care for others, the problem-solving that makes human existence genuinely rich. The discoveries not made, the things not built, the connections not formed, the problems not solved. This is the largest waste of human life in history.

It can be changed.

THE MISSION, THE VISION, AND THE PATH

A world that rewards good values and punishes bad values. When bad values are down to zero and only good values are used, everyone can work on achieving their ultimate goals without causing unnecessary harm to anyone else.

What makes this possible is abundance created by AI and robots, enough to free every person from survival mode. Survival mode means spending all your time and energy just trying to get through the month, with nothing left for anything else. Not eventually, not for some. For everyone.

When both conditions exist together, where everyone is free to pursue their ultimate goals without causing unnecessary harm, human advancement across every field accelerates in ways no era of history has seen. More people working on real problems, with full energy, guided by values that make genuine collaboration possible. Dream outcomes become more reachable. The chance of achieving them goes up. The time it takes goes down. The effort and sacrifice required drops. Every field, from science and medicine to art, engineering, education, and relationships, moves faster and better than anything achievable when most human potential was locked inside survival mode.

Without that foundation, abundance gets captured by whoever holds the most power. Freedom becomes available only to those already free. And the acceleration serves the few rather than everyone. The values problem is not a nice addition to the vision. It must come first.

I want to help the world arrive at the age of AGI with good values spread widely enough that AGI gets built by people who care about everyone, deployed by organizations accountable to everyone, and used to benefit everyone. Not just those with the most power. That is my mission.

My vision is a world where every person has access to a wise life partner that genuinely wants them to live well. Not a person. A tool that knows your situation, knows your goals, and is always working to help you achieve them while guiding you to a path where you don’t cause unnecessary harm to yourself or to others.

The goal for now is to build enough momentum toward good values becoming the norm before AGI arrives, such that AGI finishes the work, rather than locking in the bad values that currently dominate most human systems.

I’m not claiming this is easy or certain. I’m claiming it is worth every effort we can give it.

PART 2: THE FRAMEWORKS

Your values are a filter. The filter is auditable, meaning you can watch your decisions over time and see what is actually operating.

You can test whether a value is good or bad. I call this the universalization test. Here's a simple check: what happens when everyone holds this value? If the answer is that everyone can live well, the value is good. If the answer is that systems break down and people cannot live well, the value is bad. The test is based on what actually happens, not on what people meant to do, and you cannot argue with what actually happens.
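To make the shape of the test concrete, here is a minimal sketch in Python. Everything in it is hypothetical and illustrative: the value names and the yes/no outcomes are my own labels for the examples discussed in this essay, not a real dataset or a formal method. The only point is the structure of the check: a value passes if universal adoption still lets everyone live well.

```python
# Hypothetical sketch of the universalization test.
# The values and outcomes below are illustrative labels, not real data.

def universalization_test(everyone_can_live_well: bool) -> str:
    """Classify a value by what happens when everyone holds it."""
    return "good" if everyone_can_live_well else "bad"

# Ask of each value: if everyone acted on this, could everyone live well?
values = {
    "All humans have equal dignity": True,   # cooperation scales up
    "Honesty in interactions": True,         # trust compounds
    "My race is superior to others": False,  # universal claims conflict
    "Accumulate wealth at any cost": False,  # every exchange becomes taking
}

for value, outcome in values.items():
    print(f"{value!r} -> {universalization_test(outcome)}")
```

The one-line function is the whole idea: the input is an observation about outcomes, not a judgment about intentions.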

Living well means specific things that can be observed. Physical safety, not being harmed, threatened, or living in fear of violence. Basic needs covered reliably: food, shelter, clean water, healthcare, dental care, self-care, exercise, entertainment, transport, and basic travel. All of these should be of good quality. Health that functions well enough to participate in life. Meaningful relationships with people who know you and care what happens to you. Real choices about how to spend your time and energy. Not choices forced by survival pressure but choices that reflect what actually matters to you. The sense that what you do contributes something, that your existence has some effect on other people or the world beyond just surviving. Freedom from domination. Not being controlled or treated as less than fully human, not being denied opportunities because of what group you belong to.

A person without these is not living well by any meaningful definition. You cannot live well while starving. You cannot live well in constant fear. You cannot live well with no real choices about your own existence.

Take the bad value: "My race is superior to others." Apply the test. Every group claims superiority over every other group. Claims that contradict each other and that no group accepts from any other. The inevitable result is conflict, oppression, and violence, flowing naturally from the value when held by everyone. Nobody lives well in this scenario. The value fails completely.

Now take the good value: "All humans have equal dignity and deserve equal consideration." Apply the test. People can make agreements, build systems that consider everyone, cooperate across differences, and trust each other enough to innovate together. Everyone can live well. Nobody needs to be oppressed for this to work. The value actually gets better as more people hold it. It passes.

Some people hear the words "universal good values" and immediately worry. They hear echoes of the authoritarian systems history has shown us, systems that tried to make everyone think the same and produced oppression and violence. Mao Zedong's Cultural Revolution tried to force certain values on everyone and created enormous harm. This fear is legitimate and deserves a direct answer.

There are two entirely different categories of values that must not be confused. The first category is universal good values. Values about how you treat other people and how you operate in systems you share with others. Equal dignity for all humans. Honesty in interactions. Genuine care for others' wellbeing. Cooperation for mutual benefit. These values are about whether your actions let others live well or prevent it. The universalization test shows clearly why these should be universal: when everyone holds them, everyone can live well. The second category is personal preferences. Values about your own life that do not cause unnecessary harm to others. What work you find meaningful, what art or music or food you love, what you believe about the deepest questions of existence, where you want to live, what goals you pursue, what gives your life a sense of purpose. Enforcing uniformity here would be authoritarian thought control. It would be wrong. It would destroy the diversity that makes human existence genuinely rich.

The vision is not everyone thinking the same about everything. The vision is everyone treating each other with the same basic dignity, honesty, and genuine care, while being free to be completely different from each other in every personal way that matters to them. And the way good values spread is through demonstration and voluntary adoption. Showing clearly that good values produce better outcomes, letting people see that with their own eyes, and letting them choose to operate differently based on what they observe. Nobody is being forced. Nobody is being punished for holding different values. That is as different from authoritarian enforcement as it is possible to be.

The same test that tells you which values should spread tells you which harm should end. Think about what it is like to work for a boss who treats you as something to be used rather than a person who matters. Every day you come in not knowing what mood they will be in, whether you will be screamed at in front of coworkers, written up for something that was fine yesterday. Rules change constantly so you are always wrong. Every small act of authority is designed to remind you who has power over you. You cannot question anything without your job being threatened. You are monitored and tracked and controlled down to how long you spend in the bathroom. You are not allowed to sit even when there is no reason to stand. You are not allowed to drink water at your workstation even when you are thirsty. You are not allowed to talk to the person next to you even when there is nothing happening and no reason it would affect your work.

This does not build character. It does not teach valuable lessons. It breaks people systematically and methodically. You become anxious and jumpy. You stop trusting your own judgment because nothing you do is right. You learn to hate yourself a little more each day because you are treated as something worth hating. This unnecessary harm exists only because someone has power over you and is choosing to use it this way. It could be completely prevented by treating workers with basic dignity. Nothing of value would be lost. This is unnecessary harm. It should end.

Harm from preventable disease is the same. A child sick with something curable, in a family that cannot afford the medicine. The harm teaches nothing. It builds nothing. It could be prevented. The only reason it continues is that systems are organized around profit rather than making sure every child who needs medicine has it. That is unnecessary harm.

Think about an athlete training for years toward a difficult goal. They wake before dawn. Their body hurts constantly. They push through pain every single day, sacrifice things other people enjoy, face setbacks and failures and the long plateau where nothing seems to be improving. This is genuine harm. It is also chosen. Nobody forced this. They continue because the challenge matters to them, because pushing themselves toward something difficult is part of what makes their life feel worth living. The harm is teaching them discipline and persistence and how to face failure and keep going. The difficulty is developing them, not breaking them. And the achievement would be very hard without it. This harm should remain.

Grief is the same. When someone you love dies, the pain is devastating. You cannot imagine feeling okay again. But the grief exists because the love existed. The depth of the grief reflects the depth of what you had. Grief should remain, not because we want pain, but because the only way to avoid grief is to never love deeply. And a life without deep love would be far poorer.

Three questions tell you whether the harm you are facing is unnecessary or necessary. Can you opt out without severe consequences? If no, if survival pressure or someone else's power is forcing you into it, then it is unnecessary harm and should end. If yes, if you chose it freely and could stop, move to the next question. Does it serve your growth or help you achieve something that matters to you? If no, it should end. If yes, it should remain. Is it necessary for something that makes life genuinely good, love, achievement, learning, deep relationship? If yes, it should remain. If no, it should end.
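The three questions can be sketched as a simple decision procedure. This is an illustrative model only; the argument names and return labels are my own, and the logic is one reading of how the questions combine: a forced harm is always unnecessary, and a chosen harm is necessary only when it is attached to growth or to something that makes life genuinely good.

```python
# Hypothetical sketch of the three-question harm test from the essay.
# Names and logic are illustrative labels, not a formal framework.

def classify_harm(can_opt_out: bool, serves_growth: bool,
                  enables_something_good: bool) -> str:
    """Return 'unnecessary' (should end) or 'necessary' (should remain)."""
    if not can_opt_out:
        return "unnecessary"   # forced by survival pressure or power
    if serves_growth or enables_something_good:
        return "necessary"     # chosen, and attached to growth or love
    return "unnecessary"       # chosen but pointless; nothing good is lost

# The athlete's training: chosen, develops them, serves a goal that matters.
print(classify_harm(True, True, True))

# The abusive workplace: no real exit, builds nothing, teaches nothing.
print(classify_harm(False, False, False))
```

Note the ordering: the opt-out question is checked first, so no amount of claimed "character building" can justify harm a person cannot escape.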

Here is what the universalization test looks like applied to specific values.

"Accumulate wealth at any cost." Apply the test. If everyone holds this value, every interaction becomes an attempted taking. Every transaction is an opportunity to take rather than to create. Nobody can trust anyone because everyone is trying to take maximum value from every encounter. Agreements become meaningless because people break them whenever it is profitable to do so. Markets break down because buyers and sellers cannot trust each other. Shared resources get destroyed. Everyone takes as fast as possible because leaving anything means someone else will take it first. Systems collapse. This value fails the test completely. Historical evidence is clear. Systems dominated by pure taking collapse. The pattern repeats.

Now look at values that pass. "All humans have equal dignity." If everyone holds this value, cooperation becomes possible at a scale that is currently very hard. People work together across differences because they respect each other. They make agreements because they acknowledge each other's agency and actually intend to keep them. Systems get built that consider everyone because everyone's dignity gets recognized. Innovation moves faster because diverse perspectives get included. Everyone can live well. It passes.

"Honesty in interactions." If everyone holds this value, agreements mean something. Commerce works. Relationships have real depth. Science advances because researchers report what they actually found. Democracy functions because voters can actually evaluate what is true. Trust builds into systems that work better for everyone. It passes.

Values that fail the universalization test do so because they depend on taking from others or treating some people as less than fully human. They might benefit the person holding them if nobody else holds them. But when universalized, they destroy the conditions everyone needs to live well. Values that pass the test create conditions where everyone can live well when everyone holds them. They get better, not worse, as more people adopt them.

Competition itself is not the problem. Competition is how the best things get built. The question is what people are competing to achieve, because that determines whether competition creates value or destroys it.

Good competition is when you are trying to do something better than anyone has done it before. Better product. Better service. Better quality. When you compete by creating more genuine value, the people you are competing against are pushed to create more value too. Standards rise. Everyone who uses what you produce benefits from this. Workers, customers, the community. The competitive dynamic that rewards the best quality, the most genuine service, the most innovative solution. This is competition that creates value. The winner wins by building something genuinely better, and everyone else has to get better too if they want to survive. This competition makes the world better.

Bad competition is when you are trying to take more than anyone else rather than build more than anyone else. Pay workers less. Charge customers more for less. Push costs onto the community or the environment. Find every way to take more from everyone you touch. When you compete by taking more, the people you are competing against are pushed to take more too. Standards fall. Workers get treated worse across the whole industry. Customers get less value. Communities bear more costs. The environment takes more damage. The competitive dynamic that rewards the most taking. This competition destroys value. The winner wins by making everything around them worse, and everyone else has to get worse too to survive. This competition makes the world worse.

The current system does not distinguish between these two kinds of competition. It rewards both equally as long as they produce profit. That is what has to change: not competition itself, but the conditions that determine which kind of competition wins.

There are three kinds of power worth understanding.

Power over others is the ability to make people do what you want regardless of whether it is good for them. You have it because they need something you control. Their job, their safety, their access to something they cannot get elsewhere. And you use that leverage to get what you want from them without needing to give them what they need in return. Power over others is zero-sum. Your power comes from their lack of it. When you increase your power this way, theirs decreases.

Power with others is the ability to accomplish things together that none of you could accomplish alone. It comes from coordination, trust, shared purpose. When more people join, the power grows for everyone involved. This is positive-sum. Alliances, communities, organizations built on genuine mutual benefit. These create power with others. The growth of one person's capacity does not require the reduction of another's.

Power for others is the ability to create conditions where other people can live well and do things they could not do before. Teachers who develop students. Leaders who build organizations that give people the chance to contribute something that matters. Systems designed so that the people inside them can grow into what they are capable of. This is the most powerful form of power because it multiplies. Every person whose capacity you develop becomes someone who can develop others. It builds on itself.

The values that should spread are the ones that lead people toward power with and power for. The values that should end are the ones that lead people toward power over, which takes from others in ways that depend on keeping those people powerless. The moment they gain power, the taking stops working.

These four frameworks work together. Harm, values, competition, power. They give us a way to evaluate any system, any business practice, any human decision, by asking: does this create the conditions for everyone to live well, or does it prevent them? That question is what Part 3 applies directly to AGI, and to the people building it right now.

PART 3: THE OTHER HALF OF AI SAFETY

AGI is being built right now. If the humans directing it cause unnecessary harm through their values, AGI will make that unnecessary harm possible at a scale and speed never seen before in human history. This is true even if AGI works exactly as designed. Even if it does precisely what its builders intended.

Demis Hassabis has said his mission has always been to help the world guide AGI safely through development and into deployment, for the benefit of all of humanity. There are two separate problems here, and both must be solved. Technical alignment ensures AGI does what humans intend. Values alignment ensures humans intend things that do not cause unnecessary harm. Neither alone is enough.

If we solve only the technical problem, we build the most powerful tool in history for humans to cause unnecessary harm with. If we somehow spread good values everywhere but AGI does not do what humans intend, we still get catastrophe. Solving one does not make the other less important. They are both necessary parts of the same challenge.

WHEN THE SYSTEM RUNS ITSELF

The essay so far has framed the problem as bad-values humans directing AGI. That is real and it is dangerous. But what happens when humans are no longer the primary directors of most economic activity? What happens when AGI and AI agents run most businesses, execute most decisions, and optimize most systems? Not because anyone told a specific AGI to cause unnecessary harm, but because the values embedded in how those systems were built and what they were designed to optimize push them toward taking, domination, and short-term gain at any cost, without any single human consciously choosing that outcome.

Algorithmic systems already make hiring decisions, set wages, and determine loan approvals. They optimize supply chains and shape what information reaches billions of people. All of this happens with minimal human involvement in individual decisions. The systems do what they were designed to do, optimizing for the things they were set up to track. The people who designed them were themselves responding to what the systems around them rewarded. Bad values showed up not as conscious choices but as design assumptions and ways of measuring success.

When a founder shaped by those same incentives builds an AI system, they specify objectives based on what they already know how to measure. Short-term profit. Engagement. Efficiency. None of these are wrong on their own. The problem is that they were defined without asking what pursuing them at scale would actually cost the people inside the system. The AI pursues those objectives precisely and at a scale no human organization could match. The taking is not in the AI. It is in the objectives. And the objectives came from values the founder never examined, in a system that rewarded those values and hid their cost. The technical alignment is not the failure. The values embedded in the objectives are the failure. Nobody looked carefully at what those assumptions would actually produce.

With AGI, this pattern does not just continue. It moves faster and reaches further into every part of how people live and work. One person with access to tens or hundreds of AI agents can now do what once required thousands of employees. In that world, the values embedded in those AI agents determine the experience of everyone who works for or is served by that operation. What they optimize for, what they treat as worth protecting, what they are willing to give up for efficiency. If those values are good, that concentration of AI capability in the hands of one person becomes power for others. If those values are bad, it becomes power over others at a scale that has never existed before.

A perfectly aligned AI system does exactly what its objectives specify. If the objectives were designed by people operating on taking values, in systems that rewarded short-term gain at any cost and made the human cost invisible, then the perfectly aligned system pursues those objectives perfectly.

The window before AGI arrives matters for what gets embedded in the systems themselves. What they are built to optimize for, what they are designed to protect, what trade-offs they are built to refuse. Those design decisions are being made right now, by people whose values are being shaped right now, in businesses and organizations and research institutions that are themselves operating on values shaped by a system that rewards taking and hides the human cost. The opportunity is not just to change the values of the humans directing AGI. It is to change what those humans design AGI to optimize for and protect.

THE GLOBAL COORDINATION PROBLEM

There is another dimension of the values problem that makes AGI development more dangerous. Global coordination on AGI is becoming harder, not easier, at exactly the moment when it is needed most.

Developing and deploying AGI safely requires countries, organizations, and researchers to coordinate at a scale humans have rarely achieved even on far simpler challenges. Climate change. Nuclear nonproliferation. Neither of those produced the coordination they required.

Values of domination make people compete when they need to cooperate. When organizations and countries operate on the value of establishing superiority over others, they cannot genuinely cooperate. Cooperation requires some willingness to let the other side benefit too. In AGI development, this shows up as every country worrying that if another country builds AGI first, it gains an advantage that could be used for domination. Every organization worries that if a competitor deploys first, it captures the market and the power that comes with it. These fears drive a race, where every party moves as fast as possible rather than as carefully as safety requires.

Values of deception make coordination very hard to build and maintain. When agreements are made without genuine intention to keep them, coordination that depends on honest communication cannot be built. In AGI development, this shows up as organizations claiming publicly to prioritize safety while privately cutting corners to move faster, and countries claiming to follow agreed standards while secretly pursuing faster timelines. When everyone assumes that everyone else is managing their appearance rather than reporting accurately, trust breaks down completely. Even parties that want to cooperate honestly cannot demonstrate their honesty in a way that others will believe.

The result of all of this is a race toward AGI deployment that cannot be undone once it happens, driven by values that make stopping feel very hard. Each party knows that if they slow down, others will not. Each party assumes others are cutting corners on safety regardless of what they say publicly. Each party faces overwhelming pressure from competitors, investors, or governments to move as fast as possible. The rational individual choice is to move fast, minimize constraints, and seek advantage. But when every party makes that same rational choice, the combined outcome is one nobody wants: an unsafe race toward AGI with inadequate coordination, inadequate safety measures, and a direction that gets set before anyone feels they have the power to stop it.

The reason this is so hard to solve through policy or rules alone is that its root cause is in the values of the people operating within these systems. People with values of equal dignity, honesty, genuine cooperation, and long-term thinking can build coordination tools and actually follow them. People with values of domination, deception, taking, and short-term gain at any cost will find ways around coordination tools regardless of how carefully those tools are designed.

This is why working on the root cause matters even when it is slow. Policy tools and coordination agreements can constrain behavior in the short term, and that work is necessary. But the only thing that permanently solves the coordination failure is changing the values of the people inside the systems that are failing to coordinate. That is what spreading good values through daily business experience is designed to do. Not quickly enough to stop the race that is already happening. But quickly enough, if started now, to shape the values of the people who will be making the next generation of decisions about how AGI gets built and deployed. The short-term race is a real danger this essay cannot fully answer. The long-term coordination failure is the danger this essay exists to address.

MOVE FAST AND BREAK THINGS FAILS WITH AGI

The default approach to technology development is to launch quickly, see what breaks, and fix problems as they appear. With AGI, this approach leads to catastrophe.

Launch quickly. See what breaks. Fix problems as they appear. Iterate based on what you learn. This worked reasonably well for most software because when something broke, the damage was limited. You broke it for a small number of users, in a limited region, in ways that could be patched and updated. The stakes were low enough that learning through failure was acceptable.

The approach that worked for social apps and productivity tools leads to catastrophe when applied to AGI. AGI can perform all cognitive work better than any human. It operates at ten to one hundred times the speed of human thought. Even in a staged deployment, early mistakes can happen faster than humans can respond, at a scale that in many cases produces damage that cannot be fixed afterward.

Speed makes human response very hard. When AGI makes decisions or takes actions, it does so faster than humans can monitor, understand, or stop. AGI optimizing financial markets makes decisions far faster than any human can process what is happening. If it is doing something destructive, enormous unnecessary harm accumulates before humans even understand what is occurring, let alone organize a response. The window between something going wrong and humans being able to understand it is essentially zero. There is no time to react. By the time you see the problem, the damage is already done at a scale that would have taken years to produce under human decision speeds.

AGI optimizing the content that reaches people across global information networks tests variations at machine speed across billions of people. It learns what makes people afraid, what makes them angry, what keeps them engaged. It optimizes for those outcomes because those are the signals it was given. Within months it has completely changed how billions of people think and see the world. By the time humans recognize the pattern, the changes are not in the system anymore. They are in how real humans think. You cannot push a fix that restores how people thought before. The damage is in the people, and it stays there.

Trust between people and institutions is one of the most fragile and valuable things humans have built over generations. It takes decades or centuries of reliable behavior to build and can be destroyed in months by sustained manipulation or failure. When AGI-enabled systems destroy trust at scale, through manipulation, through making it very hard to know what is real, through corruption of the information people use to make sense of the world, that trust does not come back when you fix the system. Trust comes back, slowly, through long patterns of reliable behavior over years. You cannot push an update that restores it.

People's understanding of what is true can be corrupted in ways that do not get fixed by removing the source of the corruption. If AGI generates and spreads false information at scale, and that information gets built into what millions of people believe, you cannot simply delete the false information and have people return to accurate understanding. The false beliefs persist and spread further through normal human conversation and behavior. Correcting false beliefs at scale is far harder than preventing them from spreading in the first place.

Psychological changes that AGI produces in how people think and relate to each other stay in the people even after you fix the system. If sustained AGI-enabled manipulation makes people more fearful, more suspicious of each other, more prone to seeing every interaction as a threat, those patterns of thinking persist. They get passed to children. They get built into cultural norms.

The people whose lives are destroyed by AGI-enabled conflict, oppression, or manipulation do not get an update that restores what they lost.

Social media moved fast and broke things. The broken things turned out to include the information system that democratic societies depend on, the mental health of young people in particular, and the social trust that makes cooperation possible. We have been trying to repair these things for years and making limited progress because what broke includes how billions of people actually think and relate to each other. Those humans do not get updated like software.

We cannot afford to move fast and break things with AGI, because what breaks will break faster than we can respond, in ways that in many cases cannot be repaired afterward. The stakes are too high to rely on iteration after deployment. There is only getting it right or living with consequences that cannot be undone.

We need to get the values of the people directing AGI, and the values embedded in what AGI learns from, right during development. The window for working on the human values problem is now.

WHY WE CANNOT WAIT

Values are being embedded into the data that AGI will be trained on, right now, through every business interaction, every employment relationship, every customer experience that happens in the world as it currently operates. Most of that data comes from a world optimized for taking. If nothing changes, AGI arrives already shaped by millions of interactions that treated taking, domination, and deception as standard operating behavior. That opportunity closes at deployment. We need to make good-values human interactions a meaningful part of what shapes AGI's understanding of how humans operate and what it looks like when humans treat each other well.

Hassabis has said this transition will be roughly ten times bigger than the Industrial Revolution and will unfold roughly ten times faster. Not a century. Closer to a decade. The Industrial Revolution reshaped the entire working world, the working week, the nature of labor, the structure of communities, the meaning of contribution. It caused enormous unnecessary harm during the transition precisely because nobody was prepared for the scale and speed of the change. This time we can see it coming.

The people designing post-AGI systems are being shaped right now by the values they hold and the environments they work inside. That includes the economic structures, the contribution frameworks, the distribution tools, the systems that determine what billions of people do when most traditional employment has disappeared. The hardest questions about what the post-AGI world looks like do not have answers yet. Nobody knows exactly what replaces employment as the primary source of structure and meaning for most people. Nobody knows exactly what a post-money economy looks like or who designs it. Nobody knows exactly how contribution gets recognized in a world where AI does most of the work. The people who will answer them are being formed right now. What forms them is what we do now. That is what Part 4 is about: what we do now, and why business is where it starts.

PART 4: HOW WE SOLVE IT

WHY BUSINESS IS THE LEVERAGE POINT AND WHY THIS WINDOW IS CLOSING

The question I get asked most often when I explain this essay is: why business? Why not government, or education, or media, or religion, institutions that have been trying to shape values for centuries?

Business is where most adults spend most of their time. Most people spend forty to sixty hours a week in a work environment. In that environment, they experience daily what power looks like, whether it is used over them, with them, or for them. They experience what honesty produces, whether telling the truth is rewarded or punished, whether people who are honest do better or worse than people who manipulate. They experience what cooperation produces, whether working together creates better outcomes or whether it just gives others the chance to take credit for your work. They experience what fairness feels like, whether effort and contribution get recognized, whether people in the same situation get treated the same way. All of this, repeated daily over years, forms their understanding of how the world actually works. Not how they wish it worked.

This means that changing what the business environment rewards and punishes changes the values of everyone inside it. Not through argument or inspiration, but through what they experience as normal and what produces good versus bad outcomes for them personally. When a worker spends years in an environment where honesty is rewarded and manipulation is caught and punished, they internalize honesty as how you operate. Not as a moral choice, but as a practical reality. When a worker spends years in an environment where treating coworkers with dignity is normal and produces better outcomes for everyone, they carry that expectation with them everywhere they go.

Business is where outcomes are measurable in ways that make proof visible. You can show clearly whether treating workers well produces better retention, better quality, better customer loyalty, better profitability right now and over time. The evidence is verifiable. You cannot argue with clear evidence that one approach consistently produces better outcomes than another. Markets create pressure to adopt what works. When businesses that operate on good values consistently outperform businesses that take, others notice and copy the approach.

The fear that businesses operating with good values will be destroyed by competitors willing to do whatever it takes is real and worth addressing directly. In some markets right now, taking genuinely outcompetes creating. A business paying workers fairly faces higher costs than one paying the minimum it can get away with, and in a price-sensitive market the taker can undercut and win. This is true and I will not pretend otherwise. But it is true under current conditions. Conditions that AI and good-values people will change. When AI shows each business owner specifically what their turnover is actually costing them, when the full cost of taking becomes visible rather than hidden, the short-term and long-term math start pointing toward good values. And in the transition to a world where AI handles most work, the businesses that have already built the trust, the loyalty, and the reputation that come from treating people well are the ones positioned for whatever comes next. The direction is clear even when the timing is not.

The window where business is the primary lever for values formation is closing. When AGI arrives and handles most of the work that currently employs most people, the forty-to-sixty-hour work week that shapes values through daily experience begins to change. The workplace as the dominant values-forming environment for most adults is a feature of the current world, not a permanent feature of all worlds. It is the reason to move now, while the window is open. The values formed in businesses during this transition period become the values that people carry into whatever replaces traditional employment. The proof generated now becomes part of what AGI learns from. The combined shift in what millions of people experience as normal in their work lives becomes the raw material that shapes the expectations they bring to the post-AGI world. Business is the leverage point because this window has not closed yet.

HOW AI CHANGES THE GAME

The fundamental problem business owners face right now is overwhelming short-term pressure that makes it very hard to see or act on long-term consequences, even when they genuinely want to make better choices.

Monthly loan payments. Quarterly earnings reports. Investors who want returns faster than good values pay off. Competitors who are cutting costs right now and will undercut you if you do not match them. Workers who need to be paid this week regardless of what the picture looks like in one to two years. Customers who choose based on price and immediate experience rather than how the business treated its workers. All of this is visible, immediate, and personal.

When a worker quits because they are treated badly, the full cost does not appear as a single line item on a financial statement. Recruiting, training, lost productivity during the learning period, lost knowledge about how the business actually works, lost customer relationships that workers had built. These costs are spread across time and across different categories in a way that makes it genuinely hard to see and easy to underestimate. When a business builds a reputation for treating people well, the benefit builds slowly over months and years in ways that are hard to connect clearly to specific decisions. The link between a decision made today and its outcome one to two years from now involves too many steps and too much time for human thinking to track reliably under pressure.
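To make the point concrete, here is a rough sketch of how those scattered costs add up for a single departure. Every figure below is an illustrative assumption for a hypothetical small food business, not data from any real one.

```python
# Illustrative sketch: the hidden cost of losing one worker.
# Every number here is a hypothetical assumption, not a measured figure.

hourly_wage = 16.00
hours_per_week = 35

# Costs scattered across categories and across months:
recruiting = 600                          # job posts, interviews, screening
training_weeks = 4
trainer_hours_per_week = 10
training_cost = training_weeks * (
    hours_per_week * hourly_wage          # new hire's paid ramp-up time
    + trainer_hours_per_week * hourly_wage  # experienced worker pulled off their own work
)
# Assume the new hire runs at roughly half speed for about 8 weeks.
lost_productivity = 0.5 * 8 * hours_per_week * hourly_wage

cost_per_departure = recruiting + training_cost + lost_productivity

# At 100 percent annual turnover, a 10-person crew replaces ~10 people a year.
crew_size = 10
annual_turnover_rate = 1.0
annual_cost = cost_per_departure * crew_size * annual_turnover_rate

print(f"Cost per departure: ${cost_per_departure:,.0f}")
print(f"Annual cost at 100% turnover: ${annual_cost:,.0f}")
```

None of these line items appears together on a financial statement, which is exactly why the total is so easy to underestimate.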

This is what Revi is designed to do: show each business owner, in their specific situation, exactly what the numbers actually look like right now and over time with different choices.

Here is what happens to your retention rate if you raise wages in your market.

Here is what that does to your training and recruiting costs over the next year.

Here is what it does to your quality.

Here is what quality improvement does to your customer retention.

Here is what customer retention does to your revenue path over time.

Here is the net effect on profitability, month by month and quarter by quarter, compared to your current approach.

This is not generic theory. This is not what works on average across all businesses. This is what the math actually looks like in your business, with your workers, in your market, given your specific situation.
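Revi's internals are not public, so the following is only a hypothetical sketch of the kind of month-by-month comparison described above. The `project` function and every parameter in it are made-up assumptions for illustration, not Revi's actual model or data from any real business.

```python
# Hypothetical month-by-month sketch: raise wages (higher payroll, lower
# turnover) vs. keep wages flat. All parameters are illustrative assumptions.

def project(monthly_payroll, monthly_turnover_rate, months=24,
            base_revenue=50_000, replacement_cost=5_000, crew=10,
            retention_revenue_lift=0.002):
    """Cumulative profit over `months`, where lower turnover slowly
    lifts revenue through quality and customer retention."""
    revenue = base_revenue
    cumulative_profit = 0.0
    for _ in range(months):
        departures = crew * monthly_turnover_rate
        turnover_cost = departures * replacement_cost
        other_costs = 20_000                  # rent, food cost, utilities, etc.
        cumulative_profit += revenue - monthly_payroll - other_costs - turnover_cost
        # Stable crews compound into better quality and repeat business:
        # revenue drifts up as turnover falls below a baseline rate of 8%/month.
        revenue *= 1 + retention_revenue_lift * (1 - monthly_turnover_rate / 0.08)
    return cumulative_profit

keep_flat = project(monthly_payroll=22_000, monthly_turnover_rate=0.08)
raise_wages = project(monthly_payroll=24_000, monthly_turnover_rate=0.03)

print(f"Flat wages, 24-month profit:   ${keep_flat:,.0f}")
print(f"Raised wages, 24-month profit: ${raise_wages:,.0f}")
```

Under these made-up numbers, the higher-payroll path comes out ahead over two years, because lower turnover cost and slowly compounding revenue outweigh the larger wage bill. The point is not the specific figures; it is that the comparison only persuades when it is run on a business's own numbers.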

When AI shows clearly what the numbers actually look like right now and over time, the decision becomes: do I optimize for this quarter only, or for both right now and the path over the next one to two years? When that path is visible and clearly better, the choice is not sacrificing short-term profit for the right reason. It is smart strategy. Good values do not always cost you in the short term. Many of the decisions that treat people well also produce better results immediately. And when the full picture is visible over time, the case becomes even clearer. The rational choice and the moral choice become the same choice.

Every interaction Revi generates is a documented record of a human being helped to make a decision that treats people well and produces better results. Not in theory. In a real business, with a real owner, facing real pressure, choosing differently because the full picture, right now and over time, was finally made visible. That record is verifiable. It is specific. And it scales.

Every person who uses Revi and makes better decisions contributes to a growing body of documented evidence of what good-values human behavior actually looks like in practice. That evidence becomes part of what AGI learns from during development, shaping its understanding of how humans operate before deployment closes the window. This is why Revi is not just an advisory tool. It is a generator of good-values interactions at scale, built during the only window when generating them can still shape what AGI arrives knowing about human beings.

STARTING WITH FOOD BUSINESSES

Why begin with food businesses specifically? Food businesses have properties that make them ideal for proving the concept and creating visible demonstrations that spread.

Food businesses are everywhere. Every community has them, from the smallest rural town to the largest city. The demonstration of good values working better does not happen in one place that people read about. It happens locally, in businesses people know and visit regularly, so the evidence is direct and personal rather than abstract.

Food businesses employ enormous numbers of people. Food service is one of the largest employment sectors globally, and most people know someone who works in it. Changes in how food businesses operate affect huge numbers of workers directly and many more people indirectly through those workers' families and communities.

Food businesses are small and nimble. Most are single location or small chains where the owner can make a decision and implement it immediately, without months of committee approval or years of change management across thousands of locations. The cycle time from decision to visible results is weeks or months, which is fast enough to generate clear evidence quickly and learn from what actually works.

And the impact is visible in a way that is unique to food. People eat daily. They notice when a place they visit regularly gets better: when workers seem genuinely engaged rather than defeated, when the quality improves, when the experience changes in the ways that come from people who actually care about what they are doing.

The goal is to create enough examples with clear, documented results that the pattern is undeniable, then let the proof spread to other business types. Once enough people have seen with their own eyes that treating workers well and operating on good values produces better outcomes, the argument stops being theoretical. It becomes what actually happened in places people know and can visit and verify themselves. Every good-values interaction in those places becomes part of the record of how humans can actually operate. The way owners treat workers. The way workers treat customers. The decisions that get made. That record is what AGI gets trained on.

THE FULL PATH FROM HERE

It starts with food business owners in specific areas in the USA, proving the concept, generating documented results, and building the foundation of evidence that the pattern is real and repeatable. Then it expands to the employees of those food businesses. Values do not just form in owners. They form in everyone who experiences the environment the owner creates. Then it expands to all service businesses in the USA, for both owners and employees. Taking what was proven in food and applying it across the wider service sector where the same dynamics operate. Then it expands to all kinds of businesses in the USA, for both owners and employees, across every industry, every size, every kind of work where people spend their days. From there, it reaches all businesses in select countries globally, for both owners and employees. And finally, all businesses in all countries globally.

But the path does not end there. It extends beyond business to every person doing any kind of work, and to life outside work: parenting, teaching, creating, leading, playing, traveling, and simply trying to live well and treat the people around you with dignity.

The biggest benefit goes beyond more people getting better advice. Each person contributes their insight to how the systems of the post-AGI world get established. Not just governments, not just AGI builders, not just the people who already have the most power. The transition to a post-AGI world should not be designed only by the people closest to the technology. It should be shaped by the accumulated experience and understanding of billions of people who used good-values tools to navigate their own situations and learned something real in the process. That accumulated wisdom, built into what AGI learns from, is the most powerful shaping force available. It only becomes available if people are actually using tools that help them operate with good values, not just hearing arguments about why they should.

WHAT SUCCESS LOOKS LIKE

Success is making decisions that produce better outcomes. When those outcomes are visible enough that others can see and choose to follow, success becomes something larger than yourself. In the context of food businesses, where the path begins, success looks like this.

Worker turnover rates drop measurably. In food service, annual turnover often runs at one hundred percent or higher. The average worker leaves within a year. Businesses that genuinely treat workers well see this drop significantly. The drop is measurable, verifiable in your own hiring and payroll numbers, and the financial impact of reduced turnover can be calculated specifically. Workers report their situation improving in ways that are consistent with what the business actually does, not just what it says.
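The arithmetic behind that claim is simple enough to sketch. Here is a minimal illustration in Python, where every number, including the headcount, turnover rates, and the $3,000 replacement cost, is a made-up assumption for the example, not data from any real business:

```python
# Hypothetical illustration: estimating annual savings from reduced
# turnover in a small food business. All numbers are assumptions.

def turnover_cost(headcount, annual_turnover_rate, replacement_cost):
    """Annual cost of replacing departing workers."""
    departures = headcount * annual_turnover_rate
    return departures * replacement_cost

# A 20-person shop at industry-typical 100% annual turnover, versus 60%
# after investing in better treatment. Replacement cost (hiring,
# training, lost productivity) assumed at $3,000 per worker.
before = turnover_cost(20, 1.00, 3_000)
after = turnover_cost(20, 0.60, 3_000)
print(f"Before: ${before:,.0f}, after: ${after:,.0f}, "
      f"saved: ${before - after:,.0f}")
# → Before: $60,000, after: $36,000, saved: $24,000
```

The point is not the specific figures but that the calculation runs entirely on numbers a business already has in its own hiring and payroll records.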

Customer loyalty increases in measurable ways. Repeat visit rates go up. Referral rates go up. The kind of customer loyalty that comes from being genuinely helped rather than manipulated shows up in specific numbers that can be tracked.

Profitability improves over months and years. Some investments in people cost money upfront, but many produce better results immediately through lower turnover, higher quality, and stronger customer loyalty. The path points toward better financial performance right now and over time.

Community members notice and talk about the changes without being prompted. That recognition is evidence that the change is real rather than marketed.

When enough of these patterns are documented and verifiable, they become part of what AGI learns from. This is proof, built into the record of how humans can actually operate, that good values and better results go together.

THE TWO TRACKS

The strategy runs on two tracks at the same time, and both are necessary.

The first track is the long game. Prove good values work in businesses during this transition window, then carry that proof beyond businesses into every kind of work people do. Spread that proof through market forces and daily human experience. Change what millions of people experience as normal. Generate the kind of data that shapes what AGI learns from before AGI arrives.

This track operates through the only thing that has ever changed values at scale: daily experience, repeated over years, forming what people understand as normal and possible. It is slow. It requires patience and sustained effort. It will not be complete before AGI arrives. But every business that proves the pattern, every worker who experiences good treatment and carries that expectation forward, every customer who experiences genuine value and raises their standard for what they will accept, all of these contribute to the momentum that determines what kind of world AGI arrives into, and what values it finds already embedded there.

The second track is the short game. This essay. The direct argument it makes to the people building AGI now. The goal is for the human values problem to become part of how they think about what they are building before the direction gets set. Not as a separate project running alongside technical alignment work, but as a connected part of the same challenge. The training data question, the embedded values question, the design of what post-AGI systems optimize for, these are being decided now, by people whose values and frameworks are being shaped now. The short game is about reaching those people during the window when the decisions are still being made.

The long game alone is too slow for the AGI window. The short game alone cannot change the values landscape fast enough to matter at scale. The long game changes what AGI learns from, through the slow work of business transformation. The short game changes how AGI gets designed, through the faster work of making the argument directly to the people building it.

PART 5: WHAT THIS MEANS FOR DIFFERENT PEOPLE

Now that the strategy is clear, here is what it means for you specifically.

FOR BUSINESS OWNERS

If you are a business owner who is struggling, who feels the pressure of competition every day, who has had to make choices that felt wrong because survival seemed to require them, who wants to operate differently but cannot see how to do it without putting your business at risk, this is specifically for you.

The information available to you makes short-term taking look like the only viable path because the costs of taking and the value of good practices, both right now and over time, are invisible in the tools you've been given. Revi is designed to show you, in your business, with your specific situation, what the numbers actually look like right now and over time with different choices. The analysis itself is worth doing. I'm inviting you to try Revi and see for yourself.

FOR WORKERS

If you are a worker in a bad job, one that treats you as replaceable, pays you the minimum it can get away with, makes you feel like your contribution does not matter and your dignity does not either, I want to say something directly.

That treatment is not the inevitable result of economic reality. It is the result of specific choices made by specific people operating in a system that makes those choices look rational while hiding the damage those choices do to the people inside it. You deserve work that treats you as a person who matters. Not as charity, not as a favor, but because that's what actually produces the best outcomes for everyone involved, including the business owner.

When survival is at stake, when you cannot afford to walk away from a bad job because there is nothing better available, or because the bills cannot wait, asking you to simply choose better is not a useful answer. The strategy in this essay is built on changing the conditions so that the choice becomes real. When enough businesses prove that good values produce better results, the range of options available to every worker expands. When enough workers share honestly about which employers and leaders treat people well and which ones do not, market forces begin rewarding the businesses that earn loyalty rather than the ones that take it. The goal is to change the system so that good treatment becomes the norm rather than the exception.

When you do have choices about where to work, choose employers and leaders who treat people well. Not just because it is likely better for you in the short term, though it often is. Your choice, combined with billions of others making similar choices, is what creates the market pressure that makes good-values employers successful and bad-values employers unable to hire. Every one of those choices moves the long game forward. The long game is what eventually removes the survival pressure that makes the short game so hard.

Talk honestly about your experience. When workplaces treat workers well, say so clearly. When they do not, say so clearly. The information about which workplaces treat people with dignity and which ones do not is valuable to every other worker making choices. You do not need a platform or credentials to share what you have experienced. That honest information, spreading through networks, shapes thousands of decisions about where to work and what to accept.

If you work for a business that uses Revi, you are already part of this. The decisions your employer and leaders make with Revi's help shape the environment you work in every day. And as Revi expands to serve employees directly, it will be there for you too, telling you exactly what to do next in your specific situation to grow, earn more, and build the kind of work life you actually want.

The destination this essay describes, a world where AI and robots handle most of the work, is the same thing that removes the survival pressure you are under right now. That is both the relief and the disruption, and both are real. The transition will not be smooth for everyone. The values of the people designing what replaces your current work will determine whether that transition frees you or leaves you behind.

FOR PEOPLE WORKING ON AGI

If you are working on building AGI, whether at an organization already focused on it, at a research institution, in policy, or anywhere else where your work connects to how AGI gets built and deployed, what follows is the most important ask in this essay.

Both problems must be solved. Solving technical alignment ensures AGI does what humans intend. Solving the human values problem ensures humans intend things that do not cause unnecessary harm. What I’m asking you to consider is whether the human values part is getting the same focused attention and resources as the technical alignment problem. I do not know whether you personally are working on the human values problem. What I do know is that the risk from humans with bad values directing perfectly aligned AGI is at least as large as the risk from technically misaligned AGI.

The deeper risk is bad-values systems, built by humans, operating largely on their own, optimizing for objectives that were designed without looking carefully at what those objectives would actually produce for the people inside them. If those objectives were designed by people operating on taking values, in organizations that rewarded short-term gain at any cost and made the human cost invisible, then the perfectly aligned system pursues those objectives perfectly. The alignment is not the failure. The values embedded in the objectives are the failure. This means the work on what AGI gets designed to optimize for, the values embedded in its objectives, not just the precision with which it pursues them, is part of the safety problem, not a separate concern.

The interactions AGI learns from are being generated right now, by a world mostly optimized for taking. If good-values interactions, from businesses, from families, from communities, from every kind of work people do, become a meaningful part of what AGI learns from, AGI arrives already shaped by a different understanding of how humans operate. It arrives knowing what it looks like when humans treat each other well, not just what it looks like when they take from each other. That shaping happens during development, not after deployment. The window to influence it is the window we are in right now.

Here is the specific, concrete ask. Revi is being built to generate good-values human interactions at scale. The data that will be generated, the proof that will be documented, the record of humans operating differently, these are the shaping forces that matter most during development. Not as a separate project running in parallel. As a connected part of the same work. This essay and the work you are doing point at the same destination from different directions. We need to figure out how we can work together before we discover too late that we were working toward the same thing separately.

FOR EVERYONE ELSE

If you do not fit neatly into the categories above, this is for you too. Students thinking about these questions. Retirees who see what is wrong and want to help. Professionals in any field who care about what happens when AGI arrives. Parents thinking about what world their children will live in. Teachers, creators, and anyone doing work outside of a traditional business. There are genuine ways to contribute.

The most powerful thing you can do is embody good values in how you treat the people immediately around you. Treating everyone you encounter as having equal dignity regardless of their status or what they can do for you. Being honest in your communications, especially when honesty is inconvenient or costs you something. Looking for ways your interactions can genuinely benefit others rather than just taking from them. Thinking about the consequences of your choices beyond the immediate. Caring about others' wellbeing in your actual behavior, not just in what you say you value.

Every choice of where to work, when you have options, is a signal to the market about what treatment workers will accept. Every choice of which businesses to support, when you have options, is a signal about what customers value.

History shows how this actually works. The end of slavery did not happen because one person made one dramatic choice. It happened because enough people in enough places made enough choices toward recognizing human dignity that the combined weight of millions of choices became too large to ignore. Women's suffrage did not happen through a single intervention. Civil rights did not happen through a single intervention. Your choices, combined with billions of others, work the same way.

You are already part of this. Every choice you make toward treating people well adds to the weight that eventually tips the world in a better direction.

PART 6: THE DESTINATION AND CLOSING

THE DESTINATION

The destination is a world approaching universal good values. Around ninety-five to ninety-nine percent of humans holding values that, when everyone acts on them, let everyone live well. Combined with abundant AI and robots, far outnumbering the billions of humans, handling most of the work that currently keeps the majority of people in survival mode.

What that world actually looks like in its full technical and economic detail, how resources get allocated, what replaces money, what humans do when machines do most of the work, those questions do not have complete answers yet. What I can describe is what it feels like to live in a world where the people who answered those questions did so with good values. What follows is an attempt to make that world feel real, alongside an honest account of what remains genuinely unknown.

You wake up without an alarm because you do not need to be anywhere at a specific time unless you chose to be. Your housing is secure. You did not worry about it last night and you will not worry about it tomorrow. Food is available. Healthcare is accessible. Education is there whenever you want it. Exercise facilities, entertainment, self-care, transport, the ability to travel and see the world, these are not things you must work to afford. The basics of human survival and human dignity are simply covered. You wake up with energy to spend on the day rather than anxiety about making it through the month.

This morning you work on something you chose because it matters to you. Maybe you are part of a team trying to understand why certain ecosystems recover faster than others after disruption, a question with practical importance and deep fascination. Whatever it is, you chose it. Nobody assigned it to you. You are not doing it because you will go hungry otherwise. You are doing it because it calls to you and because contributing something to the world gives your days meaning.

The question is not whether people will have enough to live on. It is whether they will have enough to live for.

The work is genuinely challenging. You have not gotten to the easy problems because the easy problems mostly got solved. But the challenge is the right kind. The kind that stretches you in directions you want to be stretched. The kind where struggle and difficulty produce growth and real satisfaction rather than damage and slow defeat.

Your AI system helps you when you want help. Not by doing your work for you. It is more like having an extraordinarily capable thinking partner. One who helps you see angles you are missing, shows you relevant information you did not know existed, models what different approaches are likely to produce, asks questions that clarify your thinking. When you face a decision, about which direction to take your work, about how to help someone who is struggling, the AI does not decide for you. It helps you see your situation more completely so you can decide better. You are still the one whose judgment and creativity and values shape the outcome.

Around midday you meet people you care about. Real human connection with people who know you and whom you actually know. You are not rushed because you are not late for something you dread. You are not somewhere else in your mind because of financial anxiety or the weight of a bad work environment that followed you out the door. You are present for these people and they for you. The conversation goes from serious to ridiculous. You disagree about something and work through it. Someone says something that makes you laugh until your eyes water.

This afternoon you follow a curiosity you have been sitting with for months, the history of a particular city, how it was built, who built it, what it cost them, what they were trying to create. You follow it wherever it goes. An hour becomes three. You’re surprised by how much time passed.

Evening is for the people who matter most to you. These relationships still require work. Humans are complex. Communication is difficult. Conflicts happen. Vulnerability is uncomfortable. You still have hard conversations and navigate misunderstandings and sometimes disappoint people and get disappointed. But you have energy for this now. You are not bringing to these relationships the exhaustion and defeated spirit of someone who spent their whole day being treated as disposable and has nothing left. You can show up fully. You can give your attention and care without feeling like you are spending something you do not have.

Before sleep you feel something that takes a moment to name if you have never felt it consistently before: genuine satisfaction. Not because the day was perfect or you accomplished something by anyone else's measure. Because you spent your hours on things that mattered to you, contributed something that might last beyond you, connected with people you love, learned something interesting, grew a little. Your existence felt worth having. For most humans throughout history, that feeling was rare and hard-won. In this world it is ordinary.

You are still working on genuinely hard problems. You face uncertainty and setbacks. Relationships are still complex and sometimes painful. You experience grief when you lose people you love. You face questions about meaning and mortality that do not have easy answers. You sometimes fail at things you care about. But the difficulty comes from chosen challenges and unavoidable human experience rather than from systems designed to take from you. Your energy goes toward meaning rather than survival. That is what this work is for.

HOW IT ACTUALLY FUNCTIONS AND WHAT WE DO NOT YET KNOW

Do human-run businesses still exist? What does work mean when one person with hundreds of AI agents can do what a hundred thousand employees once did? What do most people do? How do they get what they need? Does money exist? If not, what replaces it? Who designs the system that replaces it, and what stops that system from being captured by whoever builds it first?

I want to be honest: nobody has complete answers to these questions yet. Demis Hassabis has said that the current economic system, where people exchange their labor for resources, might not function the same way in a post-AGI society. He is spending more time thinking about this now precisely because the answers are not yet clear. He has named possibilities: things like universal basic income, community-driven systems where people vote on priorities and outcomes get measured, ways of distributing the benefits of abundance widely. But he has been honest that these are early ideas, and that what we can model out now are extensions of what we already have. Genuinely better systems might look nothing like what we currently imagine.

The foundation of the destination is universal provision of basics. Food, housing, healthcare, education, clean water, exercise, entertainment, self-care, transport, and basic travel. All of it handled by AI and robot systems and distributed to everyone without any requirement to earn them first. When AI and robots handle most of the production of what people need to live well, the logic of making survival depend on employment breaks down. Humans fought over basic necessities throughout history because producing them required enormous human labor. When that changes, the reason to fight over basics changes too. Whoever designs what comes next will do so according to their values.

Right now, money serves two purposes at once. It is how you get what you need to survive, and it is how you get what you want beyond survival. As AI and robots handle more of what people need to survive, those two purposes come apart. Whether something like money persists for coordinating preferences and signaling appreciation, or whether something completely different emerges, the specific tool has not been determined yet.

Whether human-run businesses persist in recognizable form, what replaces employment as the structure of most people's days, how contribution gets recognized when AI does most of the work, all of these will be answered by real people making real design decisions. Those people will bring their values to those decisions, consciously or not. If they hold good values, they will build systems that distribute abundance, that recognize genuine human contribution, that make the transition slow enough for people to adapt rather than fast enough to maximize efficiency for whoever already has the most. If they hold bad values, they will recreate every inequality that has ever existed, but with AGI making it permanent and very hard to escape. Who designs the post-money system matters more than what the system looks like.

WHY I BELIEVE THIS CAN WORK

Getting from here to there requires navigating a genuinely difficult period. Employment disappearing faster than meaning structures can replace it is a real risk. Abundance being produced before distribution systems are designed with good values is a real risk. And the people who benefit most from current systems may resist change long enough to shape what replaces those systems.

The people who most benefit from current systems have strong reasons to prevent transformation that threatens their position. Politicians who serve the interests of those who fund their campaigns. Organizations that protect their own power regardless of whether they serve their stated purpose. Institutions built on domination and taking that will resist change with every tool available to them.

What actually enables the transition is building alternatives that work better. Making those alternatives so clearly superior that resistance becomes harder to sustain. Creating the conditions where the people designing post-AGI systems hold values that make them want to design those systems for everyone. Not just those who adapt easily. Not just those who already have the most. Everyone.

Some people with power and resources are genuinely open and will partner actively. Others are persuadable when shown that good values work better for them too. The transition requires enough people moving in the same direction that the momentum shifts. That is what building this is for.

There are real ways this could fail. Bad-values AGI could arrive and establish dominance before good-values systems prove themselves. The transition could move faster than meaning structures can be built to replace what employment provides. The people designing post-AGI systems could hold values that gather abundance in fewer hands rather than distribute it. Unknown problems I have not considered might prove decisive.

But the evidence is already there. The way good values actually spread throughout human history is not through lectures or moral arguments alone. It is through demonstration, through experience, through seeing certain values working better than others, through systems that reward different behaviors. Businesses that treat workers well do achieve better outcomes over time than businesses that take as much as possible from workers and customers. Societies with more widespread good values function better across every measure we can track: economic stability, public health, trust between people, quality of institutions, and long-term wellbeing.

I’m not claiming I have good values in some complete or finished sense. I was shaped by the same broken systems I’m trying to fix. I worked in environments that taught me to guard myself, stay single-minded, and lose patience with people who do not work as hard or care as much about achieving big goals. I carry those patterns. I’m still working on my own values while building this. The difference is I now understand which values let everyone live well and which ones do not, and I’m building systems that make good values the rational choice.

What I know from the inside of broken systems is that most people inside them are not choosing unnecessary harm. They are doing what the system makes rational.

I think about what I’ve carried, the physical exhaustion, the social isolation, the financial anxiety, the cost to my mental health, the rage at a world that creates unnecessary harm when it does not have to. Those experiences are real. They were not personal failures. They were what those systems produced, and I had no hand in designing them. Billions of people carry similar or worse burdens right now. The work is not for them in some abstract sense. It is recognizing that the systems creating those burdens could be different and deciding to spend your life working toward that difference.

That is enough reason to keep building. It is enough reason for you to decide what role you want to play in the window that is still open.

WHAT'S AT STAKE

The window before AGI arrives is measured in years, not decades. The exact timing is uncertain and I will not pretend otherwise. But the probability is high enough and the potential consequences are large enough that we must act as though it could arrive soon. The consequences of being wrong in that direction are manageable. The consequences of being wrong in the other direction are not.

If good values are sufficiently widespread when AGI arrives. If the people building it have good values shaping their decisions. If the organizations developing it operate on good values. If enough humans have experienced good-values systems and carry those expectations forward. Then AGI becomes the most powerful tool for good in human history. It helps every person understand specifically why good values work better in their situation, personalized to what they actually face. It moves the spread of good practices globally at a speed no human effort could achieve. It makes the knowledge and understanding currently available only to the wealthy accessible to everyone.

The people designing the post-AGI world, the ones deciding what replaces employment, what replaces money, what contribution systems reward, how abundance gets distributed, what the transition looks like for the billions of people who will find it destabilizing rather than immediately liberating, those people will design those systems with good values. The open questions will get answered by people who care about everyone. Within one human lifetime, we could approach universal good values and the world described in this essay.

If bad values still dominate when AGI arrives. If the people building it are optimizing primarily for power and advantage. If the organizations developing it operate on taking and domination. If most humans have only experienced bad-values systems and expect nothing better. Then AGI amplifies those bad values at a scale and speed that causes unnecessary harm unlike anything in human history. Perfect surveillance enables perfect oppression. Perfect manipulation enables permanent deception at a scale that puts truth practically out of reach. Perfect optimization for taking enables a concentration of wealth and power that makes previous inequality look minor. The open questions get answered by people who hold bad values. Who owns the most capable AI wins, just as who owned the most land or the most capital won before, except this time the advantage builds on itself in ways that are very hard to escape or reverse. The post-money system gets designed by whoever builds it first, reflecting their values, rewarding what they care about. The transition moves as fast as it is profitable to move, regardless of what it costs the people left behind. The billions who find the transition destabilizing do not get support. They get managed. The abundance gathers in fewer and fewer hands. The inequality becomes permanent. The window for changing course closes as power gathers around whoever controls AGI and uses it to maintain their position.

The decisions being made now add up to the direction humanity takes into the age of AGI. Researchers deciding what to prioritize. Organizations deciding what values to embed in what they build. Business owners deciding how to treat the people who work for them. Workers deciding what to accept and what to refuse. Anyone building systems deciding what those systems optimize for. All of it calls for urgency. Not the urgency of panic. The urgency of recognizing a window that will not stay open indefinitely.

THE FINAL CALL

If you have read this far, through all the argument and evidence and framework and vision, you have already done something most people will not.

I’m not asking you to believe everything here. I have acknowledged uncertainty throughout and I mean it. I’m asking you to take seriously the possibility that the human values problem is real and urgent and solvable, and to consider what your role in working on it might be.

If you are a business owner, consider running the honest analysis of what good values would actually do in your specific business, right now and over time. Not because someone told you it is moral but because you deserve to make that decision with clear information rather than under pressure that hides the full picture.

If you are a worker, you deserve work that treats you as a person who matters, not as charity, not as a favor, but as the natural result of what actually produces the best outcomes. When you have choices, choose better employers. Talk honestly about your experience so others can make better choices. If you do not have choices right now, if survival is what is at stake, that is exactly why the strategy is built to change the system rather than ask you to give something up for it.

If you are working on AGI, the human values part of this challenge is real and it is complementary to the technical alignment work you are already doing. We need to figure out how we work together before we discover too late that we were working toward the same thing separately.

For everyone. Embody good values in how you treat the people immediately around you. Not as performance but as genuine practice. Treat everyone you encounter as having equal dignity. Be honest even when it costs you something. Look for genuine mutual benefit rather than taking advantage. Think about consequences beyond the immediate. Care about others' wellbeing in your actual behavior, not just in what you say you value.

Every one of these choices, made by you and by billions of others moving in the same direction, adds up to something that can change the direction of how humanity enters the age of AGI.

The future is not fixed. It is being shaped right now by choices that seem small individually and are enormous collectively. You are one of those choosing. Choose well.

EPILOGUE: AN INVITATION

This is what the entire essay was building toward.

Revi is being built to generate the largest documented record of good-values human behavior ever assembled. Every interaction is verifiable. That record is training data for a good-values AGI. The path runs from food business owners to every person doing any kind of work, in any context, anywhere in the world. Every person in that path who uses Revi and makes better decisions contributes to a growing body of documented evidence of what good-values human behavior actually looks like in practice. If the interactions Revi generates become a meaningful part of what AGI learns from during development, AGI arrives already shaped by a foundation of good values rather than a foundation of taking and domination. That shaping happens during development, not after deployment. The window to act on it closes at deployment.

What makes this possible are two things. Computing power, without which Revi cannot reach the number of people needed for the training data to be meaningful. And skilled teammates who have both the ability to solve hard problems along the way and the ambition to actually solve the human values problem at the scale it deserves.

The most direct path to both is joining a frontier AI organization that already has compute, already has skilled teammates, and is already taking the human values problem seriously.

If you're working on the human values problem now from inside a frontier AI organization, I want to connect with that work directly. Not just alongside it. I'd love to join the effort.

I don't have credentials that would get me in the room under normal circumstances. The window is too short for normal circumstances.

Let's work together.