Because values are not external, internal values assume fully objective status.
Agents have preferences. Preferences can be satisfied. Values are a particular kind of preference.
"Consider (someone or something) to be important or beneficial; have a high opinion of:"
Philosophically speaking 'important' or 'beneficial' mean the same thing as 'valuable' so this definition begs the question.
Can value be physically real? Can we potentially build an importometer?
Even if the answer seems obvious, philosophy requires us to prove it.
To be physical, it would have to be local: no FTL measurements allowed. Since any agent can be moved arbitrarily far from any particular entity, agent-dependence would require faster-than-light influence, so the property would have to be agent-independent. To still be physical, the property would also have to be necessary to predict the behaviour of the entity. Importance as physically existent has been ruled out to fifteen or so decimal places... but I can do better.
The nonlocal thing kills it. It would have to adhere to individual particles. An electron cannot tell if it's part of a diffuse, invisible nebula in space or part of a weapon currently killing someone. But, to be important, it has to be able to tell precisely this kind of difference. Importance cannot physically exist. It is not 'out there' so to speak.
This means that in a universe with only one agent, that agent's preferences cannot be wrong. There is nothing to contradict them. (Caveat: time evolution introduces complications. Preferences evolve, so the agent can be wrong about what their future satisfaction will be. Wisely trading present satisfaction for future satisfaction is hard.)
If the agent is a person, and the universe satisfies their preferences, then it is simply better than a universe that doesn't. As per Subjectivity and Objectivity, if a person thinks they are satisfied, they cannot be mistaken. Satisfaction is a positive quale. A more satisfying universe is more positive, i.e. better, than a less satisfying universe.
A better universe is a more valuable universe.
But, surely, not all preferences can be values.
If there is a second agent, within epsilon of 100% certainty, their preferences will conflict. If we have an ice cream merchant and a customer, the customer would prefer to steal the frozen treat, and the merchant would prefer to steal the cash. (If we define value as simply the sum of satisfaction across agents, we get utilitarianism.)
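The parenthetical definition above can be made concrete. As a toy sketch of the utilitarian move (the agents, outcomes, and satisfaction numbers are invented for illustration):

```python
# Toy model: value of an outcome defined as the sum of satisfaction across agents.
# Agents, outcomes, and payoff numbers are invented for illustration.

# Satisfaction each agent assigns to each possible outcome.
satisfaction = {
    "customer": {"fair_trade": 1, "customer_steals": 2, "merchant_steals": -2},
    "merchant": {"fair_trade": 1, "customer_steals": -2, "merchant_steals": 2},
}

def utilitarian_value(outcome):
    """Sum each agent's satisfaction with this outcome."""
    return sum(prefs[outcome] for prefs in satisfaction.values())

# Each agent's favourite outcome conflicts with the other's...
best_for_customer = max(satisfaction["customer"], key=satisfaction["customer"].get)
best_for_merchant = max(satisfaction["merchant"], key=satisfaction["merchant"].get)

# ...but summing satisfaction across agents picks the mutually tolerable outcome.
best_overall = max(satisfaction["customer"], key=utilitarian_value)

print(best_for_customer, best_for_merchant, best_overall)
# customer_steals merchant_steals fair_trade
```

The conflict survives inside each agent's ranking; only the aggregation step dissolves it, which is exactly the move utilitarianism makes.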
While mutually coherent preferences can still be reified into values, the contradicting preferences cannot be straightforwardly converted. The only way to resolve the conflict is to invalidate one of the preferences; if an objective way of doing so can be found, then those preferences too can be reified into values.