Tycho:
which basically amount to "we'd be able to do anything at all!" which I think requires a bit more of a leap of faith than I'm willing to give.
katisara:
By our standards, it does. But remember, by the standards of someone living in 1,000 AD, we can basically do 'anything at all'. We can go across the world within a day. We seem to create food and prevent it from spoiling. We make water appear, apparently from nowhere. We can make cows pregnant without bulls. We reverse disease and even aging.
And yet, if someone in 1,000 AD had said "In 1000 years, humans will be able to snap their fingers and teleport to different planets, will never have to eat again, will live forever, and simulate all of human history billions of times on their iPads in just a few seconds!" they would have been wrong. The fact that we can do LOTS more than we could 1000 years ago doesn't mean we can do EVERYTHING. Likewise, though we can be confident humans will be able to do a lot more than we can now in 1000 years (provided we last that long), we shouldn't conclude that they'll be able to do EVERYTHING.
katisara:
Having a computer operate all of the processes of a human mind for 80 years costs in the range of $100,000 (depending on circumstances). That computer is called 'a human mind'.
And, interestingly, unlike most of the bits of Moore's law, that number tends to go up with time, rather than down. Not that that impacts the argument at all, but I think it's interesting to note that running one of these experiments is already quite expensive and time-consuming, and increasingly so.
katisara:
Right now, our computers are extremely simple by the standards of a biological brain. There's a lot of reasons for that. But our technology is catching up. In fact, we are currently learning how to perform mathematical operations at the quantum level. It seems almost self-evident that technology will, at minimum, reach the computational capabilities of a human brain, and reduce them to the same energy and size requirements (which means it takes less electricity than a light bulb, and less space than a desktop -- i.e., your normal laptop).
That doesn't seem self-evident to me. Possible, perhaps, but nowhere near a certainty. We're nowhere near the complexity of the human brain right now. People like to talk about the "number of computations per second" or "transistors per cubic meter" and compare what we can do with that of the human brain, because by some measures we're already ahead of the human brain. But when we look at any computer that's actually designed to do what the human brain does, we realize we're not even in the parking lot yet, let alone the same ballpark.
katisara:
After all, we know that processing capability is physically possible; it's happening right now!
True, we may someday be able to match the human brain by copying its functionality, but I don't think it then follows that we'll make more copies of it than there have ever been biological versions (a key assumption to reach the "therefore each of us is likely to be a simulation" conclusion). The "if we could do it once, we'll do it a gazillion times" idea doesn't follow in all cases. We went to the moon once, but we didn't keep going over and over again. In fact, we stopped going after just a handful of times. Far from being as common as going to the grocery store, 40 years later it's returned to being essentially impossible for someone to get there.
katisara:
We accept that the minimum cost for our future society to run a single mind is about $1,000 in today's money (i.e., the cost of a high-performance laptop).
And even that would be prohibitively expensive for simulating the whole of human history. Just to simulate the number of people who are alive right now would take trillions of dollars. And that's just a tiny fraction of all the people who have ever been alive.
As I said before, the idea of simulating some people's experience doesn't seem too impossible. The idea of simulating more people's experiences than have ever been alive doesn't seem nearly as likely.
katisara:
So that establishes our lower bound. But establishing the upper bound is much more difficult. We can all agree that transistors cannot be shorter than the Planck length and still hold useful data, but most people will reasonably expect that upper bound to still be pie in the sky. The problem is, we really have no idea what the upper bound is. So far, Moore's Law has continued to prove itself correct. Every time we worry about transistors getting too small, we exploit a new technology to keep shrinking them.
But the "it's always kept doing this in the past, so surely it will do it forever" argument is proven wrong every time someone dies. Moore's law has held for what seems to us like a long time, but it's really no time at all in the grand scheme of things. It's a very local observation, not a global (in the abstract sense, not the literal sense) trend. It's a description of our current (i.e., right now plus or minus 50 years or so) rate of technological advancement, not a rule that the universe is bound by. Will it go on for the foreseeable future? Sure, probably. Will that get us to simulating all of humanity's experience many, many times? That's not nearly as obvious or certain. There's a big difference between saying "this car's run great the last 5 years, so I don't see any reason to expect it to stop running next year" and "this car has run great the last 5 years, so I think it's reasonable to conclude that it will never stop running ever."
katisara:
We also have to remember that the cost for projects like this will not be significantly higher than buying a billion laptops, since if it's cheaper to do it that way, we will.
Not true. There have been WAY more than a billion people who've lived. And we need to do it more than just once for each person; we have to do it many times for each one for the conclusion to hold. Even spending $1000 per person currently alive is more than humanity has ever been willing to spend on any single project. If we were willing to spend that much on something, I have a long list of higher-value projects than simulating reality for a billion minds.
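To put a rough number on that scale, here's a back-of-envelope sketch. All figures are illustrative assumptions, not numbers from the thread, except the quoted $1,000-per-mind cost: the ~100 billion count is a common demographic estimate of all humans who have ever lived, and 1,000 runs is an arbitrary stand-in for "many times."

```python
# Back-of-envelope for the scale the simulation argument requires.
# Assumptions (illustrative): ~100 billion humans have ever lived (a common
# demographic estimate); the quoted $1,000 cost per simulated mind; and
# "many times" stood in for by 1,000 simulation runs per person.
people_ever = 100e9       # rough count of all humans who have ever lived
cost_per_mind = 1000.0    # quoted cost of simulating one mind, in dollars
runs = 1000               # arbitrary stand-in for "many, many times"

total_cost = people_ever * cost_per_mind * runs
print(f"${total_cost:.1e}")  # on the order of $1e17
```

Even with generous rounding, that total dwarfs anything humanity has ever spent on a single project.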
katisara:
But it may be lower. So if we wanted to simulate a single country, say the US, with 200 million people, the absolute maximum is $200 billion. But if Moore's Law holds, in ten years that'll be $20 billion. Even if we don't stick to that pace, as long as further enhancements are possible (which they almost certainly are), we keep chopping that cost down.
Wait, wait, wait! No. You're assuming that we could right now simulate one human's existence with a $1000 laptop, which is not the case at all. Once you say "some day we'll be able to do that" we've already made a huge leap. If you tack on "and at that point, Moore's law will still be running just like it does today," you've made an even larger leap. I'm not willing to accept those as "obvious" or "granted" assumptions. The logical conclusion of this line of reasoning is that eventually we'll have infinite computational power, at zero cost, at zero energy consumption, and taking up zero space. I don't think that's reasonable.
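The objection can be made concrete with a toy extrapolation. This is a sketch only: the 10x-per-decade rate is read off the quoted "$200 billion today, $20 billion in ten years" figure, not a real forecast. Run the curve out far enough and the per-mind cost falls toward zero, which is exactly the "infinite computation for free" endpoint at issue.

```python
# Toy extrapolation of the quoted cost curve; not a forecast.
# Assumption: cost per simulated mind falls 10x per decade, matching the
# quote's "$200 billion today, $20 billion in ten years" figure.
def cost_per_mind(years_from_now, base_cost=1000.0, factor_per_decade=10.0):
    """Extrapolated cost (in today's dollars) of simulating one mind."""
    return base_cost / factor_per_decade ** (years_from_now / 10.0)

for years in (0, 10, 50, 100):
    print(years, cost_per_mind(years))
# The curve never flattens: extrapolated forever, the cost goes to zero,
# which is the implausible conclusion being objected to above.
```

Whether the real curve keeps this shape for a century is precisely the assumption in dispute.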
katisara:
And do recollect, we don't need to simulate the entire world, any more than Blizzard needs to simulate the bits of WOW no players are accessing. We only need to create feedback appropriate to where the viewer is, and the entire processing is already being done by the brain! So on the one hand, the additional processing power necessary to 'run' the world is negligible. But a lot of the processing our brains do (keeping our heart running, keeping our digestive tract working, etc.) is no longer necessary, which gives us a SURPLUS of computing power.
But for the conclusion to hold, we do need to simulate the experiences of many times the number of humans that have ever existed. Like I've said, simulating one person's experience seems fairly possible. Repeating that many times for every person who's ever existed seems a much, much bigger ask.
Tycho:
We've got a real world already that we don't have to simulate, why not just use it?
katisara:
Is Tycho the scientist actually asking this? Tycho the guy who plays RPGs? Tycho the philosopher? The explorer? You can't think of any reason not to simulate another world, when we have this real one just sitting around?
Simulating an entire world for the purpose of looking at the simulated experiences of minds seems unnecessary. For other purposes, sure, but that won't lead to the conclusion that we're all likely to just be simulations. If we made a simulated world so we could go back and see what it was like in the ice age, we wouldn't need to recreate the consciousness of every creature in the simulation. Much of the point of simulating things is to avoid having to do that. That's sort of the difference I'm getting at.
katisara:
It is indeed currently philosophy (similar to the current theory that black holes are the seeds of new universes). The math fits, but our technology isn't far enough along yet to test it.
I'm not even sure it's that far along. It's not that the math fits, it's that "if we assume anything is possible, then this is one of the infinite number of things that follow." There's not really math to fit, just a handful of large assumptions. We could just as easily argue "evil people exist now, so there will probably be evil people in the future. In the future we'll have near-infinite computing power, and evil people will probably have access to it. If evil people could do anything evil by simulating reality, they would. If they'd do it once, they'd do it many, many times. Therefore, it follows that we're part of some evil person's simulation, and should be doing our best to break the simulation." Arguing from the premise that "in the future we'll be able to do anything at all" is a bit like starting an argument from a contradiction. You can derive absolutely anything you want from it.
Revolutionary:
Do you think there's something "Special" (in almost all but the trivial sense I mean that word) about the "stuff" of brains that "contains" and in fact exclusively implies consciousness?
Depends on what you mean by 'special,' I guess. From what I think you mean, I would say no. I think consciousness could be obtained by material different from our brains. I do think our brains are special in the sense that they're the product of billions of years of evolution, and are very, very good at what they do. It would take more than just a really fast processor, or a very big hard drive to compete with them. I don't think any piece of technology, with the possible exception of the internet in its whole, comes anywhere near the complexity required to have consciousness.
Revolutionary:
Or do you think that what we understand as a computer with sufficient computational power will evolve into what we reasonably would consider to be "conscious?"
Not a 'computer', no. I think we'll probably develop conscious technology in the future, but not just by making computers better. It's not that you get past a certain number of flops and suddenly a computer wakes up. You don't just plug a few thousand laptops together and get a conscious entity. Consciousness is not just computational power.
Admittedly, we don't have a great grasp of what consciousness is, or how it comes about. My own personal view is that it came about from the fact that our brains needed to be able to predict how other organisms would behave in given situations. It needed to 'model' other organisms' behaviors, which meant sort of 'simulating' those other organisms within the brain. Once we had such a capability, we could also turn it on ourselves, and model our own behavior the same way. This allowed us to 'tell a story' about what our selves were doing, one much more simplified than a detailed list of all the biological goings-on. Consciousness, under this idea, is sort of like a running narrative that explains and tries to make sense out of what we're doing. From there we could get a sort of feedback loop, where the consciousness not only monitors and tries to explain what the self is doing, but also can influence what it does.
Obviously this is only a very rough sketch of the idea, and leaves out all the important details (because I don't know them!), and could very well be completely wrong. The important bit, though, is that a computing-device doesn't need to be conscious to do stuff. There needs to be a reason for it to become conscious, and just adding more processing power isn't enough to make that happen.
Revolutionary:
* If a world can be simulated, it will be simulated.
But it's not just a simulated world, it's the experiences of all the entities within that world being simulated as well. Simulating a world, but not having any (or at least not all) of the inhabitants being conscious is very different from simulating a world with billions of conscious entities.
Revolutionary:
* If a world is simulated, several will be.
Not just 'several' but 'many, many': more than the number of 'real' worlds that exist, or have ever existed throughout time.
Revolutionary:
* If several worlds are simulated, if you seem to live in a world--most likely it's a simulated one.
This part I'm okay with, if we accept the first two premises.
Revolutionary:
Furthermore, that we are in a simulation and as of yet don't know it has no moral or ethical implications whatsoever.
Does it have any implications at all?