One thing I’m always asked when I talk about interstellar topics is how long it would take a spacecraft like Voyager to get to the nearest star. After explaining how far away Proxima Centauri and the slightly farther Centauri A and B really are, I tell the audience that Voyager, if headed in that direction, would be facing a travel time of over 70,000 years. That usually shifts the conversation considerably, because many people assume that if we can get to the outer planets, the nearest stars can’t be that far behind. If only it were so.
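For the curious, that figure is easy to check with back-of-envelope arithmetic; a minimal sketch, using approximate round values for Voyager 1's speed and Proxima Centauri's distance (my numbers, not from the post):

```python
# Back-of-envelope travel time for a Voyager-class probe to Proxima Centauri.
# Assumed figures: Voyager 1 recedes at roughly 17 km/s; Proxima Centauri
# lies about 4.24 light years away.
LIGHT_YEAR_KM = 9.4607e12     # kilometres in one light year
SECONDS_PER_YEAR = 3.156e7    # seconds in one year

distance_km = 4.24 * LIGHT_YEAR_KM
speed_km_s = 17.0             # approximate heliocentric speed, km/s

travel_years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"{travel_years:,.0f} years")  # on the order of 75,000 years
```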
The Centauri stars are, of course, only the closest known (and who knows, perhaps there’s a brown dwarf a bit closer). Assume a space technology able to travel at close to the speed of light and you’re still dealing with travel times that amount to years, although time for the crew would be shortened according to those interesting Einsteinian effects that cause the crew of a vehicle traveling at 86 percent of lightspeed to experience half the elapsed time felt by those left behind. Getting to anything but the closest stars at such speeds is a long haul for any crew.
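That factor of two falls straight out of the Lorentz factor; a quick check, assuming nothing beyond the standard special-relativity formula and the 86 percent figure above:

```python
import math

def lorentz_factor(beta):
    """Time dilation factor for a craft moving at fraction `beta` of lightspeed."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# At 86% of c the crew experiences roughly half the elapsed time
# measured by observers left behind.
gamma = lorentz_factor(0.86)
print(f"gamma = {gamma:.2f}")  # ~1.96, i.e. close to 2
```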
Seth Shostak talks about this issue in a recent essay, noting that 61 Cygni, the first star whose distance was correctly measured (in 1838, at the same time that Thomas Henderson was measuring the distance to the Centauri trio) is eleven light years away. To understand the distance, we play the analogy game: A ping-pong ball representing the Sun, placed in New York, would be matched by a smaller ball, representing 61 Cygni, sitting in Denver. We’re talking tens of trillions of miles.
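The analogy can be sanity-checked with a scale calculation; the ball diameter and the resulting city-to-city distance are my assumed round numbers, not Shostak's:

```python
# Scale the Sun down to a ping-pong ball and see where 61 Cygni lands.
SUN_DIAMETER_KM = 1.39e6
BALL_DIAMETER_KM = 4.0e-5        # a standard 40 mm ping-pong ball, in km
LIGHT_YEAR_KM = 9.4607e12

scale = BALL_DIAMETER_KM / SUN_DIAMETER_KM
scaled_distance_km = 11 * LIGHT_YEAR_KM * scale
print(f"{scaled_distance_km:,.0f} km")  # ~3,000 km, roughly New York to Denver
```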
Shostak’s point is to examine what he calls the ‘one percent’ rule. The Romans could hold an empire together as long as travel times to connect the empire were no longer than about one percent of the lifetime of the average centurion. Apply that to a space ’empire,’ even one moving at close to light speed, and you run into problems:
Even if we could move people around at nearly the speed of light, this “one percent rule” would still limit our ability to effectively intervene – our radius of control – to distances of less than a light-year, considerably short of the span to even the nearest star other than Sol. Consequently, the Galactic Federation is a fiction (as if you didn’t know). Despite being warned that Cardassian look-alikes were wreaking havoc and destruction in the galaxy’s Perseus Arm, you couldn’t react quickly enough to affect the outcome. And your conscripts would be worm feed long before they arrived on the front lines anyway.
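The arithmetic behind that radius of control is simple: one percent of a lifetime, travelled at the best possible speed. A sketch, assuming a roughly 75-year lifespan (my figure, not Shostak's):

```python
# The "one percent rule": the radius of control is the distance covered
# in 1% of a lifetime, even travelling at lightspeed.
def control_radius_ly(lifespan_years, fraction=0.01, speed_fraction_of_c=1.0):
    """Radius of control in light years for a given lifespan."""
    return lifespan_years * fraction * speed_fraction_of_c

radius = control_radius_ly(75)
print(f"{radius:.2f} light years")  # 0.75 ly, well short of the 4.2 ly to Proxima
```

Even a doubled or tripled lifespan leaves the radius short of the nearest star, which is why the lifespan question raised in the comments matters so much.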
Lively discussion, but what about communications? Information exchange usually takes place quickly, with our idea of maximum delay often limited to the amount of time it might take an overseas letter to arrive. That time is clearly shortening — we live in a world where a one-day delay in returning an e-mail can be perceived as mystifying, and the generation now being raised on iPods and iPhones, texting away at each other at whim, is unlikely to accept anything but instantaneous communications.
This may be useful, at least for those of us who worry about METI — the idea of sending messages to nearby solar systems rather than listening for signals from them. Do we as a civilization have the long-term approach needed, even if we decided such a thing were benign, to mount a continuing world-wide attempt to communicate with a civilization hundreds of light years away? The attempts made thus far have been sporadic. Will they become more than that?
Our cultural patterns argue against the idea, and although I am a champion of long-term approaches in most respects, in this case I defer to impatience. Because until we understand what we are doing and have an informed consensus on the matter, shouting to the cosmos could have implications we have yet to understand. Let’s put METI on hold.
A greatly enlarged public debate on METI is needed, one that incorporates a wide variety of disciplines, before further signals are sent. Meanwhile, that Great Silence that Fermi speculated about, and which we now seem to be encountering in our SETI searches, may simply imply that other cultures are much like ours, knowledgeable about the distances involved and unwilling to make the generational commitment to a kind of communication that may never pay off. Shostak puts it this way: “…while the cosmos could easily be rife with intelligent life – the architecture of the universe, and not some Starfleet Prime Directive, has ensured precious little interference of one culture with another.” That may not be such a bad thing, at least until we have sound reasons for making our presence known in a cosmos we are only beginning to understand.
Shostak’s one-light-year limit on intervention, as implied by the 1% rule, assumes that we will always be saddled with a sub-100-year life span. Given the astounding advances in biology and medicine we have seen in just the past few decades, I would not be at all surprised if someone alive today (or perhaps a child of someone alive today) lives to see their 200th birthday.
The natural upper limit on human life expectancy appears to be about 120 years, and it will take a revolution in biotechnology to break that limit. Simply curing cancer and heart disease won’t cut it. But given the almost limitless profit-making potential for the first company to make the breakthrough, it seems almost certain that, as our knowledge and technology improve, we will one day find the answer to extremely long life spans.
Perhaps then, in a star system-spanning civilization, life-extending technology is simply part of a species’ “natural” evolution. When you can live for 1,000 or 5,000 years, a few years traveling between stars (perhaps in suspended animation anyway) would not seem so onerous.
Either way, I don’t think any limit on the size of an administrative unit, be it empire or commonwealth, will be much of an impediment to getting out there, exploring our galactic neighborhood and expanding into it. And I don’t see why it would be for any other species either. People will do it even if they risk being isolated for the rest of their lives.
As for caution over METI: well, I can understand it, even if I think the risks of announcing our existence to the Universe are often overblown. But wouldn’t it be a real shame if there were dozens of alien civilizations out there, all believing they are alone in the galaxy because they are all too fearful of speaking up and reaching out to their galactic neighbors?
I’ve read that Stephen Hawking has expressed reservations about METI, his concern centered on our utter lack of knowledge about who (or what) might receive our message, and what their (or its) reaction might be. While some people ascribe negative human traits and emotions to intelligent alien life forms (anger, suspicion, intolerance, etc.) and assume therein might lie the danger, it’s also possible that any risk we face comes from an utterly dispassionate, coldly calculated alien conclusion that we could pose a risk to them instead (however slight), particularly if they somehow become aware of our historical penchant for senseless violence. The logical thing for them to do then might be to launch an overwhelming, immediate preemptive strike, perhaps with technology so advanced that not only would we be defenseless against it, we wouldn’t even understand it.
No hard feelings, of course.
I think I’m with Hawking on this one.
About twenty years ago, Project Longshot was conceived by NASA and the US Naval Academy to send an unmanned probe to Alpha Centauri in only about 80 years with current technology.
The full report:
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19890007533_1989007533.pdf
Hi All
I think the odds against a hit by METI are so insanely high that it’s futile – but then that makes SETI via radio kind of pointless.
IMHO the only worthwhile strategy is SETA – the Search for Extra-Terrestrial Artefacts – and passive optical observation. Civilisations will indulge in planetary engineering to survive, I suspect, and such ventures will eventually be visible to our telescopes. Super-sized ventures like Matrioshka Brains will be a sure sign of ultra-intelligence tempered by compassion and reason because They will know we exist already – They would have the resources and motivation to map the Galaxy in detail – and we haven’t been on the receiving end of a relativistic bombardment.
Project Longshot was really little more than an academic exercise at the Naval Academy, but the sketchy description available is still fun to read. Glad to see it’s finally online; I had to move heaven and Earth to get a NASA copy five years back, with help from the good folks at GRC in Cleveland.
Could it be that our DNA is computer code that is a message that awaits its Rosetta Stone?
I don’t know enough about DNA’s data-storage capacity — is there enough room “in there” to not only have the code to “run a meat robot” but also to have the Encyclopedia Galactica?
It could be that immortality is waiting right there in our genes — that and FTL physics, a passport into the invisible worlds of Dark Reality (whatever that would be), time travel, and the Cosmic Ten Commandments.
There, woo, glad to get all that out of my nervous system.
Now you guys have to deal with it. Thankfully I am not burdened with an education that would make thinking about these concepts way too big a scholarly burden to actually address.
Speculation, gotta love it.
;-)
Edg
Project Longshot could be accomplished using the original Project Orion Nuclear Rocket technology IMHO.
The problems would be shielding against interstellar radiation and dust as the speed increases, and the fact that we still haven’t made an AI smart enough yet — but we’re close.
If it can be imagined, it can be engineered.
It wouldn’t surprise me one bit if we’re not the first to think about it.
Adam: your SETA idea is one that might become relevant as transit searches start covering more of the sky: a transit of a non-spherical object should have a different lightcurve, which could in principle reveal the presence of artifacts in orbit around a star.
I disagree with the idea that Matrioshka Brains would necessarily be benevolent though.
Edg Duveyoung,
You might be amused by some work done by Australian astrobiologist P. C. W. Davies. He calculates, in “Emergent Biological Principles and the Computational Properties of the Universe”, that “…one arrives at an upper bound for the total number of bits of information that have been processed by all the matter in the universe [which] is also ~ 10^120.” (Also see Lloyd, S. (2002) ‘Computational capacity of the universe’, Phys. Rev. Lett. 88, 237901.) Davies also suggests: “…A similar calculation may be performed for genes. A string of n nucleotides of 4 different bases may be arranged in 4^n ≈ 10^0.6n different combinations, yielding a lower bound for emergent properties of n > 200. …. Most genes are somewhat longer than 200 base pairs (typically ~ 1000).”
It seems that not only does DNA’s data-storage/computation capacity have enough room “in there” to “run a meat robot” but also to compute considerably more than the entire physical universe!
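Davies’ n > 200 threshold is easy to reproduce: a string of n four-letter bases has 4^n ≈ 10^(0.6n) possible arrangements, which overtakes the ~10^120 bound once n passes roughly 200. A quick check:

```python
import math

UNIVERSE_OPS_LOG10 = 120            # Lloyd/Davies bound: ~10^120 bit operations

# log10 of the number of arrangements of n four-letter bases:
# log10(4^n) = n * log10(4), with log10(4) ~ 0.602
def log10_combinations(n_bases):
    return n_bases * math.log10(4)

threshold = UNIVERSE_OPS_LOG10 / math.log10(4)
print(f"n > {threshold:.0f}")       # about 199 bases, matching Davies' n > 200
print(log10_combinations(1000))     # a typical ~1000-base gene: ~10^602 combinations
```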
The problem with DNA as an information carrier is that it is so ‘fluid’, i.e. so prone to mutations. Hence it would be difficult to keep the information constant through the eons of time. OK, the information in there has been stable enough to code for one and the same species (like humans) for a couple of hundred thousand years, but that’s about it. The power of DNA is exactly that: its great resilience and adaptability. Great for withstanding selective pressures, but hardly good enough to code for FTL travel.
So, to summarize: the problem with DNA as an information carrier is not its storage capacity, but its durability as a medium.
Hi andy & Edg
andy, I didn’t say benevolent just not paranoid enough to hit us with pre-emptive strikes via relativistic missiles, but your point is taken. Such entities would be too beyond us to make a guess either way as to their opinion of us – I suspect they’d treat us like we watch wild-life, out of curiosity. But then they might also replicate us in virtual worlds and see what we do. Heaven or Hell? Or some gruesome lab like Eli Roth’s “Hostel” series?
Edg, human DNA is about 3.3 billion base pairs long, but much of it is repeated short segments and lots of copies of retroviruses and transposons. I’m not sure how much actual information that would contain, but in raw terms each base pair is roughly 2 bits and each codon (3 base pairs) is 6 bits. Thus in terms of 6-bit code the genome is 1.1 billion “words”, or 825 megabytes (8-bit bytes). The coding part, that makes genes, is 1.2% of that, while the regulatory regions that control it are about 3.8% – thus 5% of 825 megabytes makes a human body… 41.25 megabytes. Smaller than Windows and most desktop games these days.
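Adam’s arithmetic can be checked in a few lines, using his figures for the base-pair count and the coding/regulatory percentages:

```python
# Information content of the human genome, following the estimate above.
BASE_PAIRS = 3.3e9
BITS_PER_BASE_PAIR = 2                # four possible bases = 2 bits each

total_bits = BASE_PAIRS * BITS_PER_BASE_PAIR
total_megabytes = total_bits / 8 / 1e6
functional_fraction = 0.012 + 0.038   # coding genes + regulatory regions

print(f"whole genome: {total_megabytes:.0f} MB")   # 825 MB
print(f"functional part: {total_megabytes * functional_fraction:.2f} MB")  # ~41.25 MB
```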
I strongly agree with Tacitus’ second-to-last paragraph: the main incentive for humans (and other sentient species?) to travel to the stars will not be to establish an empire, but to explore and settle. Therefore, the main criterion for colonization of the galaxy will not be whether we can keep it together administratively, but whether humans are able and willing to bridge the gap, even if only once in their lifetimes.
Oddly enough some species of “lower” animals & plants have much bigger genomes than humans, but most of it appears to be “junk” – useless duplicates of meaningless segments and genes.
But, and this is weird, there are segments in genomes which are the same across almost all species – yet when removed from “knock-out” mice there’s no apparent effect. Thus there are seemingly special bits of DNA, preserved across a huge range of species, that seem to do nothing detectable. The total amount isn’t huge – a few megabases – but it seems to be copied at high-fidelity for some unknown reason.
Why? What are the ultra-conserved sequences for? Could they be a message? Or some new kind of self-preserving transposon? No one yet knows. I’ve speculated, on my blog, that they might be part of a distributed program that all living things are a part of – the natural Mind that guides evolution (if it’s guided) – but that’s just speculation.
…and after a couple more decades, the ultimate triumph of the genome project will be decoding these sequences to find the message:
“This biosphere Copyright (C) 2159801.26843 Xenoterraforming Inc. 4158 Jhrkszqurt Street, Myxhgrlln, Zeta2 Reticuli IV. Reverse engineering, disassembly, duplication and propagation to other planets forbidden under Act LCCXIV Paragraph 23.4 of the Restriction of Genetically Engineered Organisms Act 2058321.86921.”
Shortly before we are annihilated by alien spacecraft for violating said legislation by decoding our own genomes…
There is something that bothers me a bit in this whole discussion about ‘highly advanced civilizations’ and their views of us, life, the universe, etc.
There seems to be a general idea, that those ‘advanced’ beings would automatically also have more advanced intelligence, morality, a level of thinking’ way beyond us’, etc.
I dare to question this; technological advancement does not necessarily imply greater intelligence, nor ‘higher ways of reasoning’, let alone morally higher objectives. It is a wrong idea that intelligence would automatically and inevitably keep increasing over time. From an evolutionary point of view, it is very well possible that, once a level of intelligence is reached which is amply sufficient for survival, this level does not increase anymore over time.
Sobering example: humankind has not become (significantly) more intelligent over the past xx thousand years and probably not a lot during our existence as the species Homo sapiens (or at least as a subspecies H. sapiens sapiens), judging by brain volume.
We have just learned a lot more through experience, trial and error. In other words, what we call (technological) advancement is largely a matter of experience and learning by the same species over time, not increased intelligence, let alone a ‘higher state of consciousness’.
And with this same intelligence, and mentality, and morality, we may one day be able to reach the stars.
And the same principle may hold true for other sentient species, a sobering thought in a way.
I guess the relativistic missiles thing depends on how far away the launcher is, how well they know the universe to ensure that said missile hits its designated target, and whether they really are capable of the massive energy expenditure required.
After all, a relativistic missile smashing up the wrong planet is going to be a pretty good way to alert someone that there are hostile entities out there targeting them. Such collisions might even be detectable in nearby solar systems, which could bring new players into the game – the last thing a sufficiently paranoid entity would want.
In a universe of finite resources – energy and matter – and ever-increasing entropy, any immortal civilization, no matter how peacefully *minded* and *advanced*, would know conflict is unavoidable in the very long term.
After all, by definition:
1 planet can accommodate only a single Kardashev Type I civilisation.
1 star can accommodate only a single Kardashev Type II civilisation.
1 galaxy can accommodate only a single Kardashev Type III civilisation.
And the *strategy* for deferring conflict can be only one: stay a Type II civ for as long as possible, hopping from star to star strictly as needed for survival, and make sure you are able to preempt anyone else’s potential to go Type III (or Type 2.1) before you do, or else face extinction in the long term. But the day will come when the galaxy runs out of uninhabited stars for Type II civs to migrate to as their current stars begin to fade…
So I think attaining Type II status is mandatory for long-term survivability, and THEN you should make everyone else aware that you exist and what your long-term strategy is: cooperation is the only way for a galaxy to stay below 2.1 for as long as possible…
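The type numbers in this scheme are conventionally tied to power budgets (Type I roughly a planet’s insolation, Type II a star’s luminosity, Type III a galaxy’s). A quick sketch using Sagan’s interpolation formula; the power figures below are standard order-of-magnitude values, not from the comment:

```python
import math

# Sagan's continuous interpolation of the Kardashev scale:
# K = (log10(P in watts) - 6) / 10
def kardashev(power_watts):
    return (math.log10(power_watts) - 6) / 10

print(f"{kardashev(2e13):.2f}")    # humanity today, ~2e13 W: about type 0.73
print(f"{kardashev(3.8e26):.2f}")  # a full Sun's output: about type 2.06
```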
As for galaxy-spanning METI, I think it would be trivial for Dyson-sphere Type II civs: just modulate the sphere’s opacity (either locally or globally) and you would be broadcasting a signal with the power output of the entire star.
By modulating opacity globally you get a *broadcast* message; by modulating opacity locally (creating shapes if needed) you get a directed message.
And you can have simultaneous *conversations* going on without even the cost of constructing special-purpose communications hardware beyond your day-to-day civ infrastructure.
If you are immortal and you care to filter out short-lived civs (which probably do not share your long-term conflict-avoidance survival strategy) from the channel, you just take your time with the modulation, using a time scale that ensures any *listener* would have to be equally long-term minded.
Interesting discussion!
Afonso: if a universe with finite resources were the case, and two or more Type IIs were on the scene at the same time, wouldn’t they fill the galaxy with Dyson shells and store the energy to avoid conflict for as long as possible? Wouldn’t a single Type II do the same to avoid the cost and danger of migrating to a new galaxy?
Is the absence of a large number of Dyson shells (i.e., the continued presence of visible stars in the galaxy) some sort of proof that ATCs don’t rely on them for power, or aren’t worried about limited resources?
A
@andy: love your humor (10:40), reminded me in a way of The Hitchhiker’s Guide.
@Alfonso: though I partly agree with you, I would like to make the following comments:
as I have mentioned in a different post on this website before, I think the Kardashev classification’s weakness is the huge and relevant gap between levels II and III: from one solar/planetary system to an entire galaxy! It is exactly the missing step in between, namely the capability to utilize more than one solar/planetary system, that is probably the most essential for survival and at the same time the biggest gap to bridge. Let’s make the distinction between IIIa (an interstellar civilization) and IIIb (a galactic civilization).
While IIIa may thus become an imperative for the survival of any advanced civilization, IIIb is more of a ‘cultural’ or political thing, not a necessity: domination of the galaxy.
Therefore I do not agree that the first and main reason for conflict would be scarcity of resources, also because, as I stated above, a technological civilization would be capable of stellar travel and colonization long before, and irrespective of whether, its resources run out. Besides, many, most (nearly all?) resource problems can be solved within one’s own planetary system more easily and cheaply than by obtaining those resources from another system. The only real imperative for moving to another system would be an impending catastrophe of planetary or solar system proportions.
Although, here on earth, many historic conflicts have found their root causes in resource scarcity and competition, I therefore think that an interstellar conflict, however unlikely, would rather have to do with culture and mutual distrust.
Likewise, such a conflict could then probably be avoided by agreement among civilizations on a fair division of available planetary systems.
Check out galaxy NGC 5907, which appears to have a high abundance of red dwarfs, though these could also be interpreted as Dyson Shells, which would radiate heat in the infrared.
http://www.ifa.hawaii.edu/~meech/bioast/program/LEINT.1.9.pdf
http://www.telescopes.cc/ngc5907.htm
As you know, science fiction writers have been exploring this whole idea of alien psychology ever since Jules Verne and H.G. Wells, if not earlier.
The alien Moties in Niven and Pournelle’s The Mote in God’s Eye are among the most original and believable I’ve read, and the Taurans in Haldeman’s The Forever War are interesting too – the Taurans are clones, and it’s our inability to understand their unique mindset that leads to misunderstandings and the war. I’m sure others have their own alien “favorites.”
There have been previous discussions here regarding the melding of man and machine to not only “improve” humans and vastly increase lifespan, but make interstellar travel a possibility someday. We’re seeing now the migration of electronics from hand-held to wearable- are implantable cell phones too far away? We already have pacemakers, knee replacements and other medical implants, out of necessity. It’s not a large leap to imagine the “unnecessities” soon being implanted. And as the distinction between man and machine blurs, what does that do to our own mindsets, particularly if we begin to meld computer technology with the human brain? Will we never forget a memory, or have a false one? Will our new logic and clarity of thought eventually supplant such basic, “illogical” human traits as empathy, forgiveness, generosity and other traits that might seem “flawed” but define our very humanity? Evolution will cease to be random and start to become selective and deliberate, both through biological and mechanical means. Much can be gained, but the risk is that certain intangibles might be lost, and humans become unrecognizable in fascinating yet frightening ways.
It’s not a large leap to imagine that other intelligent species have faced the same slippery slope in their own evolution. Those who took that path- perhaps out of necessity due to climate changes (ahem) or some other threat to their existence- might have deliberately mutated to something so alien, we have no basis to even begin communication with them. The flip side of that coin is that if we resemble them because we too embraced artificial “improvements,” then communication might be easy once any software incompatibility is overcome, possibly through some mathematical touchstone that allows for translation.
The question is, will it really be two unique species communicating, or essentially the same species, arrived at nearly the identical place through technology, computer-to-computer as it were, despite (perhaps) vastly different biological beginnings?
I guess we can speculate until we’re blue in the face about what “they” might be like, but with our rapidly-growing potential to change ourselves in pursuit of some God-like ideal, we also have to ask how “alien” humans will be if and when contact is finally made.
adam and andy:
Check out “Bringing Ancient Human Viruses Back to Life: A Jurassic Park or Salvation?” over at The Daily Galaxy. Seems some of the genome ‘junk’ has been decoded and it’s a note from an ancient virus!
Mark makes a valid point about a Technological Singularity, how it could come about and how it can affect ETI issues. What if a culture turns inward, into a virtual environment? What if the resulting post-singular intelligence turns all of the material in the Solar System into a computronium Dyson Shell around the Sun? Would it deem the outside Universe useful anymore? Could it control its star enough to control expansion into a red giant? Are K-Type II civs of this type? When the time came, would it upload itself into another Universe/dimension?
So many questions. Excellent thought experiment!
“…it is very well possible that, once a level of intelligence is reached which is amply sufficient for survival, this level does not increase anymore over time…”
If higher intelligence means more energy needed to drive that intelligence then there might be a plateau at which it levels out. We are already smart enough to develop spears, bows and arrows, guns, and such for defense. Within the natural world against non-tool makers it’s pretty effective. In such a world the higher cost in food needed might have outweighed any benefit.
But food isn’t such a big issue in the developed world any more. We can spare a few more calories towards brainpower. We are also living in an environment where brain power now has more value than just getting food, defending against predators, and providing basic shelter.
With the change in environment, what we have observed in the past might no longer hold true. Especially if we start making changes to ourselves intentionally.
——-
Agreed with putting METI on hold. It’s like jumping into a pond head first without first knowing how deep the water is. It’s probably safe, but unpleasant things can still happen.
About the only thing we can say is that there probably are no berserkers out there. If there were we wouldn’t be here. We are currently a much bigger threat to ourselves than aliens.
So many interesting thoughts and view points.
Hi All
Milan Cirkovic is a Serbian futurist with some interesting ideas on SETI… here’s his latest…
On The Timescale Forcing in Astrobiology
Authors: B. Vukotic, M.M. Cirkovic
(Submitted on 10 Dec 2007)
http://arxiv.org/abs/0712.1508
Abstract: We investigate the effects of correlated global regulation mechanisms, especially Galactic gamma-ray bursts (GRBs), on the temporal distribution of hypothetical inhabited planets, using simple Monte Carlo numerical experiments. Starting with recently obtained models of planetary ages in the Galactic Habitable Zone (GHZ), we obtain that the times required for biological evolution on habitable planets of the Milky Way are highly correlated. These results run contrary to the famous anti-SETI anthropic argument of Carter, and give tentative support to the ongoing and future SETI observation projects.
…in sum, Brandon Carter argued that because the odds of life developing are unknown and possibly random, it’s very strange that life has existed on Earth almost as long as Earth itself, even though the two timescales are seemingly independent – therefore the odds are that life is rare and we’re alone. Or so he argues, with some statistical mathematics.
What Cirkovic has done is argue for an astrophysical cause of correlation between life’s evolution and the evolution of stars and planets – thus invalidating Carter’s assumption of independence.
So what does that mean for SETI? And METI? In galaxies of old, stable civilisations we might expect a benign galactic milieu… but amongst a bunch of newcomers like us???
@Mark Wakely: twice you mention ‘It’s not a large leap’. I am sorry, but I think that in both cases you do make very large leaps indeed: human mind-machine (singularity) integration, which is highly speculative; and the complete unknown of other advanced civilizations and their reasoning.
@david: interesting thoughts and view points.
But with regard to the first part (intelligence), I think we should not confuse the need for greater development, driven by selective pressure, with the ability to utilize more resources, even at the same level of intelligence.
Yes, we developed spears, bows and arrows, and the like with roughly the same level of intelligence as space ships and nuclear bombs. Apparently, to achieve those things, we don’t need a lot more brain power, just time and experience (learning).
Our present-day environment does NOT select for greater brain capacity (unfortunately?): highly intelligent people don’t have more children, nor do they live much longer than people of average or lower intelligence. Apparently it is no longer a selective advantage for survival.
The way it looks, we will have to make do, and can make do, with the intelligence we have. And something similar may be true for other intelligences. It is definitely not a logical given that a ‘good thing’ keeps developing indefinitely.
Hi Ronald-
The first leap I mention are implantable cell phones. Rather than singularity integration, what I imagine they will be at first is something far more rudimentary, perhaps akin to a cochlea implant with a wireless beneath-the-skin touchpad located on the arm of your choice. Nothing as grandiose as a mind-machine- just a phone you couldn’t lose.
As for the second leap, I suppose I’m as guilty as everyone else who speculates about alien life. I’m assuming at least a few intelligent alien species out there would also have the desire to “improve” themselves by biological and mechanical means, whereas they might all be perfectly content to live life on a more natural level, with no desire at all to tinker with their “alienness.” But if there are other intelligent species that embrace technology the way we do, then no, I don’t believe it’s a large leap to imagine they might face the same temptation we do to apply that technology inward and use it to become much more than they are without it.
Curiosity about the way things work and the desire to experiment and explore might indeed be unique human traits not shared by any other life form anywhere.
If any/all intelligent alien species out there are living without advanced technology, however, then projects like SETI are doomed to failure, which could very well be the case. We do not, as you state, know anything at all about how aliens reason, since we don’t know any aliens.
Are Humans Evolving Faster?
PhysOrg.com, Dec. 6, 2007
University of Utah researchers have discovered genetic evidence that human evolution is speeding up — and has not halted or proceeded at a constant rate, as had been thought — indicating that humans on different continents are becoming increasingly…
http://www.kurzweilai.net/email/newsRedirect.html?newsID=7583&m=25748
@Mark Wakely: regarding implantable cell phones, implanting transmitters could be a very bad idea unless you like cancer. So I wouldn’t regard doing that as “not a large leap” even without considering sociological aspects of such a technology.
Hi andy-
Like any implant, I had assumed FDA approval would be required, but forgot about this recent report you linked to. Interesting that the FDA stands by its approval of the microchip technology- there are certainly enough question marks to study the device further.
You can bet, though, that if a way is found to prevent the transmitter/cancer link, there are those who would have a cell phone implanted in an instant, if only to be the first one on their block with the latest thing. Even when legitimate fears were raised a few years ago about cell phones and cancer, cell phone sales continued to skyrocket. For many, it was a risk/benefits decision, and the benefits clearly won out.
It’s surprising how many people don’t mind the role of guinea pig for something they feel they can’t live without.
To Ronald:
ad stagnating morality: I don’t think morality is so dependent on the level of intelligence or on physical prerequisites; it is rather a product of society. And as any social scientist can confirm, societal products (like philosophy, culture, religion or morality) are inevitably bound up with current technology. Thereby, I guess, since intelligence can stagnate at a certain level (because of its strong binding to physical/biological prerequisites) while technology keeps evolving through the gathering of more and more experience (as you claim), the level or nature of morality will be changing all the time. I think there can be some deterministic way.
Ad intelligence of humankind: Despite the possibly limited nature of the intelligence of individual members of the human species, the total may still be increasing, thanks to more effective sharing among units (individuals) or possible bio-tech innovations. But that was already mentioned above…
To Afonso:
ad Kardashev civilizations fighting for a system/galaxy: Well, I guess the annihilation of competing civs is not the only way to achieve dominance in a system. Civs rise in various ways; along with the elimination of competitors, let’s also mention integration. I don’t see a global civilization coming about through the total war of a dominant tribe against all others, but rather in different ways. The same rules may apply on a broader scale.
In other words: it need not be a question of whether the galactic Kardashev Type III civ will be Humans OR Vulcans; maybe it is a question of a single Human-Vulcan Kardashev Type III civilization.
Ad intelligence once more: Darwinian selection on the basis of intelligence does not necessarily mean that intelligent people would have more children. It is also about things like position in the societal hierarchy, the power you hold, or the level of your influence among others. And still, political or philosophical leaders are at least in the upper half of the intelligence scale (mostly).
@Michal;
thanks for your interesting elaborations.
I like your 3rd paragraph, it presents an interesting outlook: to be incorporated in the Galactic Empire (or better: the New Republic) ;-)