... Singularity to what? As I understand it, a technological singularity is something that surpasses HUMAN cognitive ability, so what are we surpassing for it to be a singularity? We can't actually surpass ourselves because we ARE ourselves. Or are you referring to some giant collective brain here...?
A singularity is not the same thing as exponential growth. Exponential growth continues at an ever increasing rate, but at any point it always has a definite and finite value. A singularity is where the quantity does not have a defined value at all - usually because it runs off to infinity at a particular point. 1/x has a singularity at x = 0, the value is undefined. 2^x is exponential growth, and has a definite value for all values of x.
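The difference can be sketched in a few lines of Python (a toy illustration, not anything specific to AI):

```python
# Singularity vs. exponential growth, numerically.
# f(x) = 1/x blows up without bound as x approaches 0 and is
# undefined at x = 0; g(x) = 2**x is always finite, however large x gets.

def f(x):
    return 1 / x   # undefined at x = 0 (raises ZeroDivisionError)

def g(x):
    return 2 ** x  # defined and finite for every x

for x in (1.0, 0.1, 0.001, 1e-9):
    print(f"1/{x} = {f(x):.3e}")  # grows without bound as x -> 0

print(g(100))  # enormous, but still a perfectly definite number
```

Approaching the singularity, 1/x exceeds any bound you name; 2^x never does that at any finite x, no matter how fast it grows.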
My view is that there's no such thing as a singularity in the physical world. (Don't ask about black holes unless you want to get into General Relativity.) Think about it, what would a technological singularity actually mean? Infinitely fast computers? That doesn't really make sense - that would mean a computer that can make any number of calculations in exactly zero time. So it seems to me that what we have with computers is clearly exponential growth and *not* a singularity. And in the physical world exponential growth is always limited by something. The population of rabbits grows exponentially - until they run out of food. Computers will at some point reach the physical limitations of the materials and technologies in use.
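The rabbit example is the classic logistic model: growth looks exponential at first, then flattens out at a carrying capacity. A minimal sketch, with made-up numbers for the growth rate `r` and capacity `K`:

```python
# Exponential growth hitting a ceiling: the logistic model.
# r and K are arbitrary illustrative values, not real rabbit data.

def logistic_step(p, r=0.5, K=1000.0):
    """One generation of logistic growth: roughly exponential while
    p << K, flattening out as p approaches the carrying capacity K."""
    return p + r * p * (1 - p / K)

p = 2.0
for generation in range(60):
    p = logistic_step(p)

print(round(p))  # settles near K = 1000, not infinity
```

The early generations multiply by roughly 1.5 each step, just like an exponential, but the `(1 - p/K)` term - the rabbits running out of food - guarantees the curve levels off instead of running away.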
Whether or not computer AIs can be truly sentient is another question - one that I think there just isn't an answer for yet. I think it's dangerous to extrapolate too much from current technology - there's always the possibility of unforeseen tipping points, emergent properties, etc.
Speaking as an IT expert, it's technically impossible for an AI singularity to exist. At least according to the current computer technology we have and are predicted to have in the future.
The AI needs to meet two requirements to qualify as a singularity: 1) It needs to have access to greater intelligence than humans. 2) It needs to have the ability to improve itself.
Part 2 is the easy part, relatively speaking. It's possible to write an AI that "learns" and is able to improve its own source code. However, doing so is a memory- and time-consuming task, which limits the AI's ability to do other things. Since the AI doesn't need to update itself constantly, it's possible to give it a scheduled time at which it performs updates.
However, an AI cannot have intelligence equal to or greater than a human's, though that may depend on how you'd define intelligence. At the most basic level, I can tell you a computer cannot reason as we do. Even through complex algorithms, it will never be able to produce thoughts like ours. It can do calculations at high speeds, and through that, it is possible to program an AI capable of performing analysis and synthesis, two of the tools we humans use to learn about the world around us. Thus, the AI will be strong at logical tasks. One of the tenets of any program is that it must execute its code step by step, line by line. The order of operations is set. This means the very basis on which the AI is built, the binary system, does not allow lateral thought.
Now, to be honest, an AI may come to the same conclusion as a human when presented with a lateral puzzle. The AI will have spent a lot more memory, time, and resources to reach that answer, though.
Why is this important? Because lateral thinking is what drives innovation. This AI, even when it rewrites itself, is limited to its maximum specifications. Just like us, it cannot upgrade its hardware on a whim. It would take lateral thought to invent new hardware on which to improve further, and it would take us humans to build that hardware, since an AI, even one that advanced, cannot do anything beyond its capabilities.
Hence, it cannot actively overtake us in every way and evolve indefinitely. Any AI we create with today's technology is wholly dependent on us.
Indeed, the argument was not for absolute impossibility, because that would be foolish. If we had computers on a different basis, one that allows for intuition, then it would be possible to create an AI equal to a human.
In current programming, it simply cannot be done. Similar results can be obtained, but it cannot "think" just like a human would. When judged on sheer logical processing, it is arguable that even contemporary AI systems vastly outsmart the human brain.
Woah, sounds like a pretty crazy concept! I'm not sure if we are undergoing said singularity yet or not, since it seems like it'd be easier to find the event horizon of this event than the real origin itself... Then again, seeing as we are already a lot smarter than we'd be without the technology we have, maybe we are on the horizon already? I'm still really excited to see what the future has to hold!