Where is the next Horizon?
It seems an appropriate question, as we have gotten to a plateau and nobody seems able to see the next, higher mountain… and I have been in a “giving my opinion” mood lately… So what are the “problems” that need to be solved?
Hardware & Network Latency
We are already using fiber-optic cable in many places, and researchers are looking at ways of relieving hardware bottlenecks; one such experiment, Remy (a computer-generated TCP congestion-control scheme), has shown a 2X speedup over current approaches!! We will need to roll this kind of architectural readjustment out across the existing infrastructure to get to a point where we can transfer exabytes in mere seconds.

The new XBOX ONE architecture and approach shows that we are moving toward centralized offloading of computational tasks, so we will move to a paradigm where we have layers of compute power sandwiched between the back-end and the client. What makes up our cloud is also changing: in the past we had a predominantly uniform landscape comprised primarily of similar CPUs (Xeons, lately). Researchers have just added a quantum computer to the cloud, and made it free for public use! And NVIDIA has just announced a GPU appliance for GPU virtualization, the NVIDIA GRID Visual Computing Appliance (VCA), which will allow workstation performance from a virtual machine. All these advances in hardware are reshaping the landscape.
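To put "exabytes in mere seconds" in perspective, some back-of-the-envelope arithmetic helps. All figures below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope: what link speed does "an exabyte in seconds" imply?
# Illustrative arithmetic only -- real transfers add protocol overhead.

EXABYTE_BITS = 8 * 10**18  # 1 EB = 10^18 bytes = 8 * 10^18 bits

def transfer_seconds(link_bits_per_sec):
    """Time to move one exabyte over a link of the given speed."""
    return EXABYTE_BITS / link_bits_per_sec

# Over a 100 Gb/s link (very fast by 2013 standards):
print(transfer_seconds(100e9))   # ~8e7 seconds, i.e. roughly 2.5 years

# To move an exabyte in 10 seconds you would need:
print(EXABYTE_BITS / 10)         # 8e17 bits/sec = 800 petabits per second
```

So "mere seconds" demands links several million times faster than today's fastest, which is why the architectural rework has to reach all the way through the infrastructure.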
Processor compute “ceiling” vs. Energy Consumption
Intel has been exploring mineral oil as an alternative to liquid cooling; the image to the right shows servers running in a pool of mineral oil. Along with the improvements in semiconductor manufacturing processes, it is expected that we will be down to 5 nm chips by 2022; for those of you averse to math, that is 9 years from now. An interesting note: the wiki article on the 14 nm process predicted that we wouldn't have a 14 nm chip until 2014, and Intel demoed a 14 nm chip today, September 10, 2013. We've put so much effort into improving the efficiency of our processors, but the languages we use to do work on them have stayed basically the same. Researchers have been making silicon-based neurons for computation for a while now. With these chips we will be able to do faster pattern matching, and to develop deeper, more complex logic circuits dynamically as hardware (self-balancing and restructuring physical pathways), just as neurons do, although that may yet be a few years down the road. IBM has
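As a rough sanity check on that roadmap: the classic full-node cadence is a ~0.7x linear shrink every two years. A quick sketch (the shrink factor and cadence are idealized assumptions):

```python
# Project feature size forward at the classic ~0.7x-per-two-years cadence.
# Idealized assumptions -- real roadmaps slip and node names drift from
# physical feature sizes.

def projected_node(start_nm, start_year, year, shrink=0.7, cadence=2):
    """Feature size projected forward at a fixed shrink per cadence."""
    steps = (year - start_year) / cadence
    return start_nm * shrink ** steps

# Starting from 14 nm in 2013, where does the ideal cadence land by 2022?
print(round(projected_node(14, 2013, 2022), 1))  # -> 2.8 (nm)
```

By the ideal cadence we would be under 3 nm by 2022, so the 5 nm prediction is, if anything, a conservative reading of the trend.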
… unveiled a breakthrough software ecosystem designed for programming silicon chips that have an architecture inspired by the function, low power, and compact volume of the brain.
Systems built from these chips could bring the real-time capture and analysis of various types of data closer to the point of collection. They would not only gather symbolic data, which is fixed text or digital information, but also gather sub-symbolic data, which is sensory based and whose values change continuously. This raw data reflects activity in the world of every kind ranging from commerce, social, logistics, location, movement, and environmental conditions.
Please note that most of the articles posted here were announced in the week that this article was put together… this technology is clipping along…
Speaking in a new Language
Since SyNAPSE is a neuro-based chip architecture, it makes sense that we will need programming languages and/or file formats that are more “native” to it. So it is only natural to start looking into Fuzzy Markup Language (FML), since it can be derived from the coefficients developed in Artificial Neural Network (ANN) simulations (which can themselves be described using the Extensible Markup Language for Artificial Neural Networks, XMLANN); keep in mind the two are different. Alternatively, Predictive Model Markup Language (PMML) can be used to fuzzily describe a data set. It is conceivable that we may be looking at a future where we break from conventional computing and move toward a more neural approach. Again, IBM’s SyNAPSE project has started us down this road:
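The core idea — that a trained network's coefficients can be written out in a markup format — is easy to sketch. The element names below are invented for illustration and do not follow the actual XMLANN, FML, or PMML schemas:

```python
# A minimal sketch of serializing trained network coefficients to XML.
# The <network>/<layer>/<neuron> element names are made up for this
# illustration -- they are NOT the real XMLANN or PMML vocabularies.
import xml.etree.ElementTree as ET

def ann_to_xml(weights):
    """weights: list of layers, each a list of per-neuron weight vectors."""
    net = ET.Element("network")
    for i, layer in enumerate(weights):
        lyr = ET.SubElement(net, "layer", index=str(i))
        for j, neuron in enumerate(layer):
            n = ET.SubElement(lyr, "neuron", index=str(j))
            n.text = " ".join(f"{w:.4f}" for w in neuron)
    return ET.tostring(net, encoding="unicode")

# A toy 2-input network: one hidden layer of two neurons, one output neuron.
doc = ann_to_xml([[[0.5, -0.25], [0.1, 0.9]], [[1.0, -1.0]]])
print(doc)
```

The point is not the schema but the portability: once the coefficients live in markup, any runtime — conventional or neuromorphic — can load the same model.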
reusable building blocks called “corelets.” Each corelet represents a complete blueprint of a network of neurosynaptic cores that specifies a base-level function. The inner workings of a corelet are hidden so that only its external inputs and outputs are exposed to other programmers, who can concentrate on what the corelet does rather than how it does it. Corelets can be combined to produce new corelets that are larger, more complex, or have added functionality.
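That description — black-box blocks exposing only inputs and outputs, composing into bigger blocks — looks a lot like function composition. A toy model of the idea (this is my sketch, not IBM's actual corelet API):

```python
# A toy model of the "corelet" idea: each block hides its internals,
# exposes only inputs/outputs, and composes into larger blocks.
# Illustrative sketch only -- not IBM's actual corelet programming model.

class Corelet:
    def __init__(self, name, fn):
        self.name = name
        self._fn = fn          # inner workings stay hidden

    def __call__(self, *inputs):
        return self._fn(*inputs)

    def then(self, other):
        """Combine two corelets into a new, larger corelet."""
        return Corelet(f"{self.name}->{other.name}",
                       lambda *xs: other(self(*xs)))

# Two toy building blocks: crude edge detection, then thresholding.
edge = Corelet("edge_detect",
               lambda img: [abs(a - b) for a, b in zip(img, img[1:])])
thresh = Corelet("threshold",
                 lambda xs: [1 if x > 2 else 0 for x in xs])

pipeline = edge.then(thresh)        # a new corelet with added functionality
print(pipeline([0, 1, 5, 5, 0]))    # -> [0, 1, 0, 1]
```

A user of `pipeline` never sees how `edge` or `thresh` work inside — exactly the concentrate-on-what-not-how property the quote describes.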
We have numerous OSes across multiple form factors, some of which even tech people are unfamiliar with, as they are internally brewed. (What is the OS of the Sony Blu-ray player I have? Android? Unix? It sure isn’t Apple, or anything recognizable as Windows!) We have grown into an ecosystem of incompatibility and differing approaches. How bad would it be if we found ourselves the last people on Earth, with a PC and an Android phone, in a European country without a converter? Yes, I realize we could probably scavenge one up somewhere… but the point stands: in a time crunch, our hardware’s native language is as varied as the countries on this planet! And I just remembered Java (which I try to avoid); it too throws a monkey wrench into our compatibility story, as PCs try to close the security gaps that make Java work. We need to rethink how we think about OSes and competitive advantage in a capitalist society… I may even go as far as to say it might require a change in how we view the OS-browser relationship: there is no reason we can’t make a “browser shell” and run our website locally in app mode, while still giving the user access to that info (after syncing) in the cloud…
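The "browser shell" idea is small enough to sketch: the app's pages are served from the local machine, and a separate endpoint exposes the state that would be synced to the cloud. Everything here — the endpoints, the payload, the port — is invented for illustration:

```python
# A minimal sketch of the "browser shell" idea: the app is served locally,
# and local state (which a cloud sync would reconcile) is exposed alongside
# it. Endpoint names and payloads are invented for this illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LOCAL_STATE = {"notes": ["works offline"], "synced": False}

class AppShell(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/app":
            body = b"<html><body>local-first app shell</body></html>"
            ctype = "text/html"
        else:  # "/state": the data a cloud sync step would pick up
            body = json.dumps(LOCAL_STATE).encode()
            ctype = "application/json"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To serve: HTTPServer(("127.0.0.1", 8080), AppShell).serve_forever()
```

The browser points at the local shell, so the app keeps working with no network; a background sync (not shown) pushes `/state` to the cloud whenever a connection exists.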
The human problem
For the sake of argument, let’s say we have hundreds of petaflops of machine power for every human; we then run into the problem of telling those machines what to do. How do we start to make modeling and programming languages malleable and extensible enough to do the things we see the Star Trek crew do in short order? Now, we know there is some creative license here; after all, you simply cannot sequence someone’s DNA in an hour as they do in the CSI, Law & Order, etc. series. So what might we need to do on the fly (as it were)? Let’s look at a first-contact situation. Granted, if the aliens are vastly more capable, we are goners; but if we were able to analyze the data we have on their ship, force fields, and biology quickly enough, we might be able to find a weakness. How would this happen? No doubt the information we garner would be “alien” in structure and would most likely require a unique model, perhaps abstractly based on existing models. But that would be a herculean development effort just to decipher their hardware network weaknesses and the quantum-mechanical engineering of the force fields, and we haven’t even come to a consensus on how to model the genome, proteome, lipidome, phosphoproteome, receptome, glycome, kinome, and others that haven’t even been thought of yet. So how do we make a language easier for developing against unknown, complex problems? Meanwhile, some kids today have already built their first robot or engine by the time they are 6th graders!!
We are even making progress on breaking the silico-situ barrier, with many companies working on it.
We have a CPU for wrought logic, a graphics card for linear algebra, and some day soon we’ll have a neural chipset for reasoning. With the trend to offload compute cycles to larger machines (ahem, cloud and render farms), it stands to reason that some time in the not-too-distant future we will have neural-node appliances capable of deep reasoning. The development community will need to accommodate these changes… soon.
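That three-way division of labor amounts to dispatching each task to the engine suited to it. A sketch of the shape such dispatch might take — the "backends" here are plain-Python stand-ins, not real device drivers:

```python
# A sketch of heterogeneous dispatch: CPU for branchy logic, GPU for
# linear algebra, a neural appliance for pattern matching. The backends
# are plain-Python stand-ins, not real device drivers.

BACKENDS = {}

def backend(kind):
    def register(fn):
        BACKENDS[kind] = fn
        return fn
    return register

@backend("logic")
def cpu_logic(task):
    return all(task["conditions"])            # branchy, sequential work

@backend("linear_algebra")
def gpu_matmul(task):
    a, b = task["a"], task["b"]               # tiny stand-in for a GPU kernel
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

@backend("pattern")
def neural_match(task):
    return task["pattern"] in task["data"]    # stand-in for deep reasoning

def dispatch(task):
    """Route a task to whichever engine its kind calls for."""
    return BACKENDS[task["kind"]](task)

print(dispatch({"kind": "linear_algebra",
                "a": [[1, 2]], "b": [[3], [4]]}))   # -> [[11]]
```

The interesting part is the registry: when a neural-node appliance shows up, it registers new task kinds without the callers changing — which is roughly the accommodation the development community will have to make.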