In recent months, I’ve been fascinated by the potential for machines and humans to collaborate in thought. A buzzword often thrown around is “augmentation”. But what does that really mean? Is it robots taking over the menial, day-to-day tasks humans have done, freeing up time for elevated thinking? And how should work itself be viewed, given that people have different preferences and skillsets?
Looking at the evolution of technology over the last 3.5 million years, tools have mostly been passive. A chisel, for instance, cuts only where the artist points it, in a fairly manual process. Fast-forward to the present, however, and that passive model has given way to a generative one. With millions of data points at hand and computers’ power to synthesize information, the human contribution has shrunk to little more than the abstraction of a goal.
With that said, how can we humans work with these highly advanced technologies? Researchers have repeatedly pointed out that machines cannot replicate commonsense behaviors a child performs easily (perhaps because there’s no set logic that goes from a to b). Humans, on the other hand, lack precision, which leads to errors, sometimes fatal ones. On the physical side, humans can use language to instruct machines to perform tasks.
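This division of labor, human intent expressed in loose language and machine-level precision in execution, can be sketched roughly in code. Everything here is invented for illustration: the command vocabulary, the action names, and the millimeter formatting are assumptions, not any real machine interface.

```python
# A minimal sketch: a human issues a plain-language command, and the
# machine maps it to a precise, repeatable action. All command names
# and actions below are hypothetical, chosen only to illustrate the idea.

ACTIONS = {
    "cut": lambda depth: f"cutting to {depth:.3f} mm",
    "drill": lambda depth: f"drilling to {depth:.3f} mm",
}

def instruct(command: str) -> str:
    """Parse a command like 'cut 2.5' and dispatch it with machine precision."""
    verb, amount = command.split()
    if verb not in ACTIONS:
        raise ValueError(f"unknown instruction: {verb}")
    return ACTIONS[verb](float(amount))

print(instruct("cut 2.5"))  # -> cutting to 2.500 mm
```

The human supplies the goal in rough terms (“cut 2.5”); the machine supplies the exactness (three decimal places, every time) that humans tend to get wrong.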
Another potential type of collaboration is the creation of a digital nervous system. A central question is whether people actually know what they want, and if so, whether they can envision the specs and features in their minds. To sidestep this, we could embed sensors in the everyday items people use. These sensors can track minuscule patterns of use and details too obscure for the human eye. With these data, a plug-and-play model could be implemented to experiment with various combinations.
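The plug-and-play idea can be sketched as a small search over feature combinations, ranked by sensor-logged usage. The object, the feature names, and the usage counts below are all made-up stand-ins; a real system would log these from actual embedded sensors.

```python
import itertools

# Hypothetical sensor log: how often each feature of an everyday object
# (say, a desk lamp) was actually used. These names and numbers are
# illustrative assumptions, not real data.
usage_counts = {
    "dimmer": 120,
    "color_shift": 15,
    "motion_sensor": 80,
    "timer": 45,
}

def score(combo):
    """Score a feature combination by total observed usage, a stand-in
    for whatever metric a real designer would optimize."""
    return sum(usage_counts[f] for f in combo)

def best_combination(k):
    """Plug-and-play experiment: try every k-feature combination and
    keep the one the usage data favors most."""
    return max(itertools.combinations(sorted(usage_counts), k), key=score)

print(best_combination(2))  # -> ('dimmer', 'motion_sensor')
```

The point is not the toy scoring function but the loop: sensors turn vague preferences into data, and the machine can then enumerate combinations no designer would bother to test by hand.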
Author: Ruby Zhang