As computer technology races forward, with storage becoming cheaper and machines becoming more powerful by the year, it’s odd that some things just don’t change. I am using a four-year-old computer with an Intel quad-core processor at 2.4 GHz, 8 GB of RAM, an Nvidia GTX 660, and a 750 watt power supply. It runs great; it’s given me zero problems. And yes, I did build this machine, but still…
What reason do I have to upgrade? I spend a little time in some shooter games, getting my DNA reduced to gloppy puddles of goo. I work in Blender and Gimp just a bit and I do some low-level audio work via Audacity. From time to time I will find the need to fire up Eric IDE and do some low-level coding. Other than that I just browse, exchange emails and do my banking and parts ordering online. Add to that watching online entertainment and you have my computer use down.
I look at the new offerings and I just cannot justify a new computer, or even new parts for that matter. But even with the insane specifications that are coming out on desktops, their sales seem to be falling behind tablets and phones. Almost everyone I talk to says they can do everything they need to do on a tablet.
Uh…really?
I suppose if you are doing email, social media and online entertainment consumption, then yeah, I’ll give you that. I have a really nice tablet, and I bought a really nice keyboard to go with it, but I put them away about 72 hours after I opened the boxes. I now use the tablet in the living room to check TV scheduling and look up who the actors are in whatever I am watching. For the work I do, I need a desktop, or at least a decent laptop.
However, if one thing is coming out of this mobile technology industry, it might be the evolution of the old, kludgy mouse-and-keyboard interface we use today. The first commercially successful typewriter blossomed on the market sometime in the 1860s. Today, the main input interface we use is simply an evolution of the same keyboard and functions we started with almost 150 years ago. What we cannot do with a keyboard, we do with a mouse.
I spent some time this week with two high school seniors who are either taking advanced computing classes or plan on working in computers and technology engineering after they graduate. The three of us kicked this around for almost two hours. Each one predicts we are heading toward the commonality of neural interaction. However, they don’t agree on how we’ll get there.
Brandy is a high school senior who actually graduated at winter break. However, she wants to audit a couple of classes before she starts college next fall. She is a nervous blur of energy and ideas and if you don’t work on keeping up with what she’s saying, you’re going to get lost. She’ll leave you behind and not look back.
Joshua is also a senior and will be seeking a four-year degree in computer science. He has applied to Carnegie Mellon for next year. While Brandy has enough energy output to light up a small city, Josh is quietly intense. At first you think he isn’t listening to you, when in fact he’s three steps ahead of you and has taken your thought or idea five different directions to glean the possibilities. Josh is a geek’s geek.
Both Josh and Brandy believe that the gee-whiz-ain’t-that-cool special effects from Minority Report are fun to watch but impractical in concept. Even if that kind of technology is being developed, it will have little solid application in the workplace. All the pinching and sliding on screens in NCIS: Los Angeles works fine in Hollywood…
Many of us have worked on a touch screen, whether on tablets or on glass-surfaced horizontal and vertical keyboards. I suppose with a lot of practice we might gain a bit of proficiency on horizontal touch keyboards, but traditional typing skills will become less prevalent as swipe and auto-correct become the standard. Josh sees this as one possibility, but Brandy sees something else.
Sliding, zooming, three-dimensional computer graphics will not be the direction of computing in the future. While Brandy and Josh believe different things will get us there, both believe that a neural connection to the computer is inevitable.
“Oh, you think not?” asks Brandy.
She smiles and points to an article she keeps bookmarked on her Lenovo. “Think of Google Glass as being the first shot fired in getting us wired,” she told me.
I’m having trouble finding arguments for her theory.
Even though Josh believes in the same outcome, he thinks that voice control is how it will finally evolve. Remember the Star Trek movie where they had to bring a whale back to their century? I think of Scotty trying to communicate with the computer by speaking into the mouse. That was actually a brilliant piece of humor highlighting the main point: the voice will replace the mouse. You won’t “click” something to open it; you will order the computer to make the display.
So what makes me an expert on this? Nothing. However, I know for a fact that several Ph.D.s read this site, and some of them have their degrees in computer science and engineering. On behalf of two brilliant kids on the doorstep of their lives and professional careers, let me ask one thing. Tell them what their limited experience and education is missing here. Your serious input will not be wasted…
Ken Starks is the founder of the Helios Project and Reglue, which for 20 years provided refurbished older computers running Linux to disadvantaged school kids, as well as providing digital help for senior citizens, in the Austin, Texas area. He was a columnist for FOSS Force from 2013-2016, and remains part of our family. Follow him on Twitter: @Reglue
The only thing I can adequately do on a tablet is watch Youtube videos in bed.
Our desktops sound amazingly alike in both form and function, as well as in our use of them. And like you, I can find no reason to upgrade. Same for my four-year-old Lenovo ThinkPad (Intel Core 2 Duo with 4 GB of DDR2). For my needs, both are more than sufficient.
As far as the future is concerned, I agree with both Joshua and Brandy. As an old geek who never stopped being one, it’s not a matter of either/or. Voice interaction and neural interaction may come one after another or work side by side. I get the idea that it will be both, depending on the platform or device. I wouldn’t count out current interfacing technology either. There always has to be a backup, and that’s what the older technology is often used for.
What are they missing? I don’t claim to be an expert; I’m just old. Here’s one vote that they’re probably not missing much of anything that’s available today. They sound better informed than a lot of people. What they’re missing is watching stuff happen, and that will take the next twenty years or so.
My guess is that neural interaction systems are a real possibility. Seems like I recall an experiment in controlling a prosthetic arm with some sort of connection to the person’s nervous system. The Google Glass and voice control are other options. What we’ll probably know in twenty years or so is whether any of them is the most reasonable/desirable way to accomplish some tasks. That will probably depend on the tasks, the conditions, and the people doing the controlling; horses for courses.

A couple of years ago, I had my arm in a sling after shoulder surgery. Not having a secretary, I developed just enough skill with a microphone and speech recognition software that it worked better than typing with one hand. Now that I have two hands usable again, I’m back to the keyboard because it’s less work – more efficient, for a bigger word – for the stuff I do. So encourage them to look at everything in the situation to see what fits best.
And be sure you’re available to attend their graduations.
I think ‘Uncle Geek’ is onto something. Regardless of ‘transport layer’, efficiency of information transmission is paramount. As we type, we consider what we wish to type, monitor what we have typed, and correct errors, if we find them, as we go along.
Neural interfaces will have to deal with the fact that we form ideas in an inchoate manner, and substantial brain function is used to refine them and then express them through whatever medium. We are not born with the ability to speak, type or play the piano, but we are able to learn. It will be most interesting to see what it takes not to emit reams of gibberish through a new interface. Think of a seven-year-old playing the violin.
The next question is whether words are the best expression of concepts. Imagine what it would be like to have to program in something akin to Chinese logograms. Isn’t that the goal of software reuse?
There is a way to go yet, and I’m sure I won’t be here to see it.
I buy the voice commands only for certain things, related to tasks they truly fit in well with.
I am writing code a vast majority of the time, and I find it completely insane to take my hands off the keyboard and use a mouse. Thus I have switched to using emacs, since it most resembles my old favorite editor, WordStar (yes, I am old), and it allows me to keep to the keyboard for the most part.
I finally got sick of all the editors I tried and, as painful as it was, I made myself adapt to emacs. Now that I am several years into it, I would never go back.
But I digress, my point is I want to use the keyboard since it is the most efficient way of interacting.
Which brings me to my main point: I find it hard to keep up with my thought process using the keyboard. Voice commands would be even slower and clumsier, as are touch and motions (though they would be a good workout).
So I think a true neural interface is the future. We will get to a point where we interlink our thoughts. At that point we will achieve efficiencies and speed only imagined currently.
In closing, I also think this will only be for true computer workers, not for the common individual on the street.
People downplaying tablets are downplaying what? The (still) toy-like underlying OS? The meager resources? Or the kind of (capacitive-only) touch screens in use? Before tablets as we know them now, there was only what is currently called the touch-screen-equipped laptop. I own one, with a capacitive screen plus a resistive layer, plus a stylus, plus (finally) good handwriting recognition software. I can’t tell you how good this combination has been in my life. All my work is currently done with it: mainly spreadsheets, querying databases, some coding, presentations, browsing the net, accessing cloud mail, producing long documents, and reading and annotating books. The current tablet form factor still has a way to go, but it will arrive at this kind of convenience.
I find that using a tablet is more for entertainment or downtime. I COULD use one for some of the things I do, but I prefer my desktop, which is old (made by e-Machines, which is no longer around!). It’s got a 160GB HD, 1GB of RAM, and an Intel Atom 230 processor, and while it won’t win any awards for performance, at least compared to today’s standards, it more than does the job running openSUSE 64-bit.

As for the future and the possibilities: I have no problems with touch-enabled devices, or even touch becoming the norm (think: having to touch a pad for some form of fingerprint recognition in order to withdraw money from an ATM), but voice control is a bit of a way off yet. I have seen doctors who have used voice recognition software, and it’s been hit-and-miss; sometimes their words are mistaken for other words, and other times it’s spot-on perfect. So until THAT vehicle of data transfer and manipulation is perfected to 100% functionality, I don’t see it becoming the trend some think it will be.

And finally, the neural PC connection? This is a two-edged sword that has all kinds of positive and negative possibilities. Imagine how quickly, say, 911 could respond to a child’s emergency without knowing the address; they could just “follow” her neural path and get to her in record time. Or the surgeon who is “directing” a surgery from halfway around the world, with just his thoughts guiding the hands of someone else. But that’s just the positive side; the negative side would be more akin to some of those dark-future movies we see on TV.
It might be that because we are all “connected” in a neural way, hackers (sorry, I mean “crackers”) could get into places in your head you don’t want ANYONE to be. Instead of stealing money, they take things like memories, or events that changed our lives, maybe gave us a phobia, and then play on those thoughts; or they demand a “ransom” for the embarrassing memory you have of being drunk at the last office Christmas party. See, that’s the problem with technology: it can be a blessing OR a curse, depending on who’s using it and for what. So, the neural thing? I’m gonna say it might make its way to the rich-and-famous crowd, but the common person won’t have a need for it, or won’t want it because of the risks involved. Just my two cents.
As one who is not a native English speaker (though English has been my working language for over 30 years), I can tell you voice recognition has a loooong way to go. I shiver inside whenever I get some software on the receiving end of a phone call; I just _know_ I will not be understood. So it usually ends up like this:
me: I want to report an internet outage
remote: Sorry, I didn’t get that. Tell me why you have called here today – you can say, for example, “I want to pay my bill”
me: I want to talk to a representative
## now, repeat the above about 5 times ##
remote: Hmm – I still didn’t get that. Maybe you want our billing department? One moment and I shall transfer you
remote-2: This is the billing department – please tell me why you are calling here today. You can say, for example, “I want to pay my bill”
me: ma’am (always polite I am!!), I would like to talk to a representative – a _live_ person.
remote-2: Sorry, I did not get that ….
After more of this, I give up and send them an email instead (from work, the next day). All I really wanted was to report that my internet connection was out of whack!!
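The loop above is easy to reproduce on paper: a typical phone menu only matches its own scripted phrases, so anything off-menu lands in the “Sorry, I didn’t get that” branch no matter how clearly it is spoken. Here is a minimal sketch of that behavior; the menu phrases, departments, and retry limit are made up for illustration:

```python
# Minimal sketch of a rigid phone-menu matcher: it recognizes only
# its own scripted phrases, so off-menu requests always fail.
SCRIPTED_INTENTS = {
    "pay my bill": "billing",
    "check my balance": "billing",
    "upgrade my plan": "sales",
}
MAX_RETRIES = 5  # after this many misses, give up and transfer


def route_call(utterances):
    """Return the department for the first recognized phrase,
    or 'representative' once the caller exhausts the retries."""
    misses = 0
    for heard in utterances:
        for phrase, department in SCRIPTED_INTENTS.items():
            if phrase in heard.lower():
                return department
        misses += 1  # "Sorry, I didn't get that."
        if misses >= MAX_RETRIES:
            break
    return "representative"


# "I want to report an internet outage" is not on the menu, so the
# caller cycles through every retry before reaching a person.
print(route_call(["I want to report an internet outage"] * 6))
```

Real systems add statistical intent classification on top, but the commenter’s experience suggests that accented speech still pushes callers down this same retry path.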
I never speak to speech recognition software; sooner or later you get transferred to a real person anyway.