At a semiconductor fabrication facility, many machines, operators, and robots are involved in producing and testing wafers and chips over a very long process. It seems to me that Google Glass could be productively used in that environment.
I am a total Google Glass newbie; I'm brainstorming whether it might be a good fit.
Is Glass capable of receiving data from a machine and displaying it as a visual overlay? Would that require software from the machine's vendor, or could Glass read the data directly, perhaps through some hardware attachment?
I assume Glass can read anything on the web, or from a database to which the wearer has login privileges, right? So then it's just a matter of applying a nice "GUI", which in this case is translucent?
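To make concrete what I'm imagining, here's a rough sketch in plain Java (not an actual Glass app): poll a hypothetical REST endpoint that a process tool or MES server might expose, and reduce the reply to a one-line status string that a Glass card or overlay could then display. The endpoint URL, the JSON field names, and the ToolStatusPoller class are all made up for illustration; the Glass-side rendering through the GDK is omitted because I haven't used it.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/**
 * Sketch only: fetch status from a hypothetical tool/MES endpoint and
 * build the short string a Glass overlay or card would show.
 */
public class ToolStatusPoller {

    // Made-up endpoint; a real fab would likely go through its MES or a SECS/GEM gateway.
    private static final String STATUS_URL = "http://mes.example.local/api/tools/ETCH-07/status";

    public static void main(String[] args) throws Exception {
        String json = fetch(STATUS_URL);

        // Naive field extraction to keep the sketch dependency-free;
        // a real app would use a proper JSON parser.
        String state = extract(json, "state");
        String lot = extract(json, "currentLot");

        // This is the line the heads-up display would render.
        System.out.println("ETCH-07  |  " + state + "  |  lot " + lot);
    }

    private static String fetch(String urlString) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }

    // Pulls "field":"value" out of a flat JSON object; illustration only.
    private static String extract(String json, String field) {
        java.util.regex.Matcher m = java.util.regex.Pattern
                .compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"")
                .matcher(json);
        return m.find() ? m.group(1) : "unknown";
    }
}
```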
Can two Glass units talk to each other directly?
Thank you so much!
P.S. My apologies if this post violates a rule of etiquette. If you think it does, my questions to you are (if relevant to your feedback):
1. Which Stack Exchange site would be correct? https://startups.stackexchange.com/ is almost entirely focused on business logistics, and this is a tech question: what is feasible?
2. "Best x" posts often seem to be highly cited and up-voted yet end up closed or deleted, which I honestly do not understand. Is there a better way to pose the question? I think that as long as a specific use case is given, this is a good forum, because it's an architectural software choice and technical input is required. The 'best', most objective answer would be one that justifies its response well on principles such as SOLID, immutability, etc.