
Tuesday, April 27, 2010

Explained: Thermoelectricity

Thermoelectricity is a two-way process. It can refer either to the way a temperature difference between one side of a material and the other can produce electricity, or to the reverse: the way applying an electric current through a material can create a temperature difference between its two sides, which can be used to heat or cool things without combustion or moving parts. It is a field in which MIT has been doing pioneering work for decades.

The first part of the thermoelectric effect, the conversion of heat to electricity, was discovered in 1821 by the Estonian-born physicist Thomas Seebeck and explored in more detail by the French physicist Jean Peltier; it is sometimes referred to as the Peltier-Seebeck effect.

The reverse phenomenon, where heating or cooling can be produced by running an electric current through a material, was discovered in 1851 by William Thomson, also known as Lord Kelvin (for whom the absolute Kelvin temperature scale is named), and is called the Thomson effect. The effect is caused by charge carriers within the material (either electrons, or places where an electron is missing, known as “holes”) diffusing from the hotter side to the cooler side, similarly to the way gas expands when it is heated. The thermoelectric property of a material is measured in volts per Kelvin.
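The "volts per kelvin" figure is the material's Seebeck coefficient, and the open-circuit voltage it predicts scales linearly with the temperature difference across the material. A minimal sketch, using an assumed, order-of-magnitude coefficient rather than a value from the article:

```python
# Open-circuit voltage of a thermoelectric material: V = S * dT.
# The Seebeck coefficient below is an assumed, typical order-of-magnitude
# value for a good semiconductor thermoelectric, not a figure from the article.

seebeck_coefficient = 200e-6   # S, in volts per kelvin (assumed)
t_hot = 450.0                  # hot-side temperature, K
t_cold = 300.0                 # cold-side temperature, K

delta_t = t_hot - t_cold
open_circuit_voltage = seebeck_coefficient * delta_t

print(f"dT = {delta_t:.0f} K -> V = {open_circuit_voltage * 1e3:.1f} mV")
```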

These effects, which are generally quite inefficient, began to be developed into practical products, such as power generators for spacecraft, in the 1960s by researchers including Paul Gray, the electrical engineering professor who would later become MIT’s president. This work has been carried forward since the 1990s by Institute Professor Mildred Dresselhaus, Theodore Harman and his co-workers at MIT’s Lincoln Laboratory, and other MIT researchers, who worked on developing new materials based on the semiconductors used in the computer and electronics industries to convert temperature differences more efficiently into electricity, and to use the reverse effect to produce heating and cooling devices with no moving parts.

The fundamental problem in creating efficient thermoelectric materials is that they need to be good at conducting electricity, but not at conducting thermal energy. That way, one side can get hot while the other gets cold, instead of the material quickly equalizing the temperature. But in most materials, electrical and thermal conductivity go hand in hand. New nano-engineered materials provide a way around that, making it possible to fine-tune the thermal and electrical properties of the material. Some MIT groups, including ones led by professors Gang Chen and Michael Strano, have been developing such materials.
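The article does not give a formula, but the trade-off it describes is conventionally summarized by the dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa, which rewards a high Seebeck coefficient S and electrical conductivity sigma while penalizing a high thermal conductivity kappa. A brief sketch with assumed illustrative values:

```python
# Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
# All numbers are assumed, order-of-magnitude values for illustration only;
# materials with ZT above roughly 1 are generally considered good.

S = 200e-6      # Seebeck coefficient, V/K
sigma = 1.0e5   # electrical conductivity, S/m
kappa = 1.5     # thermal conductivity, W/(m*K)
T = 300.0       # absolute temperature, K

ZT = S**2 * sigma * T / kappa
print(f"ZT = {ZT:.2f}")   # about 0.8 with these assumed values
```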

Such systems are produced for the heating and cooling of a variety of things, such as car seats, food and beverage carriers, and computer chips. Also under development by researchers including MIT’s Anantha Chandrakasan are systems that use the Peltier-Seebeck effect to harvest waste heat, for everything from electronic devices to cars and powerplants, in order to produce usable electricity and thus improve overall efficiency.


"

Sunday, April 25, 2010

R-2 Bound for the International Space Station

R-2 Bound for the International Space Station: "The robonaut is in training to become the first more or less human-shaped mechanical astronaut to travel to space"

Next September, the strangest of astronauts will join the permanent crew of the International Space Station (ISS): a humanoid robot capable of using the same tools as humans and, above all, of helping them carry out very difficult and dangerous tasks.

It is called Robonaut 2, or R-2, and for its first months it will have to remain confined to the station's U.S. Destiny module while it passes the required endurance tests under space conditions, but it is expected to later move throughout the orbital outpost.

The idea is that robots of this kind will eventually even perform spacewalks, but the first prototype is not equipped with the protection systems needed to operate in the extreme temperatures of open space, NASA reported.

For now, R-2 is only half an astronaut, or half a robot: a head, a torso, two arms and two hands, with a total weight of about 150 kilograms.

The plan is to carry it into space aboard the space shuttle Discovery.

The robot is a technological development by NASA and General Motors that can be used not only in the space environment but also on Earth, for a wide range of industrial tasks.

"It is an example of a future generation of space and terrestrial robots, not meant to replace humans but to accompany them and perform key support work," said John Olson, director of the Exploration Systems Integration Department.

"The combined potential of humans and robots is a perfect demonstration that two plus two can add up to much more than four," he added.

The immediate plan for R-2 is to undergo tests inside the ISS under microgravity and radiation to verify how it performs in space, NASA explained.

These operations will make it possible to test the robot working side by side with the astronauts.

R-2 is undergoing the required training to become the first more or less human-shaped mechanical astronaut on the ISS.

Friday, April 23, 2010

Tec de Monterrey Student to Collaborate with NASA

Tec de Monterrey Student to Collaborate with NASA
El Universal
Thursday, April 22, 2010

He will work on five engineering projects, including several robotic vehicles for exploring hard-to-reach places

Together with the U.S. space agency, Tecnológico de Monterrey student David Alonso Quiroz Rochell will take part in the development of robotic vehicles for the Greenland Robotic Tractor project.

The fourth-semester Mechatronics Engineering student at the Estado de México campus was invited by NASA's Goddard Space Flight Center to join the agency's Greenland Robotic Tractor project.

David was in contact with Michael Comberiate, a NASA Senior System Manager, and with Matías Soto of the University of Texas, who, after reviewing his work, invited him to take part alongside a group of 26 other engineering students from different countries.

The program involves the development of five engineering projects, including several robotic vehicles, one of them designed to carry a ground-penetrating radar across the island of Greenland, unmanned and operated remotely through satellite links.

"Our participation as university students in the space agency also includes the development of software for terrain exploration and mapping through a planetary network of robots," the student added.

"I think that for everyone who was selected, taking part in a NASA project means seeing your effort come to fruition. Participating and working constantly is what leads us, as students, to realize our dreams and achieve them," he said.

Background

David Alonso Quiroz Rochell won the bronze medal at I-Sweeep 2009 for creating the RV 800 robot, which could be used for exploration in places that are hard for humans to reach, such as searching for people after earthquakes or building collapses.

The Instituto Mexiquense de la Juventud (IMEJ) also awarded him the 2009 State Youth Prize in the Technological Innovation Track category for his research and work in developing robotic prototypes.

Robot Sumo

Robot Sumo

Robot Sumo from Heimo Aga on Vimeo.


During the RobotChallenge 2010 in Vienna/Austria, the First European Robot Sumo Championship was held.

Two robots compete, each trying to push its opponent off the ring. There are different classes: Standard (3 kg), Mini (500 g), Micro (100 g), Nano (25 g) and Humanoid Sumo.
Filmed entirely hand-held with EOS 5D II and 7D and Canon L lenses, post production in Final Cut Pro 7, color grading with Magic Bullet Mojo.

Cast: Heimo Aga

"

Robots: 50 Years of Robotics (Part 1)

Robots: 50 Years of Robotics (Part 1)



Today we celebrate the 50th episode of Robots! For the occasion, the Robots podcast talked to 12 researchers about the most remarkable developments in robotics over the last 50 years and their predictions for the next half-century. This '50th Special' is split into two parts, with the second half airing in two weeks. In this first part, Rolf Pfeifer from the University of Zurich gives a general overview of developments in robotics, Mark Tilden from WowWee focuses on robot toys, Hiroshi Ishiguro on androids, Oscar Schofield on underwater robots, Steve Potter on brain-machine interfaces and Chris Rogers on education robots. Also coinciding with this 50th episode, the Robots website has gotten a major overhaul: apart from an updated layout, you can now easily browse episodes by topic, interviewee or tag, and you can interact with other listeners by leaving comments below episodes and on the new Robots forum. To do both, just log in once in the top bar of the website. Thanks to all our faithful listeners!

"

Domestic robot to help sick elderly live independently longer

Domestic robot to help sick elderly live independently longer

The recently started research project, named KSERA (Knowledgeable Service Robots for Aging), focuses in particular on COPD patients, people with chronic obstructive pulmonary disease. By 2030 this disease is expected to be the third leading cause of death worldwide, according to World Health Organization projections. The disease especially affects older people.

In three years several demonstration houses should be finished. They will be equipped with a robot and the domestic systems of a 'smart home' -- think of self-opening curtains. The central role is played by the 'domestic robot'. It follows patients through the house, learns their habits, watches them closely, gives sound advice, turns the air conditioning up or down a bit, and warns a doctor when the patient is not doing well. In addition, the robot also provides entertainment in the form of the Internet and videos. "We want to show what is possible in this area," says project coordinator Dr. Lydia Meesters about the goal of the project.

The TU/e researcher, from the Department of Industrial Engineering and Innovation Sciences, emphasizes that this new type of intelligent care house will not be a cold environment. "It should be as homely as possible. In an ideal situation the only technology you see will be the robot. It will be the contact for all the domestic systems. Otherwise the place will just look very homely."

Ethical issues will also be given special attention. The robot must give good advice to patients, but it should not be a policeman, Meesters explains. What to do, for example, when a COPD patient lights a cigarette? And what may the robot system pass on to 'the central operator', and what not? Meesters: "We need to define clear limits, for the robot will continuously measure and see very private data."

The project has a total budget of almost 4 million euros, 2.9 million of which will be furnished by the EU. Other parties involved are the Italian research center Institute Superiore Mario Boella, Vienna University of Technology, Hamburg University, the Italian ICT company Consoft, the Central European Institute of Technology in Vienna and the Israeli care provider Maccabi Healthcare Services.

Light-based localisation for robotic systems

Light-based localisation for robotic systems: "Getting robotic systems to accurately detect both moving and static objects remains an obstacle to building more autonomous robots and more advanced surveillance systems. Innovative technology that uses light beams for localization and mapping may offer a solution."

The technology advances the current state of the art of Light Detection and Ranging (LIDAR), the optical equivalent of radar in which reflected beams of scattered light are used to determine the location of an object. Whereas most LIDAR systems use a one-step process to detect objects by scanning an area and measuring the time delay between transmission of a pulse and detection of the reflected signal, researchers working in the EU-funded IRPS project added a prior step.
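The pulse-timing step described above reduces to a simple time-of-flight calculation: range is half the round-trip delay multiplied by the speed of light. The snippet below is a generic illustration of that relationship, not code from the IRPS project.

```python
# Generic LIDAR time-of-flight range calculation (illustration only).

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_delay(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of about 66.7 nanoseconds corresponds to a target roughly 10 m away.
print(f"{range_from_delay(66.7e-9):.2f} m")
```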

They use LIDAR to first build a 3D map of the area, enabling their system to pinpoint the location of not just static objects but also moving ones -- be it a human, an open window or a leaking pipe -- to within a few millimetres. The researchers, from four EU countries plus Israel and Canada, have called the technology 3D LIMS (3D LIDAR Imaging and Measurement System) and foresee a broad range of applications for it, from navigating autonomous vehicles around airports to monitoring industrial equipment and enhancing security surveillance.

"This two-step LIDAR process, involving first calibration and then real-time navigation, is the key innovation. It allows the system to accurately and rapidly detect changes in the environment," explains Maurice Heitz, the manager of the IRPS project and a researcher at French technology firm CS Communication & Systèmes.

The technology not only detects objects with greater accuracy, but unlike camera-based robotic vision systems it is not affected by shadows, rain or fog, and provides angular and distance information for each pixel, making it suitable for use in virtually any environment.

Robotic airport buggies

To highlight the potential of 3D LIMS, the IRPS team built a prototype application in which the technology was used to navigate buggy-like autonomous vehicles that might one day transport passengers or luggage around an airport.

Showcased at Faro Airport in Portugal last December, the robotic porter application involved first building up a 3D image of the airport environment so the system would know the location of static features such as walls, columns, doors and staircases. The buggies then use onboard LIDAR to accurately calculate their position and detect obstacles as they move around the airport.

"Our vision is that one day people, perhaps elderly or with a disability, will go to the airport and by speaking to a porter control centre on their mobile phone or through a web interface on their PDA would be able to order a vehicle to take them to their boarding gate. The vehicle would transport them autonomously, weaving its way between moving objects such as passengers and piles of luggage," Heitz says.

The IRPS project manager notes that there is real demand for such a system by airport operators, who are finding it increasingly hard to meet the transport needs of passengers and their luggage because of the large size of modern airports. However, he says it will probably be many years before robotic buggies start buzzing around airports autonomously due to a combination of safety concerns and the need for further technological advances.

"Running a 3D LIMS system requires a lot of computer processing power and a large investment," he notes.

Other applications are closer to market. In the field of security surveillance, 3D LIMS could improve upon current techniques for detecting intruders or spotting changes inside a building.

"The system compares the current acquisition [of reflected light] to its reference acquisition, allowing it to detect any change in the environment," Heitz says.

In the case of industrial monitoring, for example, a 3D LIMS system operating in a power plant would be able to instantly and accurately detect something as small as a leaking pipe.

Though the project partners say commercial applications for their system are still a few years away, they are continuing to work on the technology and are seeking support for further research and development.

The IRPS project received funding from the ICT strand of the EU's Sixth Framework Programme for research.

Personal Mobility Robot Operated by Wii Controller

Personal Mobility Robot Operated by Wii Controller: "Japanese researchers demonstrate a robotic wheelchair operated with Wii game controller"

Boston Dynamics Biped Robot Petman Achieves 4.4 mph

Boston Dynamics Biped Robot Petman Achieves 4.4 mph: "Boston Dynamics has released a new video showing its Petman biped robot achieving 4.4 mph on a treadmill. So now the question is, can it jog?"

Thursday, April 22, 2010

Mexico Shines at the Soccer World Cup... of Robots - Deportes - CNNMéxico.com


Mexico Shines at the Soccer World Cup... of Robots - Deportes - CNNMéxico.com

ATLANTA (CNN) — Just a few months ahead of the soccer World Cup, Mexico had its first world-stage tryout, and it could be said the team ran like a machine. Literally, because it was at an international robot competition that the Mexican representatives shone.

The Breakaway tournament, a curious mix of soccer and NASCAR, was part of a world robotics competition held in Atlanta last week and organized by FIRST, a non-profit organization that promotes science and technology among young people.

A total of 530 robots built by more than 10,000 students from 30 countries took part, among them students from the Universidad Panamericana (UP) high school and from the Tecnológico de Monterrey campus Toluca.

Panteras, the UP team, has competed in these events for four years and has won several awards, including the Rookie of the Year award in 2007.

"We want to change the paradigm in Mexico; we want more people to get involved and be able to participate," says Andrés Carrasco, one of the Panteras members in charge of finding funding for the project.

Registration alone costs 6,000 dollars, and preparing and building the robot can require an investment of some 25,000 dollars. "We have to source the materials ourselves. Companies can help us with money and in kind: GM gives us the motors, for example, and 3M the consumable materials," explains Sebastián Montes, another team member.

Building these robots takes knowledge of mechanics, electronics and programming, although you don't have to be a genius to take part. "What you need is the will," Sebastián says. "You have to dive in, practice, gain experience and learn." The students also have the help of mentors, professionals from companies who work alongside them.

The other Mexican team that competed was Tecbot, from the TEC high school in Toluca, which made its debut this year with great fanfare. Its delegation, with more than 40 students and 14 mentors, was one of the largest... and also one of the "loudest," with its eye-catching outfits and constant chants.

"It's a very good international experience. It interested me because it mixes robotics with business," says Samantha Carmona, one of the people in charge of Tecbot's marketing and publicity. Carmona compared the way the team works to a small business in which, in the end, "we became like a family."

The members of Tecbot came away so enthusiastic about the project that they are already planning to organize a Latin American regional championship.

Although the Mexican teams did not qualify for the Breakaway finals, they received several honors at the awards ceremony. Panteras won the Team Spirit award for its "enthusiasm and spirit through exceptional partnership and teamwork," while Tecbot took the Rookie Inspiration award for its "outstanding effort in community work and student recruitment."

On receiving their awards, the Mexican delegations celebrated with an outburst of joy that spread to the stands. Mexico showed that, even if it doesn't always win, its fans are among the best in the world... in robots, too.

Monday, April 19, 2010

iPad Robot Controller

iPad Robot Controller from Bogo Giertler on Vimeo.

Robotic therapy helps stroke patients regain function

Stroke patients who received robot-assisted therapy were able to regain some ability to use their arms, even if the stroke had occurred years earlier, according to a study published April 16 in the online issue of The New England Journal of Medicine.

The study, which examined the effectiveness of a class of robotic devices developed at MIT, found that in chronic stroke survivors, robot-assisted therapy led to modest improvements in upper-body motor function and quality of life six months after active therapy was completed; these improvements were significant when compared with a group of stroke patients who received the traditional treatment. Moreover, the robotic therapy — which involves a more intense regimen of activity than traditional stroke therapy — did not increase total health-care costs per stroke patient, and could make intensive therapy available to more people, say the researchers who led the study.

The study results also challenge the notion that physical therapy only benefits stroke patients within the first six months after the stroke, says Albert Lo, a neurologist at the Providence VA Medical Center who led the study.

“There are nearly six million stroke patients in the U.S. with chronic deficits,” says Lo. “We’ve shown that with the right therapy, they can see improvements in movement, everyday function and quality of life.”

Mind and body

The study, conducted at four Veterans Affairs (VA) hospitals, found that patients who used the MIT robotic devices for 12 weeks experienced a small but significant gain in arm function. Another group of patients who received high-intensity therapy from a therapist, which matched the number and intensity of the robot movements, showed similar improvements.

Hermano Igo Krebs, a principal research scientist in MIT’s Department of Mechanical Engineering who developed the MIT-Manus robot, has been working on robotic therapy since his graduate student years at MIT almost 20 years ago. In his early studies, he and his colleague, Professor Neville Hogan, found that it’s important for stroke patients to make a conscious effort during physical therapy.

The MIT-Manus system, which Krebs started developing more than 20 years ago, is based on that principle. The patient grasps a robotic joystick that guides the patient’s arm, wrist or hand as he or she tries to make specific movements, helping the brain form new connections that will eventually help the patient relearn to move the limb on his or her own.

In the New England Journal of Medicine study, researchers at VA hospitals in Baltimore, Seattle, West Haven, Conn., and Gainesville, Fla., compared the MIT-Manus system to a high-intensity rehab program delivered by a human therapist, which was designed specifically for this study.

Each group included about 50 patients, who were also compared with a group of 28 stroke patients who received so-called “usual care” — general health care and three hours per week of traditional physical therapy for their stroke-damaged limb.

Patients using the MIT-Manus system grasp a joystick-like handle connected to a computer monitor that displays tasks similar to those in simple video games. In a typical task, the subject attempts to move the robot handle toward a moving or stationary target shown on the computer monitor. If the person starts moving in the wrong direction or does not move, the robotic arm gently nudges his or her arm in the right direction.

“The ability to be interactive is critical,” says Krebs. “We program the robot to only give assistance as needed.”
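The article does not publish the MIT-Manus control law, but "assistance as needed" is often realized as a spring-like restoring force that only engages once the hand strays beyond a dead band around the target. A hypothetical minimal sketch of that idea (not the actual MIT-Manus controller):

```python
import numpy as np

# Hypothetical assist-as-needed force law (illustration only): no force while
# the hand stays within a dead band around the target, and a proportional
# pull back toward the target beyond it.

def assist_force(hand_pos, target_pos, dead_band=0.02, stiffness=50.0):
    """Return a 2-D assistance force vector in newtons."""
    error = np.asarray(target_pos, dtype=float) - np.asarray(hand_pos, dtype=float)
    distance = np.linalg.norm(error)
    if distance <= dead_band:            # close enough: give no assistance
        return np.zeros(2)
    direction = error / distance
    return stiffness * (distance - dead_band) * direction

print(assist_force(hand_pos=(0.10, 0.00), target_pos=(0.00, 0.00)))
```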

Patients in the study received therapy three times a week for 12 weeks, and during each hour-long session, they made hundreds of repetitive motions with their arms. At the end of 12 weeks, tests revealed a small but statistically significant improvement in quality of life, and a modest improvement in arm function. When the subjects were tested again at 36 weeks, both the robot therapy group and intensive human-assisted therapy group showed improvement in arm movement and strength, everyday function and quality of life compared to the usual-care group.

The high-intensity, interactive physical therapy offered to patients who did not receive robot-assisted therapy was developed specifically for comparison purposes for this study, and is not generally available. Furthermore, the physical demands on the therapist make it unlikely that it will ever be widely used.

“If you can get a therapist to work at that pace with a patient, certainly the benefits are roughly the same, and we showed this benefit when we designed this intensive comparison group, but it’s not practical,” says Krebs. “Robotics and automation technology are ideal for this kind of highly repetitive tasks. We’re using robotic technology to create a tool for the therapist to afford this kind of high-intensity therapy while maintaining the therapist supervisory role, deciding what is right for a particular patient.”

This particular study was designed to test the effects of only conventional therapy versus only robotic therapy, but Bruce Dobkin, a neurologist at the UCLA Stroke Center, says the best approach may end up being a combination of those two strategies. “If robotic therapy is going to be helpful, you need to find a more integrated way to use the robotic device,” he says.

The value of robots

Another way to make robotic therapy more practical could be to lower the costs, says Dobkin, who was part of the data-safety monitoring committee that supervised the research. In the VA study, the robotic therapy cost an average of $9,977 per patient, and the intensive nonrobotic therapy cost $8,269 per patient. However, overall healthcare per-patient costs, including costs for those who received only usual care, were not very different over the total 36-week study period — $15,562 per patient for robot-assisted therapy, $15,605 for intensive nonrobotic therapy, and $14,343 for usual care.

Krebs believes that once the robotic devices can be mass-produced, which he expects will occur within the next 10 years, the costs will drop. “What you have to do is make more of them, and that will drive down costs to a point where people can have them in their homes,” he says.

Krebs is also encouraged by the fact that many of the patients in the study had either suffered multiple strokes or had experienced their strokes many years earlier, yet still showed improvement. “We put the bar very high,” he says. “If we worked with patients sooner after their first stroke, we may get even better results.” He is now working with doctors to plan such a study.

Krebs and his collaborators are also studying whether the MIT-Manus could help patients with cerebral palsy, multiple sclerosis and spinal cord injury.


"

Friday, April 16, 2010

A Bayesian Exploration-Exploitation Approach for Optimal Online Sensing and Planning with a Visually Guided Mobile Robot

A Bayesian Exploration-Exploitation Approach for Optimal Online Sensing and Planning with a Visually Guided Mobile Robot from Ruben Martinez-Cantin on Vimeo.



We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn the most about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model the finite-horizon planning problem. We replan as the robot progresses throughout the environment. The POMDP is high-dimensional, continuous, non-differentiable, nonlinear, non-Gaussian and must be solved in real time. Most existing techniques for stochastic planning and reinforcement learning are therefore inapplicable. To solve this extremely complex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncertainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually guided mobile robot. The solution proposed here is also applicable to other closely related domains, including active vision, sequential experimental design, dynamic sensing and calibration with mobile sensors.
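The exploration-exploitation trade-off mentioned in the abstract is typically encoded in an acquisition function over candidate policies. The sketch below uses a generic upper-confidence-bound rule as an illustration of the idea; it is not the authors' implementation, and the numbers are made up.

```python
import numpy as np

# Generic upper-confidence-bound (UCB) acquisition over candidate policies.
# In practice the means and standard deviations would come from a surrogate
# model (e.g. a Gaussian process) of each candidate's utility.

def ucb_choice(mean: np.ndarray, std: np.ndarray, kappa: float = 2.0) -> int:
    """Pick the candidate maximizing exploitation (mean) plus exploration (kappa * std)."""
    return int(np.argmax(mean + kappa * std))

mean = np.array([0.6, 0.5, 0.4])    # current utility estimates
std = np.array([0.05, 0.30, 0.10])  # uncertainty of those estimates

print(ucb_choice(mean, std))  # index 1: a lower mean but high uncertainty wins
```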

Visual Map-less Navigation based on Homographies

Visual Map-less Navigation based on Homographies

Comparison between two different turns (the second and the tenth) of the square with visual correction. The errors in the trajectory are so small that they cannot be appreciated in the movie. The image seems blurred because the two videos are superimposed. Check users.isr.ist.utl.pt/~rmcantin/pmwiki/pmwiki.php/Research/Mapless for details.

Cast: Ruben Martinez-Cantin

"

Visual Map-less Navigation based on Homographies from Ruben Martinez-Cantin on Vimeo.

Attemped Snoopy

Attemped Snoopy

For Open Engineering House Demo

Cast: Mike Chung

"

Attemped Snoopy from Mike Chung on Vimeo.

Robots: URBI Software Platform

Robots: URBI Software Platform



The latest episode of the Robots podcast features an interview with Jean-Christophe Baillie, CEO of French robotics software company Gostai (see previous post). Baillie introduces his software platform URBI, which supports many platforms from the AIBO to the Nao (now used in the RoboCup Standard Platform League), and shares his motivation to go open source at the International Conference on Robotics and Automation (ICRA) in a couple of weeks. For more information, have a look at the URBI walkthrough video above, read on or tune in!

"

Honda U3-X

The Honda U3-X appeared to be like a seated Segway, using gyroscopes to keep the unicycle vertical. The single wheel is actually an omni-directional wheel, with turning controlled by the operator tilting their body. The maximum speed, however, is only 6 kilometers per hour, less than a third of the Segway's top speed. It lasts for about an hour and can carry a person of up to 100 kg (220 lbs). Where it really excels over the Segway is in its small form factor and its lightness, weighing only 10 kg (22 lbs). Honda has not yet released a price point, nor a release date.

Wednesday, April 14, 2010

Undergraduate Develops Antenna To Help Robot Move Like A Cockroach

Undergraduate Develops Antenna To Help Robot Move Like A Cockroach
Can a robot learn to navigate like a cockroach? To help researchers find out if a mechanical device can mimic the pesky insect's behavior, a Johns Hopkins engineering student has built a flexible, sensor-laden antenna. Like a cockroach's own wriggly appendage, the artificial antenna sends signals to a wheeled robot's electronic brain, enabling the machine to scurry along walls, turn corners and avoid obstacles.

The work is important because most robotic vehicles that are sent into dangerous locations rely on artificial vision or sonar systems to find a safe path. But robotic eyes don't operate well in low light, and sonar systems can be confused by polished surfaces. As an alternative, Noah J. Cowan, an assistant professor of mechanical engineering, is turning to the sense of touch, drawing inspiration from bugs that move quite skillfully through dark rooms with varied surfaces.

The key, Cowan said, is the cockroach's antennae, which touch adjacent walls and alert the insect to obstacles. As a postdoctoral fellow at the University of California, Berkeley, Cowan collaborated with researchers at Stanford University to build a crude antenna to show that a moving machine could use the same technique. After joining the faculty at Johns Hopkins, he assigned undergraduate Owen Y. Loh to build a more complex antenna to permit more advanced experiments with a cockroach-inspired robot.

In the fall of 2003, Loh began studying cockroach biology and working up designs for a robot antenna based on the insect model. "I liked the idea of combining biology and robotics," he said.

As a junior mechanical engineering major in the spring of 2004, Loh received a Provost's Undergraduate Research Award from the university, allowing him to continue this work in Cowan's lab during the summer. At summer's end, when the lab team quickly needed an antenna for critical robotic experiments, Loh assembled a simple but effective prototype in less than a week.

These experiments resulted in a peer-reviewed paper that has been accepted for presentation in April at the International Conference on Robotics and Automation in Barcelona, Spain. Loh, who is listed as second author on the paper, plans to attend with Cowan and other members of the lab team.

In recent months, Loh has fabricated a more advanced version of the antenna. This model is made of cast urethane, a flexible rubber-like substance, encased in a clear plastic sheath. Embedded in the urethane are six strain gages — sensors that change resistance as they are bent. "We've calibrated the antenna so that certain voltages correspond to certain bending angles that occur as the antenna touches the wall or some other object," Loh said.

This data is fed to the robot's controller, enabling it to sense its position in relation to the wall and to maneuver around obstacles. When the antenna signals that the robot is veering too close to the wall, the controller steers it away.
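A hypothetical sketch of the kind of control loop described here (the article does not give the actual Johns Hopkins controller): convert the calibrated strain-gauge voltage to a bending angle, estimate wall clearance from it, and steer away in proportion to how far the robot has drifted inside its desired clearance. All constants are invented for illustration.

```python
# Hypothetical antenna-based wall-following loop (constants are invented;
# the article only states that voltages are calibrated to bending angles).

VOLTS_TO_DEGREES = 35.0      # assumed calibration: bending angle per volt
ANGLE_TO_CLEARANCE = -0.004  # assumed: metres of wall clearance lost per degree of bend
NOMINAL_CLEARANCE = 0.15     # desired distance from the wall, metres
STEER_GAIN = 2.0             # proportional steering gain

def steering_command(gauge_voltage: float) -> float:
    """Positive output steers away from the wall, negative steers toward it."""
    bend_deg = VOLTS_TO_DEGREES * gauge_voltage
    clearance = NOMINAL_CLEARANCE + ANGLE_TO_CLEARANCE * bend_deg
    return STEER_GAIN * (NOMINAL_CLEARANCE - clearance)

print(steering_command(0.8))   # antenna bent hard against the wall: steer away
```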

The newer version of the antenna is being tested by Brett L. Kutscher, a former Provost's Undergraduate Research Award recipient who recently finished his master's degree thesis in Cowan's lab. Cowan believes the cockroach-inspired antennae being developed by his team could eventually provide a new generation of robots with an enhanced ability to move safely through dark and hazardous locations, such as smoke-filled rooms strewn with debris.

He said Loh, now 21, from Okemos, Mich., provided crucial assistance. "Owen brought a set of skills to that lab that none of us had," Cowan said. "I'm more of an abstract and theoretical researcher. Owen is very good at making things with his hands."

On March 10, Steven Knapp, university provost and senior vice president for academic affairs, hosted the 12th annual Provost's Undergraduate Research Awards ceremony, which honored the 45 winners who conducted their projects in the summer and fall of 2004. Since 1993, about 40 students each year have received PURA grants of up to $3,000 to conduct original research, some results of which have been published in professional journals. The awards, funded through a donation from the Hodson Trust, are an important part of the university's commitment to research opportunities for undergraduates.

The Johns Hopkins University is recognized as the country's first graduate research university, and has been in recent years the leader among the nation's research universities in winning federal research and development grants. The opportunity to be involved in important research is one of the distinguishing characteristics of an undergraduate education at Johns Hopkins.

The Provost's Undergraduate Research Awards program provides one of these research opportunities, open to students in each of the university's four schools with full-time undergraduates: the Krieger School of Arts and Sciences, the Whiting School of Engineering, the Peabody Conservatory and the School of Nursing.

Human-like Vision Lets Robots Navigate Naturally

Human-like Vision Lets Robots Navigate Naturally
A robotic vision system that mimics key visual functions of the human brain promises to let robots manoeuvre quickly and safely through cluttered environments, and to help guide the visually impaired.

It’s something any toddler can do – cross a cluttered room to find a toy.

It's also one of those seemingly trivial skills that have proved to be extremely hard for computers to master. Analysing shifting and often-ambiguous visual data to detect objects and separate their movement from one’s own has turned out to be an intensely challenging artificial intelligence problem.

Three years ago, researchers at the European-funded research consortium Decisions in Motion (http://www.decisionsinmotion.org/) decided to look to nature for insights into this challenge.

In a rare collaboration, neuro- and cognitive scientists studied how the visual systems of advanced mammals, primates and people work, while computer scientists and roboticists incorporated their findings into neural networks and mobile robots.

The approach paid off. Decisions in Motion has already built and demonstrated a robot that can zip across a crowded room guided only by what it “sees” through its twin video cameras, and is hard at work on a head-mounted system to help visually impaired people get around.

“Until now, the algorithms that have been used are quite slow and their decisions are not reliable enough to be useful,” says project coordinator Mark Greenlee. “Our approach allowed us to build algorithms that can do this on the fly, that can make all these decisions within a few milliseconds using conventional hardware.”

How do we see movement?

The Decisions in Motion researchers used a wide variety of techniques to learn more about how the brain processes visual information, especially information about movement.

These included recording individual neurons and groups of neurons firing in response to movement signals, functional magnetic resonance imaging to track the moment-by-moment interactions between different brain areas as people performed visual tasks, and neuropsychological studies of people with visual processing problems.

The researchers hoped to learn more about how the visual system scans the environment, detects objects, discerns movement, distinguishes between the independent movement of objects and the organism’s own movements, and plans and controls motion towards a goal.

One of their most interesting discoveries was that the primate brain does not just detect and track a moving object; it actually predicts where the object will go.

“When an object moves through a scene, you get a wave of activity as the brain anticipates its trajectory,” says Greenlee. “It’s like feedback signals flowing from the higher areas in the visual cortex back to neurons in the primary visual cortex to give them a sense of what’s coming.”

Greenlee compares what an individual visual neuron sees to looking at the world through a peephole. Researchers have known for a long time that high-level processing is needed to build a coherent picture out of a myriad of those tiny glimpses. What's new is the importance of strong anticipatory feedback for perceiving and processing motion.

“This proved to be quite critical for the Decisions in Motion project,” Greenlee says. “It solves what is called the ‘aperture problem’, the problem of the neurons in the primary visual cortex looking through those little peepholes.”

Building a better robotic brain

Armed with a better understanding of how the human brain deals with movement, the project’s computer scientists and roboticists went to work. Using off-the-shelf hardware, they built a neural network with three levels mimicking the brain’s primary, mid-level, and higher-level visual subsystems.

They used what they had learned about the flow of information between brain regions to control the flow of information within the robotic “brain”.

“It’s basically a neural network with certain biological characteristics,” says Greenlee. “The connectivity is dictated by the numbers we have from our physiological studies.”

The computerised brain controls the behaviour of a wheeled robotic platform supporting a moveable head and eyes, in real time. It directs the head and eyes where to look, tracks its own movement, identifies objects, determines if they are moving independently, and directs the platform to speed up, slow down and turn left or right.

Greenlee and his colleagues were intrigued when the robot found its way to its first target – a teddy bear – just like a person would, speeding by objects that were at a safe distance, but passing nearby obstacles at a slower pace.

“That was very exciting,” Greenlee says. “We didn’t program it in – it popped out of the algorithm.”

In addition to improved guidance systems for robots, the consortium envisions a lightweight system that could be worn like eyeglasses by visually or cognitively impaired people to boost their mobility. One of the consortium partners, Cambridge Research Systems, is developing a commercial version of this, called VisGuide.

Decisions in Motion received funding from the ICT strand of the EU’s Sixth Framework Programme for research. The project’s work was featured in a video by the New Scientist in February this year.

Research Teams Successfully Operate Multiple Biomedical Robots From Numerous Locations

Research Teams Successfully Operate Multiple Biomedical Robots From Numerous Locations
Using a new software protocol called the Interoperable Telesurgical Protocol, nine research teams from universities and research institutes around the world recently collaborated on the first successful demonstration of multiple biomedical robots operated from different locations in the U.S., Europe, and Asia. SRI International operated its M7 surgical robot for this demonstration.


In a 24-hour period, each participating group connected over the Internet and controlled robots at different locations. The tests performed demonstrated how a wide variety of robot and controller designs can seamlessly interoperate, allowing researchers to work together easily and more efficiently. In addition, the demonstration evaluated the feasibility of robotic manipulation from multiple sites, and was conducted to measure time and performance for evaluating laparoscopic surgical skills.

New Interoperable Telesurgical Protocol

The new protocol was cooperatively developed by the University of Washington and SRI International to standardize the way remotely operated robots are managed over the Internet.

"Although many telemanipulation systems have common features, there is currently no accepted protocol for connecting these systems," said SRI's Tom Low. "We hope this new protocol serves as a starting point for the discussion and development of a robust and practical Internet-type standard that supports the interoperability of future robotic systems."

The protocol will allow engineers and designers who usually develop technologies independently to work collaboratively, determine which designs work best, encourage widespread adoption of the new communications protocol, and help robotics research evolve more rapidly. Early international adoption of the protocol will encourage robotic systems to be developed with interoperability in mind and avoid future incompatibilities.

"We're very pleased with the success of the event in which almost all of the possible connections between operator stations and remote robots were successful. We were particularly excited that novel elements such as a simulated robot and an exoskeleton controller worked smoothly with the other remote manipulation systems," said Professor Blake Hannaford of the University of Washington.

The demonstration included the following organizations:

* SRI International, Menlo Park, Calif., USA
* University of Washington Biorobotics Lab (BRL), Seattle, Washington, USA
* University of California at Santa Cruz (UCSC), Bionics Lab, Santa Cruz, Calif., USA
* iMedSim, Interactive Medical Simulation Laboratory, Rensselaer Polytechnic Institute, Troy, New York, USA
* Korea University of Technology (KUT) BioRobotics Lab, Cheonan, South Chungcheong, South Korea
* Imperial College London (ICL), London, England
* Johns Hopkins University (JHU), Baltimore, Maryland, USA
* Technische Universität München (TUM), Munich, Germany
* Tokyo Institute of Technology (TOK), Tokyo, Japan

Machines Can't Replicate Human Image Recognition, Yet

Machines Can't Replicate Human Image Recognition, Yet
While computers can replicate many aspects of human behavior, they do not possess our ability to recognize distorted images, according to a team of Penn State researchers.
"Our goal is to seek a better understanding of the fundamental differences between humans and machines and utilize this in developing automated methods for distinguishing humans and robotic programs," said James Z. Wang, associate professor in Penn State's College of Information Sciences and Technology.

Wang, along with Ritendra Datta, a Penn State Ph.D. recipient, and Jia Li, associate professor of statistics at Penn State, explored the difference in human and machine recognition of visual concepts under various image distortions.

The researchers used those differences to design image-based CAPTCHAs (Completely Automated Public Turing Test to Tell Computers and Humans Apart), visual devices used to prevent automated network attacks.

Many e-commerce web sites use CAPTCHAs, which are randomly generated sets of words that a user types in a box provided in order to complete a registration or purchasing process. This is done to verify that the user is human and not a robotic program.

In Wang's study, a demonstration program with an image-based CAPTCHA called IMAGINATION was presented on imagination.alipr.com. Both humans and robotic programs were observed using the CAPTCHA.

Although the scope of the human users was limited, the results, presented in the September issue of IEEE Transactions on Information Forensics and Security, proved that robotic programs were not able to recognize distorted images. In other words, a computer recognition program had to rely on an accurate picture, while humans were able to tell what the picture was even though it was distorted.

Wang said he hopes to work with developers in the future to make IMAGINATION a CAPTCHA program that Web sites can use to strengthen the prevention of automated network attacks.

Even though machine recognizability does not exceed human recognizability at this time, Wang says that there is a possibility that it will in the future.

"We are seeing more intelligently designed computer programs that can harness a large volume of online data, much more than a typical human can experience in a lifetime, for knowledge generation and automatic recognition," said Wang. "If certain obstacles, which many believe to be insurmountable, such as scalability and image representation, can be overcome, it is possible that one day machine recognizability can reach that of humans."

Illumination-Aware Imaging

Illumination-Aware Imaging
Conventional imaging systems incorporate a light source for illuminating an object and a separate sensing device for recording the light rays scattered by the object. By using lenses and software, the recorded information can be turned into a proper image. Human vision is a familiar example: the use of two eyes (and a powerful brain that processes visual information) provides human observers with a sense of depth perception. But how does a video camera attached to a robot "see" in three dimensions?

Carnegie Mellon scientist Srinivasa Narasimhan believes that efficiently producing 3-D images for computer vision can best be addressed by thinking of a light source and sensor device as being equivalent. That is, they are dual parts of a single vision process.

For example, when a light illuminates a complicated subject, such as a fully-branching tree, many views of the object must be captured. This requires the camera to be moved, making it hard to find corresponding locations in different views.

In Narasimhan's approach, the camera and light constitute a single system. Since the light source can be moved without changing the corresponding points in the images, complex reconstruction problems can be solved easily for the first time. Another approach is to use a pixelated mask interposed at the light or camera to selectively remove certain light rays from the imaging process. With proper software, the resulting series of images can more efficiently render detailed 3-D vision information, especially when the object itself is moving.

Narasimhan calls this process alternatively illumination-aware imaging or imaging-aware illumination. He predicts it will be valuable for producing better robotic vision and rendering 3-D shapes in computer graphics.

Older Adults Want Robots That Do More Than Vacuum, Researchers Find

Older Adults Want Robots That Do More Than Vacuum, Researchers Find: "Researchers have discovered that, contrary to previous assumptions, older adults are more amenable than younger ones to having a robot 'perform critical monitoring tasks that would require little interaction between the robot and the human.'"

The findings will be presented at the upcoming HFES 53rd Annual Meeting, Grand Hyatt, San Antonio, Texas, Thursday, October 22, 2009.

Despite manufacturers' increased development of in-home robots, it's unclear how much interaction people would be willing to have with them. Robots can perform routine tasks such as cleaning, the Roomba vacuum cleaner being the best-known example. Studies have found that individuals think of robots as advanced appliances, but there is not much research on why this is so. Robots could perform more critical tasks, such as reminding a person to take medications, teaching a new skill, providing security, and reducing social isolation.

To gauge how willing people might be to have a robot perform these kinds of more interactive tasks, Drs. Neta Ezer (now at Futron Corporation), Arthur D. Fisk, and Wendy A. Rogers sent a questionnaire to 2,500 Atlanta-area adults ages 18 to 86 and received 177 responses. One of their questions addressed respondents' level of experience with technology and robots that do things like mow, clean, guard, and entertain. Older adults (ages 65 to 86) had significantly less experience with technology than younger ones (18-28), but younger adults had only slightly more experience with robots currently on the market.

When asked about their willingness to have robots perform 15 tasks in the home (categorized as entertainment, service, educational, and general health/self-care), respondents of all ages preferred that robots stick to noninteractive tasks (such as "Help me with housework" or "Bring me things I need from another room in my home") rather than interactive ones (for example, "Have a conversation with me" or "Help motivate me to exercise"). Infrequent critical tasks, such as "Warn me about a danger in my home" or "Inform my doctor if I have a medical emergency," were seen by more older adults than younger ones as important for robots to perform.

Both younger and older respondents reported positive attitudes toward a robot in their homes. They thought a robot would be useful but were less confident that it would be easy to use. Given a choice between receiving care by a robot in their homes and moving to a care facility in the event of illness or injury, 67% of younger adults and 77% of older adults chose the former option. (This finding is not reported in the paper to be presented in October.)

The researchers say their results "suggest that both younger and older individuals are more interested in the benefits that a robot can provide than in their interactive abilities." Furthermore, the results discredit the stereotype that older adults would be less willing than younger ones to accept new technology such as a robot in their home. Manufacturers: Take note.

Monday, April 12, 2010

Pat Metheny and the Orchestrion

Pat Metheny and the Orchestrion



Musician Pat Metheny is currently touring with a robotic orchestra, called the Orchestrion Project. It includes a wide range of instruments which are robotically actuated and controlled in real time by Pat Metheny himself. It doesn't appear any of the instruments are purely autonomous robots, but they do seem to have a very sophisticated automatic control system that allows the human musician to improvise freely (and it certainly puts my own feeble efforts to shame!). Looks like a lot of fun, so if you're interested in robots and music, check the tour schedule to see when Pat and his robot band will be in a town near you. They're coming to Dallas tomorrow, so if I get the chance to check it out, I'll post a review.

"

Sunday, April 11, 2010

Computers Shown More Creative Than Humans

Computers Shown More Creative Than Humans: "UC Santa Cruz professor David Cope has been working on software, called Emily Howell, that generates original and modern music. Using algorithms that mathematically mix, recombine, and alter musical combinations, his software can often convincingly mimic the styles of great classical composers such as Mozart and Bach. So if a machine can be as creative as we are, is a 'soul' really needed to be artistic?"
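Cope's Emily Howell is far more sophisticated, but the basic idea of recombining existing musical material can be illustrated with a toy Markov chain over note transitions (a sketch of the general technique, not Cope's algorithm):

```python
import random

# Toy Markov-chain recombination of a melody (not Cope's actual system).
# Learn which note tends to follow which in a source phrase, then generate
# a new phrase by sampling from those learned transitions.

source = ["C", "E", "G", "E", "C", "D", "E", "F", "E", "D", "C"]

transitions = {}
for current_note, next_note in zip(source, source[1:]):
    transitions.setdefault(current_note, []).append(next_note)

random.seed(0)
note = "C"
phrase = [note]
for _ in range(12):
    note = random.choice(transitions[note])
    phrase.append(note)

print(" ".join(phrase))
```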

Friday, April 9, 2010

At French Conference, Virtual Reality Meets Reality

The International Conference on Virtual Reality, in Laval, France, this year brings with it a number of new visions of a virtual future, including robots and interactive multitouch cubes.



"

Evolution of Adaptive Behaviour in Robots by Means of Darwinian Selection

Evolution of Adaptive Behaviour in Robots by Means of Darwinian Selection: "In spite of the apparent behavioral complexity of the robots, all behaviors were achieved with extremely simple brains consisting of only a handful of neurons."

Quadcopter, Hexacopter, Octocopter ... UAVs

Quadcopter, Hexacopter, Octocopter ... UAVs: "Five years ago few people had even heard of Quadcopters. Now they seem to be everywhere. What happened?"


MikroKopter - HexaKopter from Holger Buss on Vimeo.

Robots with better observation

Robots with better observation: "A new 3D sensor will enable robots to observe their environment in a more natural and human-like manner. The TACO project will make it possible to apply current robots in more sophisticated markets so that they will play a major role in the fields of cleaning, construction, maintenance, security, health care, entertainment and personal assistance in the future."

Truckbot: An Autonomous Robot based on Android

Truckbot: An Autonomous Robot based on Android


The robot builders over at Cellbots have been busy cranking out robots based around the Google Android OS. Their latest effort is Truckbot, an acrylic differential drive robot that relies on a Google G1 phone running the GNU/Linux-based Android OS combined with an Arduino. The robot can be teleoperated but also incorporates autonomous features such as cliff detection to avoid falling off of tables. The video above demonstrates an ultrasonic sensor. See the additional videos after the break for examples of teleoperation and use of the G1's sensors for orientation. The Cellbots group has also released all their source code as Free Software under the Apache 2.0 license. You can use their software to build your own Cellbot with voice recognition, text-to-speech, compass orientation, teleoperation, GPS, and other cool features.
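Cliff detection with a downward-facing range sensor usually comes down to a threshold check: the sensor normally sees the floor a few centimetres away, and a much larger reading means the surface has dropped away. A generic sketch of that idea (not the Cellbots source code, and the constants are assumed):

```python
# Generic cliff-detection check (illustration only; thresholds are assumed).

CLIFF_THRESHOLD_CM = 15.0   # readings beyond this suggest the floor has disappeared

def should_stop(distance_cm: float) -> bool:
    """Return True when the downward range reading indicates a table edge."""
    return distance_cm > CLIFF_THRESHOLD_CM

for reading in (5.8, 6.1, 42.0):   # centimetres to the floor
    print(reading, "-> stop" if should_stop(reading) else "-> keep driving")
```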

"

Tuesday, April 6, 2010

Android Copy of Young Woman Unveiled In Japan

An anonymous reader writes 'According to IEEE Spectrum, Japanese roboticist Hiroshi Ishiguro, who had previously built a robot copy of himself, has now created a new android — and it's a 'she.' Geminoid F, a copy of a woman in her 20s with long dark hair, exhibits facial expressions more naturally than Ishiguro's previous android. 'Whereas the Geminoid HI-1 has some 50 actuators, the new Geminoid F has just 12. What's more, the HI-1 robot requires a large external box filled with compressors and valves. With Geminoid F, the researchers embedded air servo valves and an air servo control system into its body, so the android requires only a small external compressor.' It's also much better looking. Has the Japanese android master finally overcome the uncanny valley?'



Read more of this story at Slashdot.

"

Monday, April 5, 2010

In Profile: Missy Cummings

Mary (Missy) Cummings was exhilarated the first time she landed a fighter jet aboard an aircraft carrier in 1989, but the young pilot's elation didn't last long. Seconds later, a close friend died while attempting the same landing on the back of the carrier.

“I can't tell you how many friends died because of bad designs,” says Cummings, recalling the crash that occurred on the U.S.S. Lexington in the Gulf of Mexico. “After spending so much time as a pilot, I found it incredibly frustrating to work with technology that didn’t work with me.”

It wasn’t until Cummings left the Navy after 10 years and chose to pursue a PhD in systems engineering that she realized she could help improve the severely flawed designs of the technological systems she used as a pilot — from confusing radar screens and hand controls to the nonintuitive setup of cockpits — by making an impact at the research level.

Today, she is an associate MIT professor with appointments in the Department of Aeronautics and Astronautics and in the Engineering Systems Division, and she directs the Humans and Automation Laboratory (HAL). Her work focuses on “human factors” engineering — specifically, how to develop better tools and technology to help people like pilots and air traffic controllers make good decisions in high-risk, highly automated environments. It is a critical field of research that has burgeoned in recent years with the explosion of automated technology. This has replaced the need for humans in direct manual control with the need for humans as supervisors of complex automatic control systems, such as nuclear reactors or air traffic control systems.

But one consequence of these automated domains controlled by humans — known as “humans-in-the-loop” systems — is that the level of required cognition has moved from that of well-rehearsed skill execution and rule-following to higher, more abstract levels of knowledge synthesis, judgment and reasoning.

A novel application

Nowhere has this change been more apparent than in the military, where pilots are increasingly being trained to operate unmanned aerial vehicles (UAVs), or drones, to perform certain cognitive tasks, such as getting a closer look at potential snipers. Prompted by the success of drones in Iraq and Afghanistan, U.S. Defense Secretary Robert Gates announced last year that UAV technology would become a permanent part of the defense budget.

But as UAV technology becomes more prominent, Cummings wants to make it easier for humans to control portable robots in time-sensitive situations. Her goal is to lower the cognitive overhead for the user, who may not have a lot of time to change complicated menu settings or zoom and pan a camera, so that he or she can focus on more critical tasks.

“It’s about offloading skill-based tasks so that people can focus specifically on knowledge-based tasks, such as determining whether a potential sniper is a good or bad guy by using the UAV to identify him,” Cummings says. The technology could also help responders search more efficiently for victims after a natural disaster.

Over the past year, Cummings and her students have designed an iPhone application that can control a small, one-pound UAV called a quad-rotor — a tiny helicopter with four propellers and a camera attached to it. When the user tilts the iPhone in the direction he or she wants the UAV to move, the app sends GPS coordinates to the UAV to help it navigate an environment using built-in sensors. The UAV uses fast-scanning lasers to create rapid, electronic models of its environment and then sends these models back to the iPhone in the form of easy-to-read maps, video and photos. Although the military and Boeing are funding the research, the technology could be used for nonmilitary purposes, such as for a police force that needs a device to help monitor large crowds.
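As a rough idea of what the tilt-to-waypoint mapping described above might involve, here is a small sketch; the step size, the scaling constants and the simplification that the phone's forward axis points north are all assumptions for illustration, not details of the HAL group's application.

```cpp
// Illustrative tilt-to-waypoint mapping: convert phone tilt into a GPS
// waypoint a few metres from the vehicle's current position.
#include <cmath>
#include <cstdio>

struct Waypoint { double lat, lon; };

Waypoint tiltToWaypoint(const Waypoint& current, double pitchRad, double rollRad) {
    const double kStepMetres      = 5.0;       // assumed travel per command
    const double kMetresPerDegLat = 111320.0;  // rough Earth geometry
    const double kPi              = 3.14159265358979;

    double north = std::sin(-pitchRad) * kStepMetres;  // tilt forward: fly forward
    double east  = std::sin(rollRad)   * kStepMetres;  // tilt right:   fly right

    Waypoint next = current;
    next.lat += north / kMetresPerDegLat;
    next.lon += east / (kMetresPerDegLat * std::cos(current.lat * kPi / 180.0));
    return next;
}

int main() {
    Waypoint here = {42.3601, -71.0942};              // illustrative position
    Waypoint next = tiltToWaypoint(here, -0.3, 0.1);  // gentle forward-right tilt
    std::printf("next waypoint: %.6f, %.6f\n", next.lat, next.lon);
    return 0;
}
```

A real application would also rotate the offset into the vehicle's heading frame and fold in the maps the UAV builds with its onboard sensors before committing to a waypoint.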

The app is designed so that anyone who can operate a phone could fly a UAV: The easy-to-use design means it takes only three minutes to learn how to use the system, whereas military pilots must undergo thousands of hours of costly training to learn how to fly drones. “This is all about the mission — you just need more information from an image, and you shouldn’t have to spend $1 million to train someone to get that picture,” she says.

The project is valuable for teaching because it represents a “classic scenario” in systems engineering in which a need is conceptualized, a system is designed to address that need and experiments are conducted to test the system, Cummings explains.

The HAL group recently conducted experiments of the app in which participants located in one building flew the UAV inside a separate building, positioning it in front of an eye chart so they could read the images the camera captured. Some achieved the equivalent of 20/30 vision, which Cummings says is “pretty good,” pointing out that, more importantly, the device never crashed. As Cummings and her students continue to refine the technology, their next step will be experiments in the real world where the UAV could reach an altitude of 500 feet. Although the group is working with several government agencies and companies on the design, there are no plans to deploy the app just yet.

Learning from boredom

Cummings began flying planes after graduating from the U.S. Naval Academy in 1988 and received her master’s degree in space systems engineering from the Naval Postgraduate School in 1994. When the Combat Exclusion Law was repealed in 1993, meaning that women could become fighter pilots for the first time in U.S. history, Cummings had already established herself as an accomplished pilot and was selected to be among the first group of women to fly the F/A-18 Hornet, one of the most technologically advanced fighter jets.

Although she loved the thrill of flying, Cummings left the military when her resentful male colleagues became intolerable. “It’s no secret that the social environment wasn’t conducive to my career. Guys hated me and made life very difficult,” she recalls. Cummings details this experience in her book Hornet’s Nest (Writer’s Showcase Press, 2000).

But what is most enduring about Cummings’ military experience is that it fueled her desire to improve how humans and computers can work together in complex technical systems. She focuses on how design principles, such as display screens, can affect supervisory control factors, such as attention span, when humans operate complex systems.

“In order to build a $1 billion air traffic control system, you can’t just do it by rule-of-thumb, you need to use models that take into account human factors, such as that people get bored by advanced automation systems,” Cummings says. The HAL group is currently developing models of human boredom to help design systems that prevent the people who monitor gauges and dials in automated systems from becoming bored, which is what happened last fall when two Northwest Airlines pilots overshot their destination by 150 miles because they weren’t paying attention.

In addition to boredom, other human factors that Cummings studies are what kinds of strategies people use to carry out certain tasks, whether they feel confident or frustrated when using a technology and the level of trust they have toward a system. These factors are tested through a variety of methods, such as analyzing eye movement, the number of times someone clicks or scrolls a mouse, or whether the user responds to alert messages. The iPhone project aims to analyze cognitive workload, and how easily the users are able to carry out certain cognitive tasks.

Yossi Sheffi, director of MIT’s Engineering Systems Division, thinks Cummings’ work is extremely important because technology by itself cannot be the answer when designing large-scale systems. “Her research tying the human operator to technology is crucial — both to the design of the technology itself, but also to the operation of the system as a whole, in order to ensure that it operates efficiently and effectively,” he says.

But the work also matters to civilians operating increasingly complex small-scale systems like cell phones and remote controls, according to MIT Professor of Computer Science and Engineering Randall Davis, who has worked with Cummings on several projects and praises her understanding of how people process information. “As we are increasingly surrounded by technology of all sorts, it becomes increasingly important for someone to understand how to design this stuff so that it’s easy to use; otherwise, we’ll be surrounded by incomprehensible technology,” he says.

Cummings’ ability to infuse the quantitative methods of hard science with the human behavior models of soft science like psychology is what makes her unique, according to HAL Research Specialist Dave Pitman, a former graduate student of Cummings who is working on the iPhone project. “She has a very good understanding of both sides of the coin, and because her background is in hard science, she tries to bring this rigor more into the soft sciences, which allows for better research,” he explains.


"

Sunday, April 4, 2010

First Integrated Companion Demo in UH Robot House

First Integrated Companion Demo in UH Robot House:

First Integrated Companion Demo in UH Robot House from foam on Vimeo.

WAbot

WAbot from Roald Joosen on Vimeo.


WAbot: "




The robot is controlled through the WiShield made by Asynclabs. Instead of calling the robot's IP address directly, I made a website.



SERB - instructables.com/id/How_to_Make_an_Arduino_Controlled_Servo_Robot_SER/

WiShield - asynclabs.com/

Cast: Roald Joosen

"

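As a rough sketch of the drive side of such a robot (the SERB base linked above uses two continuous-rotation servos for differential drive), the snippet below interprets single-character drive commands. The Wi-Fi layer provided by the WiShield is left out and replaced by Serial input so the example stays self-contained, and the pin numbers and command letters are assumptions rather than WAbot's actual protocol.

```cpp
// Hypothetical command handler for a SERB-style differential-drive robot.
// Commands are read from Serial here as a stand-in for whatever the
// WiShield-hosted web page would send over Wi-Fi.
#include <Servo.h>

Servo leftWheel;
Servo rightWheel;

// Continuous-rotation servos: 90 = stop, 0 and 180 = full speed either way.
void drive(int left, int right) {
  leftWheel.write(left);
  rightWheel.write(right);
}

void setup() {
  leftWheel.attach(9);    // assumed wiring
  rightWheel.attach(10);  // assumed wiring
  Serial.begin(9600);     // stand-in for commands arriving over Wi-Fi
  drive(90, 90);          // start stopped
}

void loop() {
  if (Serial.available() > 0) {
    switch (Serial.read()) {
      case 'f': drive(180, 0);   break;  // forward (servos are mounted mirrored)
      case 'b': drive(0, 180);   break;  // backward
      case 'l': drive(0, 0);     break;  // spin left
      case 'r': drive(180, 180); break;  // spin right
      default:  drive(90, 90);   break;  // anything else: stop
    }
  }
}
```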
TOKYODV CLASSICS: Asimo Plays with Japanese Girl

TOKYODV CLASSICS: Asimo Plays with Japanese Girl:

TOKYODV CLASSICS: Asimo Plays with Japanese Girl from T O K Y O D V on Vimeo.



Two Honda Asimo robots play soccer, climb stairs, and shadow-box on a platform. One plays with a girl and chases her around the arena.



In the last frame, you'll see a projection of Asimo's vision. Much like the Terminator's, it registers the child's height, arm angles, and movement in a computerized brain interface.



Website - tokyodv.com



Music produced by Jay Berlinsky at Musicians Media Inc - musiciansmedia.net - your one-stop shop for more than 3,000 music loops in 23 styles and licensed tracks for sale.

Cast: T O K Y O D V

"