
Friday, May 28, 2010

Willow Garage's PR2 Robots Graduate

Willow Garage's PR2 Robots Graduate: "Eleven robots worth more than US $4 million head out to universities and research labs around the world"



Wednesday, May 26, 2010

Asimo can run.

Asimo can run.: "Asimo the robot at the Museum of Emerging Science in Tokyo."

Asimo can run. from Derek Koch on Vimeo.

Robots: The Nao Humanoid

Robots: The Nao Humanoid: "

Aldebaran's Nao robot

We've already reported on French company Aldebaran's Nao in a previous post. Nao has since grown up and made it into the RoboCup Standard Platform League. The latest episode of the Robots podcast interviews Luc Degaudenzi, Aldebaran's Vice President of Engineering, and his colleague Cédric Vaudel, Aldebaran's Sales Manager for North America. In addition, and as a premiere on the Robots Podcast, we also interview a robot: Nao introduces himself and then shares his own version of Star Wars. Read more or tune in!

"

Monday, May 24, 2010

robotik robotlar

robotik robotlar ("robotic robots"): "Various robot applications: a robot arm and Lego robots."

robotik robotlar from gevv on Vimeo.

Thursday, May 20, 2010

The Conference Room That Re-Arranges Itself

The Conference Room That Re-Arranges Itself: "Just pick how you want it set up and the tables move themselves into position"

You can add a new entry to the long list of problems that can be solved by robots: arranging tables in a conference room. On my personal workplace hassle scale, I'm not sure that moving conference room furniture ranks much above "occasional nuisance." But Yukiko Sawada and Takashi Tsubouchi at the University of Tsukuba, Japan, evidently find shoving tables to be an unappealing task for humans. So they built a room that could re-arrange itself.

In this case, the tables are the robots. Select the arrangement you want from a graphical interface, and the tables will move to their new locations. The movement is monitored by an overhead camera with a fish-eye lens, and the software uses a trial-and-error approach to determine the best sequence of motion. But it's best to see the room in action for yourself. Check out the video the researchers presented at ICRA earlier this month.



In the paper, the authors explained the rationale for the project:

In these days, at conference rooms or event sites, people arrange tables to desired positions suitable for the event. If this work could be performed autonomously, it would cut down the man power and time needed. Furthermore, if it is linked to the Internet reservation system of the conference room, it would be able to arrange the tables to an arbitrary configuration by the desired time.

I'm not sure the cost and complexity of such a system could ever be low enough to be practical, but there's definitely something fun about watching the tables reconfigure themselves. And if you already have autonomous tables, why not go all the way and add a reconfigurable wall?
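The paper doesn't spell out the planner beyond "trial and error," but the core problem the room has to solve first, which table should claim which goal slot, can be sketched with a hypothetical greedy matcher (illustrative only, not the Tsukuba authors' algorithm):

```python
from math import hypot

def assign_tables(tables, goals):
    """Greedily pair each table with the nearest unclaimed goal slot.

    Illustrative only -- the Tsukuba system uses its own trial-and-error
    planner; this just shows the flavor of the assignment problem.
    tables, goals: lists of (x, y) positions.
    """
    remaining = list(range(len(goals)))
    plan = {}
    for ti, (tx, ty) in enumerate(tables):
        # Pick the closest goal slot not yet claimed by another table.
        gi = min(remaining,
                 key=lambda g: hypot(goals[g][0] - tx, goals[g][1] - ty))
        remaining.remove(gi)
        plan[ti] = gi
    return plan  # table index -> goal index

plan = assign_tables([(0, 0), (5, 0)], [(4, 0), (1, 0)])
```

A real planner would also sequence the moves to avoid collisions, which is where the trial-and-error search the authors mention comes in.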

Tuesday, May 18, 2010

Explained: Monte Carlo simulations

Explained: Monte Carlo simulations: "Speak to enough scientists, and you hear the words “Monte Carlo” a lot. “We ran the Monte Carlos,” a researcher will say. What does that mean?

The scientists are referring to Monte Carlo simulations, a statistical technique used to model probabilistic (or “stochastic”) systems and establish the odds for a variety of outcomes. The concept was first popularized right after World War II, to study nuclear fission; mathematician Stanislaw Ulam coined the term in reference to an uncle who loved playing the odds at the Monte Carlo casino (then a world symbol of gambling, like Las Vegas today). Today there are multiple types of Monte Carlo simulations, used in fields from particle physics to engineering, finance and more.

To get a handle on a Monte Carlo simulation, first consider a scenario where we do not need one: to predict events in a simple, linear system. If you know the precise direction and velocity at which a shot put leaves an Olympic athlete’s hand, you can use a linear equation to accurately forecast how far it will fly. This case is a deterministic one, in which identical initial conditions will always lead to the same outcome.

The world, however, is full of more complicated systems than a shot-put toss. In these cases, the complex interaction of many variables — or the inherently probabilistic nature of certain phenomena — rules out a definitive prediction. So a Monte Carlo simulation uses essentially random inputs (within realistic limits) to model the system and produce probable outcomes.

In the 1990s, for instance, the Environmental Protection Agency started using Monte Carlo simulations in its risk assessments. Suppose you want to analyze the overall health risks of smog in a city, but you know that smog levels vary among neighborhoods, and that people spend varying amounts of time outdoors. Given a range of values for each variable, a Monte Carlo simulation will randomly select a number within each range, and see how they combine — and repeat the process tens of thousands or even millions of times. No two iterations of the simulation might be identical, but collectively they build up a realistic picture of the population’s smog exposure.

“In a deterministic simulation, you should get the same result every time you run it,” explains MIT computer science professor John Guttag in his OpenCourseWare lecture on Monte Carlo simulations. However, Guttag adds, in “stochastic simulations, the answer will differ from run to run, because there’s an element of randomness in it.”

The aggregation of data makes it possible to identify, say, a median level of smog exposure. To be sure, Monte Carlo simulations are only as good as their inputs; accurate empirical data are necessary to produce realistic simulation results.


"

VEX Robotics World Championship Report

VEX Robotics World Championship Report: "

This year's VEX Robotics World Championship was bigger than ever, with more than 400 high school and university teams from around the world. It was held in Dallas, Texas again this year, so you can be sure we were there. The local Dallas Personal Robotics Group also got in on the act, offering their members as volunteers to help with the event. I managed to avoid serving as a judge this year, so I had more time to take photos, several of which you can see below. Grant Imahara awarded the top prizes to the Luwan team from Shanghai, China, and New Zealand's Free Range Robotics and Kristin Doves teams. To make things even crazier, the BEST National Championship was held alongside the VEX events. The Metro Homeschool team 229 from Blue Springs, Missouri took the first place BEST award. Read on for more photos and details from the official VEX press release or check out my almost 300 photos of the event.

"

Friday, May 7, 2010

Researchers create software for robot to improve rescue missions

Researchers create software for robot to improve rescue missions: "In disaster emergencies, such as the recent West Virginia mine explosion or the earthquake in Haiti, it is often unsafe for responders to enter the scene, prolonging the rescue of potential survivors. Now, researchers have developed software for a robot with a laser sensor that can enter dangerous structures to assess the structure's stability and locate any remaining people. This technology could lead to safer and more efficient rescue missions."



"We are developing computer graphics visualization software to allow the user to interactively navigate the 3-D data captured from the robot's scans," said Ye Duan, associate professor of computer science in the MU College of Engineering. "I worked with my students to develop computer software that helps the user to analyze the data and conduct virtual navigation, so they can have an idea of the structure before they enter it. The technology could save the lives of disaster victims and responders."

The remote-controlled robot, built by researchers at the Missouri University of Science and Technology, is designed to remotely transport a Light Detection and Ranging (LIDAR) unit so that responders, such as police, military, firefighters, and search and rescue teams, can know more about dangerous structures before entering. Once inside the structure, the robot takes multiple scans using the LIDAR unit, which captures up to 500,000 point measurements per second. It also can scan through walls and windows. After the scans, the software forms the data points into sophisticated 3-D maps that can show individual objects, create floorplans, and color-code areas inside the structure for stability. Depending on the data size, the maps can take from half an hour to two hours for the software to create.
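The MU team hasn't published their pipeline, but the floorplan step can be sketched at toy scale: collapse the 3-D point cloud onto the ground plane and mark cells that collect enough returns as occupied. Cell size and hit threshold here are hypothetical:

```python
from collections import Counter

def floorplan_grid(points, cell=0.5, min_hits=5):
    """Collapse a 3-D LIDAR point cloud into a coarse 2-D floorplan.

    Hypothetical sketch, not the MU software: bin each (x, y, z) point
    into a ground-plane cell and keep cells with enough returns, which
    typically correspond to walls and furniture.
    """
    hits = Counter((int(x // cell), int(y // cell)) for x, y, z in points)
    return {c for c, n in hits.items() if n >= min_hits}

# A dense vertical surface (a wall) versus a single stray return:
wall = [(0.1, 2.0, h / 10) for h in range(10)]
noise = [(3.0, 3.0, 1.0)]
grid = floorplan_grid(wall + noise)
```

Real software would of course also register scans from multiple robot positions and estimate structural stability, which is where the bulk of the two hours of processing goes.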

"Although the software and the robot can help in emergency situations, it could be commercialized for a variety of uses," Duan said. "This system could be used for routine structure inspections, which could help prevent tragedies such as the Minneapolis bridge collapse in 2007. It also could allow the military to perform unmanned terrain acquisition to reduce wartime casualties."

The researchers now are working on a proposal to make the robot faster and smaller than the current model, which resembles the NASA rovers sent to Mars and weighs about 200 pounds.

Duan's research has been published in International Journal of CAD/CAM. The robot recently was named on the list of Kiplinger's "8 Robots That Will Change Your Life." Duan collaborated with MU students Kevin Karsh and Yongjian Xi, who developed the software and algorithms; Norbert Maerz, associate professor of geological engineering at Missouri S&T; and Missouri S&T students Travis Kassebaum, Kiernan Shea and Darrell Williams.

Robots Help With Deepwater Horizon Disaster

Robots Help With Deepwater Horizon Disaster: "

The image above, from the US Coast Guard's flickr stream, shows an ROV attempting to activate the Deepwater Horizon Blowout Preventer (BOP). The attempt failed and the massive Deepwater Horizon oil spill continues, threatening to become one of the biggest environmental disasters of all time. Efforts to stop the spill now include at least 10 underwater robots (in addition to 200 manned sea vessels). US Coast Guard ROVs located two of the major leaks. There have been unsuccessful attempts by six different ROVs to close the BOP. Other underwater robots are monitoring the disaster site, locating portions of the spill and dispensing subsea oil dispersants. BP has rented most of the ROVs they're using, but ExxonMobil has donated the use of one underwater robot plus a support vessel. ROVs working on one of the three major leaks today successfully installed a half-ton valve on the broken pipe and were able to shut it off. Next up for the robots is to assist with the lowering of a 100-ton containment dome over the disaster site to contain the spilling oil. This type of operation has never been attempted at a depth of 5,000 feet. If the containment dome doesn't work, scientists warn, the spill may get worse fast. Dr. Robert H. Weisberg of the University of South Florida says,

It's very likely that at some point oil will be entrained in the Loop Current. Once entrainment happens, the Loop Current could carry oil from that point to the Dry Tortugas in a week, and to Cape Hatteras in another two weeks. Getting into the Loop Current may take some time. But once in the Loop Current, the oil will move rather quickly.

If that happens the oil will threaten environments along the Gulf coast, the Florida Keys, and the Atlantic Seaboard. Particulate pollution from burn-offs and VOCs outgassing from the massive slick could threaten human health as well. USF is sending a special robotic sensor platform called the Weatherbird II into the spill zone to monitor how zooplankton are impacted by the cloud of toxic water. Tiny oil droplets harmless to larger animals can kill zooplankton, which are a key element in the undersea food chain. For up-to-date information on the disaster see the NOAA photo stream, NASA satellite images (and NASA MODIS rapid response sat images), and the EPA's live air quality monitoring network.

"

Robots With Knives: A Study of Soft-Tissue Injury in Robotics

Robots With Knives: A Study of Soft-Tissue Injury in Robotics: "What would happen if a knife-wielding robot struck a person?"



The idea of a robot in the kitchen cooking us meals sounds great. We'd better just watch out for that swinging knife.

To find out what would happen if a robot handling a sharp tool accidentally struck a person, German researchers set out to perform a series of stabbing, puncturing, and cutting experiments.

They fitted an articulated robotic arm with various tools (scalpel, kitchen knife, scissors, steak knife, and screwdriver) and programmed it to execute different striking maneuvers. They used a block of silicone, a pig's leg, and at one point a human volunteer's bare arm as their, uh, test surface.

The researchers -- Sami Haddadin, Alin Albu-Schaffer, and Gerd Hirzinger from the Institute of Robotics and Mechatronics, part of DLR, the German aerospace agency, in Wessling, Germany -- presented their results today at the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska.

The main goal of the study was to understand the biomechanics of soft-tissue injury caused by a knife-wielding robot. But the researchers also wanted to design and test a collision-detection system that could prevent or at least minimize injury. Apparently the system worked so well that in some cases the researchers were willing to try it on human subjects.

We applaud the guy at the end of the video who put his body on the line in the name of robotic science.

The researchers acknowledge that there are huge reservations about equipping robots with sharp tools in human environments. It won't happen any time soon. (Sorry, you'll still have to chop that cucumber salad yourself). But they argue that only by getting more data can roboticists build safer robots.

The experiments involved the DLR Lightweight Robot III, or LWRIII, a 7 degrees-of-freedom robot manipulator with a 1.1 meter reach and moderately flexible joints. The robot, which weighs 14 kilograms, is designed for direct physical interaction and cooperation with humans.

The tools the researchers tested included [photo, right]: (1) scalpel; (2) kitchen knife; (3) scissors; (4) steak knife; (5) screwdriver.




The researchers performed two types of experiments: stabbing and cutting, testing the different tools striking at various speeds, with and without the collision-detection system active.

In most cases, the contact resulted in deep cuts and punctures, with potentially lethal consequences. But remarkably, the collision-detection system, which relied on measurements from force-torque sensors on the robot's body, was able to significantly reduce the depth of the cuts in several cases, and even prevent penetration altogether.
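DLR's detector is model-based and far more elaborate than this, but the basic idea can be sketched as a toy: flag a collision whenever a measured joint torque deviates from what the robot's dynamics model predicts by more than a threshold. All numbers here are hypothetical:

```python
def detect_collision(measured_torques, expected_torques, threshold=2.0):
    """Flag a collision when any joint's measured torque deviates from
    the model-predicted torque by more than a threshold (N*m).

    Toy illustration only; the DLR system uses model-based disturbance
    estimation rather than a fixed per-joint threshold.
    """
    return any(abs(m - e) > threshold
               for m, e in zip(measured_torques, expected_torques))

# Free motion: small deviations from the model -> no collision flagged.
free = detect_collision([1.0, 0.5, -0.2], [0.9, 0.6, -0.1])
# Impact: one joint sees a large unexpected torque -> collision flagged.
hit = detect_collision([1.0, 5.5, -0.2], [0.9, 0.6, -0.1])
```

The hard part in practice is reacting within milliseconds: once the flag trips, the arm must retract or go limp before the blade travels any deeper.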

This is the first study to investigate soft-tissue injuries caused by robots and sharp instruments. Previous studies by the same researchers, as well as other groups, have focused on blunt collisions involving non-sharp surfaces.

The video below shows impact experiments using crash-test dummies and large industrial robots. Ouch.

Georgia Tech Robot Masters the Art of Opening Doors and Drawers

Georgia Tech Robot Masters the Art of Opening Doors and Drawers: "Georgia Tech researchers have programmed a robot to autonomously approach and open doors, drawers, and cabinets"



To be useful in human environments, robots must be able to do things that people do on a daily basis -- things like opening doors, drawers, and cabinets. We perform those actions effortlessly, but getting a robot to do the same is another story. Now Georgia Tech researchers have come up with a promising approach.

Professor Charlie Kemp and Advait Jain at Georgia Tech's Healthcare Robotics Laboratory have programmed a robot to autonomously approach and open doors and drawers. It does that using omni-directional wheels and compliant arms, and the only information it needs is the location and orientation of the handles.

The researchers discussed their results yesterday at the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska, where they presented a paper, "Pulling Open Doors and Drawers: Coordinating an Omni-Directional Base and a Compliant Arm with Equilibrium Point Control."

One of the neat things about their method is that the robot is not stationary while opening the door or drawer. "While pulling on the handle," they write in their paper, "the robot haptically infers the mechanism's kinematics in order to adapt the motion of its base and arm."

In other words, most researchers trying to make robots open doors, cabinets, and similar things rely on a simple approach: keep the robot's base in place and move its arms to perform the task. It's easier to do -- and in fact that's how most robot manipulation research is done -- but it limits the kinds of tasks a robot can accomplish.

The Georgia Tech researchers allow their robot to move its omni-directional base while simultaneously pulling things open -- an approach they say improves the performance of the task.
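As a hypothetical stand-in for that haptic inference (not the authors' equilibrium point controller), one can classify a handle's observed trajectory by how far it bows away from a straight line: a drawer slides linearly, while a hinged door sweeps an arc.

```python
from math import hypot, cos, sin

def infer_mechanism(handle_path, tol=0.02):
    """Guess whether a pulled handle moves on a line (drawer) or an
    arc (door) from its observed (x, y) positions.

    Illustrative sketch only: measure how far intermediate points stray
    from the straight line between the first and last observation.
    """
    (x0, y0), (x1, y1) = handle_path[0], handle_path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = hypot(dx, dy)
    # Perpendicular distance of each intermediate point from the chord.
    max_dev = max(abs(dy * (x - x0) - dx * (y - y0)) / length
                  for x, y in handle_path[1:-1])
    return "drawer" if max_dev < tol else "door"

# A straight pull versus points on a hinge arc of radius 0.8 m:
drawer = infer_mechanism([(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0)])
door = infer_mechanism([(0.8 * cos(a), 0.8 * sin(a)) for a in (0.0, 0.4, 0.8)])
```

Once the mechanism type is known, the base and arm motion can be coordinated to follow the line or the arc instead of fighting it.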

Wednesday, May 5, 2010

Willow Garage Giving Away 11 PR2 Robots Worth Over $4 Million

Willow Garage Giving Away 11 PR2 Robots Worth Over $4 Million: "The robotics company has announced the 11 institutions in the U.S., Europe, and Japan that will receive its advanced PR2 robot to develop new applications"



Willow Garage, the Silicon Valley company dedicated to advancing open robotics, is announcing this morning that it will award 11 PR2 robots to institutions and universities around the world as part of its efforts to speed up research and development in personal robotics.

The company, in Menlo Park, Calif., hopes that the 11 organizations [see list below] in the United States, Europe, and Japan that are receiving PR2 robots at no cost—a total worth over US $4 million—will use the robots to explore new applications and contribute back to the open-source robotics community.

An open robot platform designed and built by Willow, the Personal Robot 2, or PR2, has a mobile base, two arms, a variety of sensors, and 16 CPU cores for computation. But what makes the robot stand out is its software: the open-source Robot Operating System, or ROS, which offers full control of the PR2, including libraries for navigation, manipulation, and perception.

Yesterday I spoke with Eric Berger, Willow's co-director of the personal robotics platform program, who said they’re "really excited about the new applications that will come out of this."

As an example of the possibilities, he mentioned that earlier this year a group at UC Berkeley programmed a PR2 to fold towels. The video of the robot neatly folding a stack of towels went viral.

"People get very excited with the idea of robots doing something that's really useful in their homes," Berger says. "People have seen a lot of military robots, industrial robots, robot vacuum cleaners, but the idea of something like Rosie the Robot, I think it's very powerful."

With its PR2 Beta Program, Willow Garage hopes to foster scientific robotics research, promote the development of new tools to improve the PR2 and other robots, and also help researchers create practical demonstrations and applications of personal robotics.

For the researchers receiving a state-of-the-art personal robot platform worth several hundred thousand dollars, the possibility of working on real-world problems without having to waste time reinventing the robotic wheel, so to speak, is a big deal.

Even more significant, the researchers will be able to "share their software for use by other groups and build on top of each other's work," says Pieter Abbeel, the UC Berkeley professor who created the towel folding demo and is one of the PR2 recipients. "This will significantly boost the rate of progress in robotics, and personal robotics in particular."

"Just as the Mac and PC hardware inspired new applications for personal computers in the 1980s, the PR2 could be the key step in making personal robots a reality," says Ken Goldberg, an IEEE Fellow and UC Berkeley professor. "It's a very exciting step forward for robotics and we're very excited to participate."

Here's the list of lucky 11 PR2 recipients that Willow is releasing this morning:

* Albert-Ludwigs-Universität Freiburg with the proposal TidyUpRobot
The University of Freiburg's strength in mapping has led to multiple open-source libraries in wide use. Their group will program the PR2 to do tidy-up tasks like clearing a table, while working on difficult underlying capabilities, like understanding how drawers and refrigerators open, how to recognize different types of objects, and how to integrate this information with the robot's map. Their goal is to detect, grasp, and put away objects with very high reliability, and reproduce these results at other PR2 Beta Program sites.

* Bosch with the proposal Developing the Personal Robotics Market: Enabling New Applications Through Novel Sensors and Shared Autonomy
Bosch will bring their expertise in manufacturing, sensing technologies and consumer products. Bosch will be making robotic sensors available to members of the PR2 Beta Program, including a limited number of "skins" that will give the PR2 the ability to feel its environment. Bosch will also make their PR2 remotely accessible and will expand on the libraries they've released for ROS.

* Georgia Institute of Technology with the proposal Assistive Mobile Manipulation for Older Adults at Home
The Healthcare Robotics Lab at Georgia Tech will be placing the PR2 in an "Aware Home" to study how robots can help with homecare and create assistive capabilities for older adults. Their research includes creating easier ways for adults to interact with robots, and enabling robots to interact with everyday objects like drawers, lamps, and light switches. Their human-robot interaction focus will help ensure that the software development is closely connected to real-world needs.

* Katholieke Universiteit Leuven with the proposal Unified Framework for Task Specification, Control and Coordination for Mobile Manipulation
KU Leuven in Belgium is a key player in the open-source robotics community. As one of the founding institutions for the Orocos Project, they will be improving the tools and libraries used to program robots in ROS, by, for example, integrating ROS with Blender. They will also be working on getting the PR2 and people to perform tasks together, like carrying objects through a crowded environment.

* MIT CSAIL with the proposal Mobile Manipulation in Human-Centered Environments
The diverse MIT CSAIL group will use the PR2 to study the key capabilities needed by robots that operate in human-centered environments, such as safe navigation, interaction with humans via natural language, object recognition, and planning for complex goals. Their work will allow robots to build the maps they need in order to move around in buildings as large as MIT’s 11-story Stata Center. They will also program the PR2 to put away groceries and do simple cleaning tasks.

* Stanford University with the proposal STAIR on PR2
PR1 was developed in Kenneth Salisbury's lab at Stanford, and ROS was developed from the STAIR (Stanford AI Robot) Project. We're very excited that the PR2 will become the new platform for the STAIR Project's innovative research. Their team will work on several applications, which include taking inventory, retrieving items scattered about a building, and clearing a table after a meal.

* Technische Universität München with the proposal CRAM: Cognitive Robot Abstract Machine
TUM will research giving the PR2 the artificial intelligence skills and 3D perception to reason about what it is doing while it performs various kitchen tasks. These combined improvements will help the PR2 perform more complicated tasks like setting a table, emptying a dishwasher, preparing meals, and other kitchen-related tasks.

* University of California, Berkeley with the proposal PR2 Beta Program: A Platform for Personal Robotics
The PR2 is now known as the "Towel-Folding Robot", thanks to the impressive efforts of Pieter Abbeel's lab at Berkeley. In two short months, they were able to get the PR2 to fold fifty towels in a row. Berkeley will tackle the much more difficult challenge of doing laundry, from dirty laundry piles to neatly folded clothes. In addition, their team is interested in hierarchical planning, object recognition, and assembly and manufacturing tasks (e.g. IKEA products) through learning by demonstration.

* University of Pennsylvania with the proposal PR2GRASP: From Perception and Reasoning to Grasping
The GRASP Lab proposal aims to tackle some of the challenges facing household robotics. These challenges include tracking people and planning for navigation in dynamic environments, and transferring handheld objects between robots and humans. Their contributions will include giving PR2 a tool belt to change its gripper on the fly, helping it track and navigate around people, and performing difficult two-arm tasks like opening spring-loaded doors.

* University of Southern California with the proposal Persistent and Persuasive Personal Robots (P^3R): Towards Networked, Mobile, Assistive Robotics
USC has already demonstrated teaching the PR2 basic motor skills so that it can adapt to different situations and tasks, such as pouring a cup. They will continue to expand on this work in imitation learning and building and refining skill libraries, while also doing research in human-robot interaction and self-calibration for sensors.

* University of Tokyo, Jouhou System Kougaku (JSK) Laboratory with the proposal Autonomous Motion Planning for Daily Tasks in Human Environments using Collaborating Robots
The JSK Laboratory at the University of Tokyo is one of the top humanoid robotics labs in the world. Their goal is to see robots safely and autonomously perform daily, human-like tasks such as retrieving objects and cleaning up domestic environments. They'll also be working on getting the PR2 to work together with other robots, as well as integrating the ROS, EusLisp, and OpenRAVE frameworks.

CyberWalk: Giant Omni-Directional Treadmill To Explore Virtual Worlds

CyberWalk: Giant Omni-Directional Treadmill To Explore Virtual Worlds: "Built by Italian and German researchers, it's the largest VR platform in the world"


It's a problem that has long annoyed virtual reality researchers: VR systems can create a good experience when users are observing or manipulating the virtual world (think Michael Douglas in "Disclosure") but walking is another story. Take a stroll in a virtual space and you might end up with your face against a real-world wall.

The same problem is becoming apparent in teleoperated robots. Imagine you were teleoperating a humanoid robot by wearing a sensor suit that captures all your body movements. You want to make the robot walk across a room at the remote location -- but the room you're in is much smaller. Hmm.

Researchers have built a variety of contraptions to deal with the problem. Like a huge hamster ball for people, for example.

Or a giant treadmill. The CyberWalk platform is a large 2-D omnidirectional platform that allows unconstrained locomotion, adjusting its speed and direction to keep the user always close to the center. Measuring 5 meters on a side, it's the largest VR platform in the world.

It consists of an array of synchronous linear belts. The array moves as a whole in one direction while each belt can also move in a perpendicular direction. Diagonal movement is possible by combining the two linear motions.

Built by a consortium of German, Italian, and Swiss labs, the machine currently resides at the Max Planck Institute for Biological Cybernetics, in Tübingen, Germany, where it's been in operation for over two years.

Last year at IROS, Alessandro De Luca and Raffaella Mattone from the Universita di Roma "La Sapienza," in Rome, Italy, and Paolo Robuffo Giordano and Heinrich H. Bulthoff from the Max Planck Institute for Biological Cybernetics presented details of the machine's control system.

According to the researchers, previous work on similar platforms paid little attention to control algorithms, relying on simple PID and heuristic controllers.

The Italian and German researchers came up with a kinematic model for the machine and from there they devised a control strategy. Basically the challenge is that the control system needs to adapt to changes in the user's direction and speed -- variables that it can't measure directly, so it needs to estimate them.

By precisely monitoring the position of the user on the platform using a Vicon motion-capture system, the controller computes estimates for the two variables and tries to adjust the speeds of the linear belts to keep the user close to the center -- all without abrupt accelerations.

The researchers also devised a way of using a frame of reference for the controller that varies with the user's direction. This method allowed the CyberWalk platform to provide a more natural walking experience, without making the user's legs cross when changing direction. The video above shows the results.
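A toy version of such a centering controller, counter the user's estimated velocity, add a proportional pull toward the center, and rate-limit the commanded change so accelerations stay gentle, might look like this. This is a hypothetical sketch with made-up gains, not the published CyberWalk controller:

```python
def belt_command(user_pos, user_vel_est, gain=0.5, max_accel=0.3,
                 prev_cmd=(0.0, 0.0), dt=0.01):
    """One control step for a 2-D omnidirectional treadmill.

    Hypothetical sketch: the belt speed target cancels the user's
    estimated velocity plus a proportional pull toward the platform
    center; the change per step is clamped so the commanded speed
    never accelerates abruptly.
    """
    cmd = []
    for p, v, prev in zip(user_pos, user_vel_est, prev_cmd):
        target = v + gain * p  # belt speed that would recenter the user
        step = max(-max_accel * dt, min(max_accel * dt, target - prev))
        cmd.append(prev + step)
    return tuple(cmd)

# User has drifted 1 m in +x while walking at 0.2 m/s: the belt ramps up
# gradually rather than jumping straight to the recentering speed.
cmd = belt_command((1.0, 0.0), (0.2, 0.0))
```

Run at a high rate, the rate limit is what keeps the platform's corrections imperceptible to the walker.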

The CyberWalk platform is one of two locomotion devices developed as part of the European Union-funded Project CyberWalk. The other is a small-scale ball-array platform dubbed CyberCarpet.

The Technical University of Munich, another partner in the CyberWalk consortium, designed and built both platforms. And ETH Zurich, another partner, was responsible for the VR part -- creating a 3D VR model of ancient Pompeii and implementing the motion synchronization on the head-mounted display of the human walker.

You can read the researchers' paper, "Control Design and Experimental Evaluation of the 2D CyberWalk Platform," here.

A Robot That Balances on a Ball

A Robot That Balances on a Ball: "Masaaki Kumagai has built wheeled robots, crawling robots, and legged robots. Now he's built a robot that rides on a ball"


Dr. Masaaki Kumagai, director of the Robot Development Engineering Laboratory at Tohoku Gakuin University, in Tagajo City, Japan, has built wheeled robots, crawling robots, quadruped robots, biped robots, and biped robots on roller skates.

Then one day a student approached him to suggest they build a robot that would balance on a ball.

Dr. Kumagai thought it was a wonderful idea.

The robot they built rides on a rubber-coated bowling ball, which is driven by three omnidirectional wheels. The robot can not only stand still but also move in any direction and pivot around its vertical axis.

It can work as a mobile tray to transport objects such as cocktails, and it can also serve as an omnidirectional supporting platform to help people carry heavy objects.

Such a ball-balancing design is like an inverted pendulum, and thus naturally unstable, but it offers advantages: it has a small footprint and can move in any direction without changing its orientation.

In other words, whereas a two-wheel self-balancing robot has to turn before it can drive in a different direction, a ball-riding robot can promptly drive in any direction. Try that, Segway!
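The mapping from a desired drive direction to the three wheel speeds follows standard omniwheel kinematics. This is a generic sketch: the real robot's wheel tilt against the ball and the various radii add scale factors not shown here.

```python
from math import sin, cos, pi

def wheel_speeds(vx, vy, spin=0.0):
    """Map a desired ball surface velocity (vx, vy) and a yaw rate into
    speeds for three omnidirectional wheels spaced 120 degrees apart.

    Generic omniwheel kinematics for illustration only; Kumagai's
    actual drive geometry introduces additional scale factors.
    """
    angles = (0.0, 2 * pi / 3, 4 * pi / 3)
    # Each wheel contributes the component of (vx, vy) along its
    # rolling direction, plus a common term for rotation in place.
    return tuple(-vx * sin(a) + vy * cos(a) + spin for a in angles)

# Driving straight "forward" (+y): the front wheel spins at full speed
# while the two rear wheels each counter at half speed.
a, b, c = wheel_speeds(0.0, 1.0)
```

Because any (vx, vy) maps directly to wheel speeds, no reorientation step is ever needed, which is exactly the advantage over a two-wheel balancer.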

Dr. Kumagai and student Takaya Ochiai built three robots and tested them with 10-kilogram bricks. They even made them work together to carry a large wooden frame.

Monday, May 3, 2010

Mexican children head to Taiwan

Mexican children head to Taiwan: "A group of 14 children, between six and 12 years old, will represent Mexico in the city of Kaohsiung, Taiwan, at a robotics competition called the Open International Championship Smart Movie"

A group of 14 children, between six and 12 years old, will represent Mexico in the city of Kaohsiung, Taiwan, where a robotics competition called the Open International Championship Smart Movie will be held from May 6 to 8.

The Asian robotics open will bring together girls and boys from more than 52 countries, with the goal of creating a sports-like atmosphere that introduces children and young people to science and technology in a playful way.

This year's challenge, announced by the FIRST LEGO League, the association organizing the event, is to think up and imagine new ways of transporting people, goods, and services by proposing intelligent "smart move" solutions.

Participants are expected to propose a solution that frees the planet from consumables such as gasoline, petroleum, and the derivatives used in tires, plastics, and other products, through clean transportation.

In an interview, Francisco Brito Vidales, systems coordinator at Colegio Williams, noted that the group of children representing the Federal District belongs to that school's San Jerónimo campus and calls itself the Risk Takers.

The Risk Takers, he added, will present in Taiwan a scale model of a magnetic city in which opposing magnetic poles allow cars to levitate.

He explained that with more than five years of work and dozens of hours of research and consultations with specialists, the children created 'Magnetito', the Mexican robot with which they hope to raise Mexico's standing against countries such as the United States, Japan, and Germany.

The Risk Takers are children and young people distinguished by their creativity, analytical thinking, problem solving, and ability to form a true team, commented professor Sergio Becerril, another of the coaches accompanying the group.

'They prepared not only to ensure the robot performs well in completing the missions set out in the call for entries, but also went deeper, researching a technological proposal to change the way we get around,' he added.

They also worked on the robot's design to meet parameters such as robustness, versatility, and adaptability, factors that count toward the competition score along with teamwork in problem solving, he noted.

The challenge for participants in the robotics event is to propose a project in which transportation becomes smarter, pollutes less, and moves more people and goods at a lower cost.

The group representing the Federal District will be joined by four others from Querétaro, Toluca, Guadalajara, and Monterrey.

The robot 'Magnetito', designed to complete the missions set for the competition, travels along paths without colliding with obstacles, performs precision movements, follows traced routes, and manipulates objects and deposits them accurately in small areas or specific places.

According to Francisco Brito, age 12, the Risk Takers are a group of 'normal', playful children who are above all tolerant, thoughtful, informed, supportive, open-minded communicators.