
Wednesday, March 31, 2010

Could robots and smart devices help older people look after themselves?

ScienceDaily (2010-03-26) -- Researchers are taking part in a European project aimed at creating an intelligent system comprising a robot and smart sensors that can support independent living for elderly people.

Robots: The Future of Artificial Intelligence

The latest episode of the Robots podcast interviews Kristinn R. Thórisson from Reykjavik University on some of the great advances, but also some of the disappointments, of artificial intelligence, and on where he thinks AI will be used in the future. In the second part of this interview, we conclude our quest for a definition of the word "robot" with a definition by Prof. Wendelin Reich from the Swedish Collegium for Advanced Study at Uppsala University, Sweden. He defines a robot as an artificial, physically embodied 'agent tool', and gives some good reasons for this definition. For details, as well as a list of other definitions, have a look at the Robots website.

Random Robot Roundup

Nelson Bridwell points out a NASA article on the increasing autonomy of the Mars Exploration Rovers and also sent a link to an essay he wrote on NASA's future direction. If you're really interested in Mars, you may want to check out the links JPL's Cassie Bowman sent for anyone who wants to be a Martian. Closer to Earth, Mark Miller will be showing off his android creations at the National High Magnetic Field Lab in Tallahassee, FL on April 2. Mark's androids are part robot and part artwork, and they're well worth seeing if you're able to make it to the show. Travis Deyle writes, 'thought you might like this new robot from our lab.' Roschler sent us an article he's written on the Emotiv EPOC headset.

By the way, sorry for the slowdown in news lately; I've been doing a lot of work with my local robot group and it's eaten into my time for posting stories here. Hopefully I'll be able to work out a better balance of time soon. Know any other robot news, gossip, or amazing facts we should report? Send 'em our way, please. And don't forget to follow us on Twitter.

Phantom of the Operating Shuttle?

Dvice.com reports on a mysterious robotic shuttle that will be launched April 19th. This is the first time I've ever heard of such a shuttle replacement. I mean, I thought NASA dumped the idea of a shuttle completely and went for the super Apollo-type mission to go to the Moon or Mars?

So at a time when the debate over mothballing the old Space Shuttle is going ballistic, what happens? Well, it looks like the Air Force pulled a fast one and went ahead and had its own space shuttle secretly built by Boeing Phantom Works. The new autonomous robotic space shuttle is dubbed the X-37B.
Revealing a new space shuttle at this time is probably not going to help the Obama administration with all the harsh Space Agency criticism they've been getting lately. In my opinion it shames the Obama administration and NASA because neither came up with this dreamy vehicle; the Air Force had to. One could argue that NASA has limited funds, or that NASA and the Air Force are sort of two sides of the same coin, yada yada. It probably shames the Air Force too for not informing Obama they had a secret space shuttle.
Anyway, the Air Force didn't commission just any space shuttle to be made; they had made a small, efficient, robotic, autonomous space shuttle. It can go up, deploy some secret payload, and come down and land all on its own, the article says!

OK, well, details are sketchy, so it's probably not completely autonomous, but it appears to be just as much autonomously controlled by robotic equipment as the original shuttle was controlled by humans in the cockpit. That's very impressive. Awesome. So... now that such a robotic shuttle is made public and known to exist, I wonder if the Air Force will let NASA use it for non-military missions? Naw, probably not.

Bots High: Documentary on BattleBots




[Video: Bots High Teaser Trailer from Joey Daoud on Vimeo]



Joey Daoud is working on a video documentary, called Bots High, about high school teams participating in the BattleBots competition, and he needs your help. Joey writes,

I'm working on a documentary on high school BattleBots. I've been following multiple teams around since August, leading up to the National Championship. I'm trying to raise funds to film the championship with a multi-camera crew, as well as travel to San Francisco to interview the BattleBot creators and builders.

For those who don't remember, Robot Wars (1998-2004) and BattleBots (1999-2002) were much-hyped game/reality television shows featuring competitions between remote-controlled vehicles designed to look like robots. Contests consisted of massive machines destroying each other in entertaining ways. The hype eventually died and the shows were cancelled. What you may not know is that BattleBots spun off a high school league known as BOTSIQ, which still exists and attempts to add an educational aspect to the competition. The BOTSIQ championship will be held April 14-18 in Miami, Florida.

Robots: Chaos Control

Walking, swallowing, respiration and many other key functions in humans and other animals are controlled by Central Pattern Generators (CPGs). In essence, CPGs are small, autonomous neural networks that produce rhythmic outputs, usually found in animals' spinal cords rather than their brains. Their relative simplicity and obvious success in biological systems have led to some success in using CPGs in robotics. However, current systems are restricted to very simple CPGs (e.g., restricted to a single walking gait). A recent breakthrough at the BCCN at the University of Göttingen, Germany, has now made it possible to achieve 11 basic behavioral patterns (various gaits, orienting, taxis, self-protection) from a single CPG, closing in on the 10-20 different basic behavioral patterns found in a typical cockroach. The trick: work with a chaotic, rather than a stable periodic, CPG regime. For more on CPGs, listen to the latest episode of the Robots podcast on Chaos Control, which interviews Poramate Manoonpong, one of the lead researchers in Göttingen, and Alex Pitti from the University of Tokyo, who uses chaos controllers that can synchronize to the dynamics of the body they are controlling.

Tuesday, March 16, 2010

Explained: Regression analysis



Regression analysis. It sounds like a part of Freudian psychology. In reality, a regression is a seemingly ubiquitous statistical tool appearing in legions of scientific papers, and regression analysis is a method of measuring the link between two or more phenomena.

Imagine you want to know the connection between the square footage of houses and their sale prices. A regression charts such a link, in so doing pinpointing “an average causal effect,” as MIT economist Josh Angrist and his co-author Jorn-Steffen Pischke of the London School of Economics put it in their 2009 book, “Mostly Harmless Econometrics.”

To grasp the basic concept, take the simplest form of a regression: a linear, bivariate regression, which describes an unchanging relationship between two (and not more) phenomena. Now suppose you are wondering if there is a connection between the time high school students spend doing French homework, and the grades they receive. These types of data can be plotted as points on a graph, where the x-axis is the average number of hours per week a student studies, and the y-axis represents exam scores out of 100. Together, the data points will typically scatter a bit on the graph. The regression analysis creates the single line that best summarizes the distribution of points.

Mathematically, the line representing a simple linear regression is expressed through a basic equation: Y = a₀ + a₁X. Here X is hours spent studying per week, the “independent variable.” Y is the exam scores, the “dependent variable,” since — we believe — those scores depend on time spent studying. Additionally, a₀ is the y-intercept (the value of Y when X is zero) and a₁ is the slope of the line, characterizing the relationship between the two variables.

Using two slightly more complex equations, the “normal equations” for the basic linear regression line, we can plug in all the numbers for X and Y, solve for a₀ and a₁, and actually draw the line. That line represents the lowest aggregate of the squares of the vertical distances between all the points and itself; this is the “Ordinary Least Squares” (OLS) method mentioned in mountains of academic papers.

To see why OLS is logical, imagine a regression line running 6 units below one data point and 6 units above another point; it is 6 units away from the two points, on average. Now suppose a second line runs 10 units below one data point and 2 units above another point; it is also 6 units away from the two points, on average. But if we square the distances involved, we get different results: 6² + 6² = 72 in the first case, and 10² + 2² = 104 in the second case. So the first line yields the lower figure — the “least squares” — and is a more consistent reduction of the distance from the data points. (Additional methods, besides OLS, can find the best line for more complex forms of regression analysis.)
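To make the mechanics concrete, here is a minimal Python sketch that fits such a line by ordinary least squares; the study-hours and exam-score numbers are invented for illustration, and numpy's least-squares solver stands in for working the normal equations by hand.

import numpy as np

# Hypothetical data: weekly hours of French homework (X) and exam scores (Y).
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
scores = np.array([52, 58, 61, 66, 71, 74, 80, 85], dtype=float)

# Design matrix [1, X], so the solver returns a0 (intercept) and a1 (slope).
A = np.column_stack([np.ones_like(hours), hours])

# Ordinary Least Squares: minimizes the sum of squared vertical distances
# between the data points and the fitted line Y = a0 + a1 * X.
(a0, a1), residuals, _, _ = np.linalg.lstsq(A, scores, rcond=None)

print(f"fitted line: Y = {a0:.2f} + {a1:.2f} * X")
print(f"sum of squared residuals: {residuals[0]:.2f}")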

In turn, the typical distance between the line and all the points (sometimes called the “standard error”) indicates whether the regression analysis has captured a relationship that is strong or weak. The closer a line is to the data points, overall, the stronger the relationship.

Regression analysis, again, establishes a correlation between phenomena. But as the saying goes, correlation is not causation. Even a line that fits the data points closely may not say something definitive about causality. Perhaps some students do succeed in French class because they study hard. Or perhaps those students benefit from better natural linguistic abilities, and they merely enjoy studying more, but do not especially benefit from it. Perhaps there would be a stronger correlation between test scores and the total time students had spent hearing French spoken before they ever entered this particular class. The tale that emerges from good data may not be the whole story.

So it still takes critical thinking and careful studies to locate meaningful cause-and-effect relationships in the world. But at a minimum, regression analysis helps establish the existence of connections that call for closer investigation.

Friday, March 5, 2010

Explained: P vs. NP



Science and technology journalists pride themselves on the ability to explain complicated ideas in accessible ways, but there are some technical principles that we encounter so often in our reporting that paraphrasing them or writing around them begins to feel like missing a big part of the story. So in a new series of articles called "Explained," MIT News Office staff will explain some of the core ideas in the areas they cover, as reference points for future reporting on MIT research.

In the 1995 Halloween episode of The Simpsons, Homer Simpson finds a portal to the mysterious Third Dimension behind a bookcase, and desperate to escape his in-laws, he plunges through. He finds himself wandering across a dark surface etched with green gridlines and strewn with geometric shapes, above which hover strange equations. One of these is the deceptively simple assertion that P = NP.

In fact, in a 2002 poll, 61 mathematicians and computer scientists said that they thought P probably didn’t equal NP, to only nine who thought it did — and of those nine, several told the pollster that they took the position just to be contrary. But so far, no one’s been able to decisively answer the question one way or the other. Frequently called the most important outstanding question in theoretical computer science, the equivalency of P and NP is one of the seven problems that the Clay Mathematics Institute will give you a million dollars for proving — or disproving. Roughly speaking, P is a set of relatively easy problems, and NP is a set of what seem to be very, very hard problems, so P = NP would imply that the apparently hard problems actually have relatively easy solutions. But the details are more complicated.

Computer science is largely concerned with a single question: How long does it take to execute a given algorithm? But computer scientists don’t give the answer in minutes or milliseconds; they give it relative to the number of elements the algorithm has to manipulate.

Imagine, for instance, that you have an unsorted list of numbers, and you want to write an algorithm to find the largest one. The algorithm has to look at all the numbers in the list: there’s no way around that. But if it simply keeps a record of the largest number it’s seen so far, it has to look at each entry only once. The algorithm’s execution time is thus directly proportional to the number of elements it’s handling — which computer scientists designate N. Of course, most algorithms are more complicated, and thus less efficient, than the one for finding the largest number in a list; but many common algorithms have execution times proportional to N², or N times the logarithm of N, or the like.
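As a tiny sketch of that linear-time scan (the list of numbers is made up):

# Single pass over the list: each of the N elements is inspected exactly once,
# so the running time grows in direct proportion to N.
def largest(numbers):
    best = numbers[0]
    for x in numbers[1:]:
        if x > best:
            best = x
    return best

print(largest([7, 42, 3, 19, 42, 8]))   # 42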

A mathematical expression that involves N’s and N²s and N’s raised to other powers is called a polynomial, and that’s what the “P” in “P = NP” stands for. P is the set of problems whose solution times are proportional to polynomials involving N's.

Obviously, an algorithm whose execution time is proportional to N³ is slower than one whose execution time is proportional to N. But such differences dwindle to insignificance compared to another distinction, between polynomial expressions — where N is the number being raised to a power — and expressions where a number is raised to the Nth power, like, say, 2^N.

If an algorithm whose execution time is proportional to N takes a second to perform a computation involving 100 elements, an algorithm whose execution time is proportional to N³ takes almost three hours. But an algorithm whose execution time is proportional to 2^N takes 300 quintillion years. And that discrepancy gets much, much worse the larger N grows.

NP (which stands for nondeterministic polynomial time) is the set of problems whose solutions can be verified in polynomial time. But as far as anyone can tell, many of those problems take exponential time to solve. Perhaps the most famous problem in NP, for example, is finding prime factors of a large number. Verifying a solution just requires multiplication, but solving the problem seems to require systematically trying out lots of candidates.
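A rough sketch of that asymmetry, using a deliberately tiny semiprime (real cryptographic numbers have hundreds of digits, which is what makes the brute-force search hopeless):

# Verifying a claimed factorization takes a single multiplication; finding a
# factor by trial division means testing candidates one after another.
def verify_factorization(n, p, q):
    return p > 1 and q > 1 and p * q == n      # quick, polynomial-time check

def find_factor(n):
    d = 2
    while d * d <= n:                          # exhaustive trial division
        if n % d == 0:
            return d
        d += 1
    return None                                # n is prime

n = 2027 * 2029
print(verify_factorization(n, 2027, 2029))     # True, almost instantly
print(find_factor(n))                          # 2027, after roughly 2,000 trial divisions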

So the question “Does P equal NP?” means “If the solution to a problem can be verified in polynomial time, can it be found in polynomial time?” Part of the question’s allure is that the vast majority of NP problems whose solutions seem to require exponential time are what’s called NP-complete, meaning that a polynomial-time solution to one can be adapted to solve all the others. And in real life, NP-complete problems are fairly common, especially in large scheduling tasks. The most famous NP-complete problem, for instance, is the so-called traveling-salesman problem: given N cities and the distances between them, can you find a route that hits all of them but is shorter than … whatever limit you choose to set?
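A small sketch makes that contrast concrete: checking one proposed route against the limit takes a handful of additions, while an exhaustive solver may have to wade through up to N! orderings. The five-city distance table below is made up for illustration.

from itertools import permutations

# Hypothetical symmetric distance table for 5 cities.
D = [
    [0, 12, 10, 19, 8],
    [12, 0, 3, 7, 2],
    [10, 3, 0, 6, 20],
    [19, 7, 6, 0, 4],
    [8, 2, 20, 4, 0],
]

def route_length(route):
    return sum(D[route[i]][route[i + 1]] for i in range(len(route) - 1))

def verify(route, limit):
    # Checking one candidate route is fast: a visit-all check plus N-1 additions.
    return len(set(route)) == len(D) and route_length(route) < limit

def solve(limit):
    # Brute force tries route after route -- exponential in the number of cities.
    for route in permutations(range(len(D))):
        if route_length(route) < limit:
            return route
    return None

print(verify((0, 4, 1, 2, 3), 30))   # True: this route has length 19
print(solve(30))                     # (0, 1, 2, 3, 4), length 25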

Given that P probably doesn’t equal NP, however — that efficient solutions to NP problems will probably never be found — what’s all the fuss about? Michael Sipser, the head of the MIT Department of Mathematics and a member of the Computer Science and Artificial Intelligence Lab’s Theory of Computation Group (TOC), says that the P-versus-NP problem is important for deepening our understanding of computational complexity.

“A major application is in the cryptography area,” Sipser says, where the security of cryptographic codes is often ensured by the complexity of a computational task. The RSA cryptographic scheme, which is commonly used for secure Internet transactions — and was invented at MIT — “is really an outgrowth of the study of the complexity of doing certain number-theoretic computations,” Sipser says.

Similarly, Sipser says, “the excitement around quantum computation really boiled over when Peter Shor” — another TOC member — “discovered a method for factoring numbers on a quantum computer. Peter's breakthrough inspired an enormous amount of research both in the computer science community and in the physics community.” Indeed, for a while, Shor’s discovery sparked the hope that quantum computers, which exploit the counterintuitive properties of extremely small particles of matter, could solve NP-complete problems in polynomial time. But that now seems unlikely: the factoring problem is actually one of the few hard NP problems that is not known to be NP-complete.

Sipser also says that “the P-versus-NP problem has become broadly recognized in the mathematical community as a mathematical question that is fundamental and important and beautiful. I think it has helped bridge the mathematics and computer science communities.”

But if, as Sipser says, “complexity adds a new wrinkle on old problems” in mathematics, it’s changed the questions that computer science asks. “When you’re faced with a new computational problem,” Sipser says, “what the theory of NP-completeness offers you is, instead of spending all of your time looking for a fast algorithm, you can spend half your time looking for a fast algorithm and the other half of your time looking for a proof of NP-completeness.”

Sipser points out that some algorithms for NP-complete problems exhibit exponential complexity only in the worst-case scenario and that, in the average case, they can be more efficient than polynomial-time algorithms. But even there, NP-completeness “tells you something very specific,” Sipser says. “It tells you that if you’re going to look for an algorithm that’s going to work in every case and give you the best solution, you’re doomed: don’t even try. That’s useful information.”

Context is ev … well, something, anyway


Tuesday, March 2, 2010

Detecting Changes in Images

Detecting changes across a set of images of the same scene is often an important task in many mobile-robotics applications, where obstacle detection is carried out with different sensors, among them the digital camera.

Within image processing, image arithmetic is a simple way to detect changes across a set of views through operations such as division or the absolute difference between their pixels.

To illustrate this, consider the following two images, labeled Environment Image and Memory Image.





Using OpenCV as the image-processing tool, the pixel-by-pixel division of two images can be implemented with the function:

cvDiv(CvArr* Memory, CvArr* Environment, CvArr* Result);
where Result = Memory / Environment.



After performing the pixel-by-pixel division, the changes between the images are evident at the data level; visually, however, the result (Result cvDiv) is not easy to appreciate. For this reason, if the Result cvDiv image is multiplied by a scalar, in this case 50, the result improves considerably (see Result cvScale).



Multiplying an image by a scalar is done with OpenCV's cvScale() function.

In the Result cvScale image, the pixels with the highest intensity levels, that is, those approaching 255, correspond to the objects that are not present in the Memory image.

Additionally, the absolute difference between two images also makes it possible to determine which pixels change from one view to the other. In this case, computing the absolute difference between Environment Image and Memory Image gives the following result:



The absolute difference between two images shows not only the objects that are absent from the Memory image, but also those that are absent from the Environment image.

Thus, when obstacles must be detected from images, dividing them provides a first approximation by highlighting only the objects that are not present in the Memory image.
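For reference, a rough equivalent of this pipeline using OpenCV's Python bindings might look like the sketch below; the file names memory.png and environment.png are hypothetical stand-ins for the two views shown above.

import cv2
import numpy as np

# Load the two grayscale views (hypothetical file names).
memory = cv2.imread("memory.png", cv2.IMREAD_GRAYSCALE)
environment = cv2.imread("environment.png", cv2.IMREAD_GRAYSCALE)

# Pixel-by-pixel division (Result = Memory / Environment), then multiplication
# by a scalar (50) so the changes become visible, mirroring cvDiv + cvScale.
division = cv2.divide(memory.astype(np.float32), environment.astype(np.float32))
division_scaled = cv2.convertScaleAbs(division, alpha=50)

# The absolute difference highlights objects missing from either view.
abs_diff = cv2.absdiff(memory, environment)

cv2.imwrite("result_div.png", division_scaled)
cv2.imwrite("result_absdiff.png", abs_diff)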

Explained: The Discrete Fourier Transform



In 1811, Joseph Fourier, the 43-year-old prefect of the French district of Isère, entered a competition in heat research sponsored by the French Academy of Sciences. The paper he submitted described a novel analytical technique that we today call the Fourier transform, and it won the competition; but the prize jury declined to publish it, criticizing the sloppiness of Fourier’s reasoning. According to Jean-Pierre Kahane, a French mathematician and current member of the academy, as late as the early 1970s, Fourier’s name still didn’t turn up in the major French encyclopedia the Encyclopædia Universalis.

Now, however, his name is everywhere. The Fourier transform is a way to decompose a signal into its constituent frequencies, and versions of it are used to generate and filter cell-phone and Wi-Fi transmissions, to compress audio, image, and video files so that they take up less bandwidth, and to solve differential equations, among other things. It’s so ubiquitous that “you don’t really study the Fourier transform for what it is,” says Laurent Demanet, an assistant professor of applied mathematics at MIT. “You take a class in signal processing, and there it is. You don’t have any choice.”

The Fourier transform comes in three varieties: the plain old Fourier transform, the Fourier series, and the discrete Fourier transform. But it’s the discrete Fourier transform, or DFT, that accounts for the Fourier revival. In 1965, the computer scientists James Cooley and John Tukey described an algorithm called the fast Fourier transform, which made it much easier to calculate DFTs on a computer. All of a sudden, the DFT became a practical way to process digital signals.

To get a sense of what the DFT does, consider an MP3 player plugged into a loudspeaker. The MP3 player sends the speaker audio information as fluctuations in the voltage of an electrical signal. Those fluctuations cause the speaker drum to vibrate, which in turn causes air particles to move, producing sound.

An audio signal’s fluctuations over time can be depicted as a graph: the x-axis is time, and the y-axis is the voltage of the electrical signal, or perhaps the movement of the speaker drum or air particles. Either way, the signal ends up looking like an erratic wavelike squiggle. But when you listen to the sound produced from that squiggle, you can clearly distinguish all the instruments in a symphony orchestra, playing discrete notes at the same time.

That’s because the erratic squiggle is, effectively, the sum of a number of much more regular squiggles, which represent different frequencies of sound. “Frequency” just means the rate at which air molecules go back and forth, or a voltage fluctuates, and it can be represented as the rate at which a regular squiggle goes up and down. When you add two frequencies together, the resulting squiggle goes up where both the component frequencies go up, goes down where they both go down, and does something in between where they’re going in different directions.

The DFT does mathematically what the human ear does physically: decompose a signal into its component frequencies. Unlike the analog signal from, say, a record player, the digital signal from an MP3 player is just a series of numbers, representing very short samples of a real-world sound: CD-quality digital audio recording, for instance, collects 44,100 samples a second. If you extract some number of consecutive values from a digital signal — 8, or 128, or 1,000 — the DFT represents them as the weighted sum of an equivalent number of frequencies. (“Weighted” just means that some of the frequencies count more than others toward the total.)
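A short sketch shows the idea in practice: build a signal out of two known tones and let the DFT (here via numpy's FFT routine) recover their frequencies. The sample rate, signal length and tone frequencies are arbitrary choices for illustration.

import numpy as np

# 800 consecutive samples of a signal made of a 440 Hz tone and a quieter 1000 Hz tone.
sample_rate = 8000                          # samples per second
t = np.arange(800) / sample_rate
signal = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

weights = np.fft.rfft(signal)                          # weight of each component frequency
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)  # the frequency each weight belongs to

# The two largest weights sit at the tones we mixed in.
strongest = np.argsort(np.abs(weights))[-2:]
print(np.sort(freqs[strongest]))            # [ 440. 1000.]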

The application of the DFT to wireless technologies is fairly straightforward: the ability to break a signal into its constituent frequencies lets cell-phone towers, for instance, disentangle transmissions from different users, allowing more of them to share the air.

The application to data compression is less intuitive. But if you extract an eight-by-eight block of pixels from an image, each row or column is simply a sequence of eight numbers — like a digital signal with eight samples. The whole block can thus be represented as the weighted sum of 64 frequencies. If there’s little variation in color across the block, the weights of most of those frequencies will be zero or near zero. Throwing out the frequencies with low weights allows the block to be represented with fewer bits but little loss of fidelity.
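Here is a rough sketch of that thresholding idea, applied with the two-dimensional DFT to a made-up, fairly smooth 8-by-8 block (JPEG itself uses a close cousin of the DFT, the discrete cosine transform):

import numpy as np

# A gentle diagonal gradient standing in for a block with little color variation.
block = 100.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))

weights = np.fft.fft2(block)                 # 64 frequency weights for the block
threshold = 0.01 * np.abs(weights).max()     # drop weights below 1% of the largest
kept = np.where(np.abs(weights) >= threshold, weights, 0)

print(np.count_nonzero(kept), "of 64 weights kept")
reconstructed = np.fft.ifft2(kept).real      # rebuild the block from the kept weights
print("largest pixel error:", float(np.abs(block - reconstructed).max()))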

Demanet points out that the DFT has plenty of other applications, in areas like spectroscopy, magnetic resonance imaging, and quantum computing. But ultimately, he says, “It’s hard to explain what sort of impact Fourier’s had,” because the Fourier transform is such a fundamental concept that by now, “it’s part of the language.”

Monday, March 1, 2010

Explained: Linear and nonlinear systems



Spend some time browsing around the web site of MIT’s Computer Science and Artificial Intelligence Laboratory, and you’ll find hundreds if not thousands of documents with titles like “On Modeling Nonlinear Shape-and-Texture Appearance Manifolds” and “Non-linear Drawing systems,” or, on the contrary, titles like “Packrat Parsing: Simple, Powerful, Lazy, Linear Time” and “Linear-Time-Encodable and List-Decodable Codes.”

The distinction between linear and nonlinear phenomena is everywhere in the sciences and engineering. But what exactly does it mean?

Suppose that, without much effort, you can toss a tennis ball at about 20 miles per hour. Now suppose that you’re riding a bicycle at 10 miles per hour and toss a tennis ball straight ahead. The ball will travel forward at 30 miles per hour. Linearity is, essentially, the idea that combining two inputs — like the velocity of your arm and the velocity of the bike — will yield the sum of their respective outputs — the velocity of the ball.

Now suppose that, instead of tossing a tennis ball, you toss a paper airplane. Depending on the airplane’s design, it might sail straight ahead, or it might turn loops. Some paper planes seem to behave more erratically the harder you throw them: the bike’s added velocity might make it almost impossible to get the plane to do anything predictable. That’s because airflow over a paper plane’s wings can be very nonlinear.

If the bicycle had built-in sensors and an onboard computer, it could calculate the velocity of the tennis ball in a fraction of a second. But it could never hope to calculate all the airflows over the paper plane’s wing in time to do anything useful. “I think that it’s a reasonable statement that we mostly understand linear phenomena,” says Pablo Parrilo, the Finmeccanica Career Development Professor of Engineering at MIT’s Laboratory for Information and Decision Systems.

To make the distinction between linearity and nonlinearity a bit more precise, recall that a mathematical equation can be thought of as a function — something that maps inputs to outputs. The equation y = x, for instance, is equivalent to a function that takes as its input a value for x and produces as its output a value for y. The same is true of y = x².

The equation y = x is linear because adding together inputs yields the sum of their respective outputs: an input of 1 yields 1, an input of 2 yields 2, and an input of 1 + 2 yields 1 + 2 = 3. But that’s not true of y = x²: if x is 1, y is 1; if x is 2, y is 4; but if x is 3, y is not 5.
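A quick numerical check of that additivity test, as a sketch:

# The identity map y = x passes the additivity test; y = x**2 does not.
def is_additive(f, x1, x2):
    return f(x1) + f(x2) == f(x1 + x2)

print(is_additive(lambda x: x, 1, 2))        # True:  1 + 2 == 3
print(is_additive(lambda x: x * x, 1, 2))    # False: 1 + 4 != 9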

This example illustrates the origin of the term “linear”: the graph of y = x is a straight line, while the graph of y = x² is a curve. But the basic definition of linearity holds for much more complicated equations, such as the differential equations used in engineering to describe dynamic systems.

While linear functions are easy enough to define, the term “nonlinear” takes in everything else. “There’s this famous quote — I’m not sure who said it first — that the theory of nonlinear systems is like a theory of non-elephants,” Parrilo says. “It’s impossible to build a theory of nonlinear systems, because arbitrary things can satisfy that definition.” Because linear equations are so much easier to solve than nonlinear ones, much research across a range of disciplines is devoted to finding linear approximations of nonlinear phenomena.

Russ Tedrake, for example, the X Consortium Associate Professor of Electrical Engineering and Computer Science at MIT, has adapted theoretical work done by Parrilo to create novel control systems for robots. A walking robot’s gait could be the result of a number of mechanical systems working together in a nonlinear way. The collective forces exerted by all those systems might be impossible to calculate on the fly. But within a narrow range of starting conditions, a linear equation might describe them well enough for practical purposes. Parrilo’s theoretical tools allow Tedrake to determine how well a given linear approximation will work within how wide a range of starting conditions. His control system thus consists of a whole battery of linear control equations, one of which is selected depending on the current state of the robot.
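As a toy sketch of that last point, a tangent line describes y = x² well near the point where it is taken, but the approximation degrades outside that narrow range of conditions:

# Compare the nonlinear map y = x**2 with its tangent-line approximation at x = 1.
def nonlinear(x):
    return x ** 2

def linear_approx(x):        # tangent line to x**2 at x = 1: y = 2x - 1
    return 2 * x - 1

for x in [0.9, 1.0, 1.1, 2.0, 5.0]:
    error = abs(nonlinear(x) - linear_approx(x))
    print(f"x = {x:4.1f}: nonlinear = {nonlinear(x):6.2f}, linear = {linear_approx(x):6.2f}, error = {error:6.2f}")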