
Machine that can see!

See the rise of AVM technology since 2007:

*Follow me
http://www.youtube.com/watch?v=HTxNlOpm11U

*Walking by gates
http://www.youtube.com/watch?v=xbCpthKrL0o

*Route training and navigation by map
http://www.youtube.com/watch?v=qVz9iBazqug

See also: the AVM Navigator help page and Using the AVM plugin in RoboRealm.

Note:

What does your robot see, and how does that affect navigation?

In our case the robot uses only a limited sequence of camera images for
navigation. The AVM Navigator application simply tries to recognize images
so that it can work out the robot's location. If you show it some sequence
of images during route training (just lead your robot around in "Marker
mode"), then in automatic navigation mode the robot must be able to see the
same sequence of images, without any changes. Otherwise AVM Navigator will
not be able to recognize the images, and localization and navigation will
fail.
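The sequence-recognition idea above can be sketched in a few lines. This is only an illustration of the general approach of matching the current camera frame against stored route images (here by normalized cross-correlation); it is not the actual AVM recognition algorithm, and the function name is invented.

```python
import numpy as np

def best_route_match(frame, route_frames):
    """Return (index, score) of the stored route image most similar
    to the current frame, scored by normalized cross-correlation."""
    f = (frame - frame.mean()) / (frame.std() + 1e-9)
    best_i, best_score = -1, -2.0
    for i, stored in enumerate(route_frames):
        s = (stored - stored.mean()) / (stored.std() + 1e-9)
        score = float((f * s).mean())   # NCC, in [-1, 1]
        if score > best_score:
            best_i, best_score = i, score
    return best_i, best_score
```

If no stored image scores highly enough (because the scene has changed), localization fails, which is exactly the warning above.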
Suicidesoldier#1 (Fanatical Zealot)
Why does vision = Skynet? :confused:
How will a robot be able to sense the real world without vision?

Vision is the most important system of a robot's brain :wink:
Suicidesoldier#1 (Fanatical Zealot)
ExDxV
How will a robot be able to sense the real world without vision?

Vision is the most important system of a robot's brain :wink:


lol
Your argument is "lol"? That's a very cogent argument :rolleyes:

So you think a robot could navigate with other sensors instead of vision? :wink:
There are lots of things that have vision and haven't managed to take over the world, and we're all pretty sure are not going to take over the world. Goats, for instance. No one is particularly worried about Goatnet, even though we've known for a while that goats can see.

Vision may be a necessity, but there are plenty of other things that Skynet would need that haven't been created yet. For instance, good natural language processing, faster and more flexible learning algorithms, smaller and more mobile processors in general, stronger decryption and keybreaking algorithms, etc. Really, Skynet would probably need something along the lines of Strong AI in order to do anything significant and we are still quite far from Strong AI.

In addition to all the things Skynet would need internally, we'd then have to proceed to put networked artificial intelligences into all of our industrial, economic and military operations; a Strong AI confined to a single computer with no internet connection isn't going to be a threat.
Suicidesoldier#1 (Fanatical Zealot)

ExDxV
Your argument is "lol"? That's a very cogent argument :rolleyes:

So you think a robot could navigate with other sensors instead of vision? :wink:


I wasn't arguing but...

Well yeah, actually: ultrasonic sonar, or just flat-out sonar, comes to mind, which would give them a 3-D rendering of their environment.


Also would give them sound.

Win-win.

No color, but eh.
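For what it's worth, basic sonar ranging is just echo timing: distance = speed of sound × round-trip time / 2. A minimal sketch (the constant assumes air at roughly 20 °C):

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s, air at ~20 degrees C

def echo_distance(round_trip_s):
    """Distance to a reflector, given the echo's round-trip time in
    seconds: the pulse travels out and back, so divide by two."""
    return SPEED_OF_SOUND_AIR * round_trip_s / 2.0
```

Sweeping such a ranger across many bearings is what turns single echoes into the 3-D rendering mentioned above.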
@Suicidesoldier#1:
However, my robot can lock onto a target using only a video camera, with no other sensors:
http://www.youtube.com/watch?v=RmGg3RZ4Hy4

@Layra-chan:
You are quite right, but my experiments are a good start, and Skynet could grow from them :wink:
Suicidesoldier#1 (Fanatical Zealot)

ExDxV
@Suicidesoldier#1:
However, my robot can lock onto a target using only a video camera, with no other sensors:
http://www.youtube.com/watch?v=RmGg3RZ4Hy4

@Layra-chan:
You are quite right, but my experiments are a good start, and Skynet could grow from them :wink:


Um...

That's probably just some kid editing his video, which is fairly easy to do, but...


Ultrasonic sonar is better because you can get a 3-D mapping, so locking onto a target gives you the entire person's shape.

No weird angles, no shadowing, no low light conditions, just a pure representation of the target, and quite possibly their face.
Glorious Leader Luna (Omnipresent Cultist)
Skynet has been here all along.
Suicidesoldier#1
Um...

That's probably just some kid editing his video, which is fairly easy to do, but...

Ultrasonic sonar is better because you can get a 3-D mapping, so locking onto a target gives you the entire person's shape.

No weird angles, no shadowing, no low-light conditions, just a pure representation of the target, and quite possibly their face.


Depth cameras like the Kinect are perfectly capable of extracting 3D models from nothing but images. True, you need a light source (the Kinect actually projects a pattern, in IR), but sonar isn't much different.
Just as your light source doesn't reach everywhere, your audio source doesn't reach everywhere either.
To determine a shape with sonar you need more than one receiver, just as you need more than one viewpoint with a camera.
Also, it's difficult to 'see' detail in things whose size is on the order of your wavelength or smaller.
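The point about getting 3D from images alone can be illustrated with the standard rectified-stereo relation Z = f·B/d (depth from disparity). The numbers in the example are made up for illustration.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d,
    with focal length f in pixels, baseline B in meters, and
    disparity d in pixels."""
    if disparity_px <= 0:
        return float("inf")   # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px

# e.g. an 800 px focal length, 10 cm baseline, 40 px disparity:
# disparity_to_depth(40, 800, 0.1) -> 2.0 meters
```

The same triangulation geometry is why two viewpoints (or a camera plus a projected pattern, as in the Kinect) are needed.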
Suicidesoldier#1 (Fanatical Zealot)
Gharbad
Depth cameras like the Kinect are perfectly capable of extracting 3D models from nothing but images. True, you need a light source (the Kinect actually projects a pattern, in IR), but sonar isn't much different.
Just as your light source doesn't reach everywhere, your audio source doesn't reach everywhere either.
To determine a shape with sonar you need more than one receiver, just as you need more than one viewpoint with a camera.
Also, it's difficult to 'see' detail in things whose size is on the order of your wavelength or smaller.


Run it through at like 20,000 to 80,000 megahertz and get all the pings back at varying levels; you'll be able to penetrate various things (like ultrasound scans for babies), meaning you'll see the back of the guy's head, and even inside it, as well as the front.

From one direction you could get virtually everything inside a room; essentially, you would have it reflect off his face, then off his brain, then off the back of his skull, etc., so you would get a large amount of information.


You might have trouble from behind if you don't have anything for the sound waves to reflect off of, but you can have the back side of a person (skull, skin, etc.) reflect back.

The problem would be mapping everything out, but I think if you just made everything stack it would be easy.

Also, you could use localized, single-direction sound waves for a single image.

But since I want my robots to be able to see colors, I'd probably add cameras too.


And infrared cameras and laser rangefinders.

Also, they'd have like an advanced Adobe Reader so they could read books, etc.
Suicidesoldier#1
Run it through at like 20,000 to 80,000 megahertz and get all the pings back at varying levels; you'll be able to penetrate various things (like ultrasound scans for babies), meaning you'll see the back of the guy's head, and even inside it, as well as the front.

From one direction you could get virtually everything inside a room; essentially, you would have it reflect off his face, then off his brain, then off the back of his skull, etc., so you would get a large amount of information.

You might have trouble from behind if you don't have anything for the sound waves to reflect off of, but you can have the back side of a person (skull, skin, etc.) reflect back.

The problem would be mapping everything out, but I think if you just made everything stack it would be easy.

Also, you could use localized, single-direction sound waves for a single image.

But since I want my robots to be able to see colors, I'd probably add cameras too.

And infrared cameras and laser rangefinders.

Also, they'd have like an advanced Adobe Reader so they could read books, etc.


How do you intend to generate mechanical movement 20-80 billion times a second, and why would that penetrate matter so easily? Higher frequencies are easily attenuated by almost anything.
Easily mapping everything out? You're proposing interpreting a large number of multiple reflections; how do you determine what path the signals actually took?
Any time a wave passes through a material interface it is both reflected and transmitted. If you have multiple interfaces you get many reflected and re-reflected waves.
http://www.ndt-ed.org/EducationResources/CommunityCollege/Ultrasonics/Physics/reflectiontransmission.htm
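The reflection/transmission point is easy to quantify. At normal incidence, the fraction of acoustic intensity reflected at an interface is R = ((Z2 − Z1)/(Z2 + Z1))², where Z1 and Z2 are the acoustic impedances of the two media; the impedance values below are rough textbook figures.

```python
def intensity_reflection(z1, z2):
    """Fraction of acoustic intensity reflected at a planar interface
    between media of acoustic impedance z1 and z2, at normal incidence."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Rough acoustic impedances in rayl (kg / (m^2 * s)):
Z_AIR    = 413.0
Z_TISSUE = 1.63e6

# Nearly all the energy reflects at an air/tissue boundary, which is why
# medical ultrasound needs a coupling gel, and why "seeing inside heads"
# through open air is not so simple.
r = intensity_reflection(Z_AIR, Z_TISSUE)
```

With many stacked interfaces each boundary splits the wave again, which is exactly the multiple-reflection ambiguity raised above.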
Suicidesoldier#1 (Fanatical Zealot)
Gharbad
How do you intend to generate mechanical movement 20-80 billion times a second, and why would that penetrate matter so easily? Higher frequencies are easily attenuated by almost anything.
Easily mapping everything out? You're proposing interpreting a large number of multiple reflections; how do you determine what path the signals actually took?
Any time a wave passes through a material interface it is both reflected and transmitted. If you have multiple interfaces you get many reflected and re-reflected waves.
http://www.ndt-ed.org/EducationResources/CommunityCollege/Ultrasonics/Physics/reflectiontransmission.htm


It would take computers that can process information very quickly, a large number of very sensitive, highly specific receivers, and very powerful speakers that, of course, broadcast in most directions; so it wouldn't be easy at all.

It would just have to be tuned to work correctly, so that the other signals wouldn't interfere and the garbled signals wouldn't be complete crap.


You might even be able to use it to your advantage.

If, say, concrete garbles the signal a little in a particular way each time, you'll be able to tell it's concrete.
ExDxV:
This is a test of a new algorithm for AVM Navigator v0.7.3. The video demonstrates how the robot tries to return to a checkpoint from different positions along the learned route.



First, the robot received the command "go to the checkpoint". When the robot arrived at the checkpoint, I carried it back (several times) to different positions along the learned route. The robot noticed the changes and recognized that it had been displaced, because it had given no commands to its motors, yet changes were seen in the input image.

The robot then started looking around and localized its current position. It then simply calculated a path from its current position to the checkpoint and went there (and so forth).
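The behavior described (notice displacement, relocalize, re-plan) can be sketched as a toy simulation on a 1-D route. Every name here is invented for illustration and has nothing to do with the real AVM Navigator code.

```python
class RouteMap:
    """Toy 1-D learned route over positions 0..n-1, checkpoint at the end."""
    def __init__(self, n):
        self.checkpoint = n - 1

    def plan(self, pos, goal):
        """Positions to visit, one step at a time, from pos to goal."""
        step = 1 if goal > pos else -1
        return list(range(pos + step, goal + step, step))

def go_to_checkpoint(pos, route, displace_to=None):
    """Drive toward the checkpoint; optionally teleport the robot once
    mid-route (as in the video) and let it re-plan from the new spot."""
    while pos != route.checkpoint:
        for nxt in route.plan(pos, route.checkpoint):
            pos = nxt
            if displace_to is not None:
                # No motor command explains the changed view, so the robot
                # concludes it was moved, relocalizes, and re-plans.
                pos, displace_to = displace_to, None
                break
    return pos
```

The outer loop repeating until the checkpoint is reached is the "and so forth" in the description above.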
