Wednesday, 30 October 2013

Structure Sensor 3D Scanner Works with New iPad Air and iPad Mini

For better or for worse, you can get a 3D printer from dozens of manufacturers these days, for as little as $200 (€145) or many, many times more if you want. But 3D scanners are a different story. The first relatively affordable model only came to market earlier this year.

But things are evolving fast and there's an obvious need for such devices. That explains the phenomenal success of the Structure Sensor 3D scanner for the iPad.

Occipital, the company behind the Structure Sensor, put the device on Kickstarter, where it met its funding goal of $100,000 (€72,650) in just 3.5 hours. At this point, the campaign has raised over $1.1 million (€800,000) and has three more days to go. $349 (€254) will get you the Structure Sensor for the iPad or, alternatively, the "Hacker Kit," which makes it possible to use the scanner with any computer you want.

The Hacker Kit comes with open source drivers for the device and CAD models, so you can adapt the scanner to any platform and device. You can get both kits for $379 (€275) if you hurry; the Kickstarter campaign only runs until November 1. What's unique about the Structure Sensor is that it doesn't look like any 3D scanner before it. That's not just a matter of aesthetics: because you can mount the scanner on a portable device, you can take it with you everywhere you go. You don't have to bring an object into your workspace to scan it.

"With the Structure Sensor attached to your mobile device, you can walk around the world and instantly capture it in a digital form. This means you can capture 3D maps of indoor spaces and have every measurement in your pocket," Occipital explained on the Kickstarter page. "You can instantly capture 3D models of objects and people for import into CAD and for 3D printing. You can play mind-blowing augmented reality games where the real world is your game world," it added.

And, if you were worried that the scanner wouldn't work with the new iPads Apple just announced, the company has already adapted the device to the new design of the iPad Air and the new iPad Mini.

Source article: Softpedia 
More info at Kickstarter

Tuesday, 29 October 2013

Leopoly and Leap Motion are bringing sci-fi movie-like virtual sculpting

Leopoly allows users to grab any 3D object in space and simply sculpt it with their hands, as if working with virtual pottery

We are proud to announce that Leopoly and Leap Motion are fully compatible! Leopoly, the web-based, social 3D sculpting app, now offers full Leap Motion compatibility. Leap Motion's hand and finger motion sensor brings a revolutionary new approach to 3D design in Leopoly, requiring no input device: only our hands are needed to touch and shape any object in space. Take a look for yourself!


What is Leap Motion?
The Leap Motion Controller senses your hands and fingers and follows their every move, letting them work in the wide-open space between you and your computer. You can do almost anything without touching anything. It's a tiny device that just might change the way you use technology.

How to use it?
0. Move your two hands over the Leap to activate it and to zoom in/out
1. Pick up a pen or any tool to use as a pointing device to sculpt or paint
2. Hold "c" to calibrate the tool (don't release it until you have found the best position for you)
3. Hold "g" while moving the pen over the Leap Motion to move the object or select tools on the radial menu
4. Hold "h" while moving the pen over the Leap Motion to sculpt or paint
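As a rough sketch (not Leopoly's actual code; the mode names and handlers here are invented for illustration), the hold-a-key scheme above boils down to a small state machine in which the currently held key selects what the tracked pen position drives:

```python
# Hypothetical sketch of the modifier-key scheme described above.
# The key bindings ("c", "g", "h") come from the instructions; all
# state names and handler signatures are assumptions.

IDLE, CALIBRATE, NAVIGATE, SCULPT = "idle", "calibrate", "navigate", "sculpt"

KEY_TO_MODE = {"c": CALIBRATE, "g": NAVIGATE, "h": SCULPT}

class PenModes:
    def __init__(self):
        self.mode = IDLE

    def key_down(self, key):
        # Holding a key switches what the pen position controls.
        self.mode = KEY_TO_MODE.get(key, self.mode)

    def key_up(self, key):
        # Releasing the key drops back to the idle (tracking-only) state.
        if KEY_TO_MODE.get(key) == self.mode:
            self.mode = IDLE

    def on_pen_move(self, x, y, z):
        if self.mode == CALIBRATE:
            return ("store calibration offset", (x, y, z))
        if self.mode == NAVIGATE:
            return ("move object / pick from radial menu", (x, y, z))
        if self.mode == SCULPT:
            return ("apply sculpt/paint brush", (x, y, z))
        return ("ignore", (x, y, z))

modes = PenModes()
modes.key_down("h")                 # hold "h": pen now sculpts
action, _ = modes.on_pen_move(0.1, 0.2, 0.3)
```

The point of the state machine is that the pen position stream never changes; only its interpretation does, which matches the "hold a key while moving the pen" workflow described above.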

What is Leopoly?
The idea behind the software is to offer the easiest and most fun 3D sculpting tool and experience. This new way of virtual sculpting comes packed with new and exciting features, such as the concept of evolutionary modeling: all users can share their creative designs and further shape the pieces of art they like.

For users who fancy an object to put on their shelf, Leopoly is the easiest, most fun and most intuitive tool for preparing files for 3D printing. Designed objects can be printed directly through the software.
Enjoy it online and free!

Color your 3D world
The app’s latest feature is real-time coloring! Selected objects from the library can be taken to a whole new level by coloring them. The colored object can be exported to almost any 3D program, or even to games. This new feature is even compatible with color 3D printing!


About Leonar3Do International
Leonar3Do International is a 3D tech startup founded in Budapest, Hungary in 2010 as a private company. With its products now sold around the world, Leonar3Do has become a pioneering, award-winning provider of 3D solutions aimed at the educational, healthcare and gaming industries and the general business market. For more information about Leonar3Do, visit www.leopoly.com and www.Leonar3Do.com.

Could a simple webcam be enough for gesture control in VR?

Extreme Reality is on a mission to ‘motionise’ games, but its technology has the potential to offer virtual reality gesture control and motion tracking without expensive hardware.

After a stint at Gamescom, Extreme Reality is now on the road to commercialising software that has been in its labs since 2005. The Israel-based software company aims to let games developers ‘motionise’ their games without the end user requiring any special hardware other than a webcam. On September 30th, the company released an SDK that provides full-body motion analysis via a webcam.
The company’s USP is being “the only company to provide full-body, software-based, motion analysis and control to any computing device or operating system via a standard camera.”

Talking to 3D Focus, Asaf Barzilay, VP Products and R&D, explains how it works:
“We have a platform-agnostic solution that can be easily and widely deployed on a standard 2D webcam, which only needs to be VGA or above. Based on the RGB image we receive, our algorithm analyzes what the user is doing in xyz coordinates in every frame, and this information is then used to create a live 3D virtual skeleton of the user in real time. It detects the joints, head, centre of mass, elbows, shoulders, ribs etc. at centimetre accuracy. The xyz coordinates of each of these joints can be very easily compared to the previous frames in order to get really quick and subtle calculations of what the user did. From there we can take this technology into various different applications, not just games; we have also been looking at gait analysis, interactive experiences in museums, marketing, advergames, brand promotion and interaction of users in stores.”
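The frame-to-frame comparison Barzilay describes amounts to simple finite differencing of joint positions. A minimal illustrative sketch, with an assumed data layout (this is not Extreme Reality's SDK):

```python
# Hypothetical sketch: derive per-joint motion from two consecutive
# skeleton frames, as described in the quote above. The frame layout
# (joint name -> xyz in centimetres) is an assumption.

def joint_deltas(prev_frame, curr_frame, dt):
    """Per-joint displacement (cm) and speed (cm/s) between two frames."""
    motion = {}
    for joint, (x1, y1, z1) in curr_frame.items():
        x0, y0, z0 = prev_frame[joint]
        dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
        dist = (dx * dx + dy * dy + dz * dz) ** 0.5
        motion[joint] = {"delta": (dx, dy, dz), "speed": dist / dt}
    return motion

# Two frames from a 30 fps webcam: the head is still, the right elbow moves.
frame_a = {"head": (0.0, 170.0, 100.0), "elbow_r": (30.0, 120.0, 95.0)}
frame_b = {"head": (0.0, 170.0, 100.0), "elbow_r": (33.0, 124.0, 95.0)}
motion = joint_deltas(frame_a, frame_b, dt=1 / 30)
```

Comparing each joint against the previous frame like this is what lets the software classify gestures quickly: a 5 cm elbow move within one 30 fps frame, for instance, already implies a fast swing.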

Leap Motion may appear to be an obvious hardware competitor, but Barzilay believes motion control with devices close enough to touch does not make sense…

“We have been focussing on the domain where the user is further away from the camera and cannot touch the devices, about 1.5 metres away or more (up to 5 metres according to website). We have been checking some interaction with smartphones where the user is very close to the phone or with laptops where the user is half a metre away but at the end of the day the user is still able to touch the device." 

So how does Extreme Motion compare in accuracy to the Microsoft Kinect?

“The end user experience is very similar to the Kinect but without the hardware cost,” said Barzilay. “We see a person standing in front of the camera, moving freely, being detected and analysed. Every centimetre of motion is detected. We have taken a few games titles initially built for the Kinect and managed to replace the physical system with our Extreme Motion software. The games play the same, but they can now run on every PC and every iPad.”

One of the games Barzilay is referring to is ProRiders Extreme Edition by VTree Entertainment.  It has been converted from its original game format using traditional keyboard and mouse gameplay to full body motion control. The game uses a player’s body movements to drive the game play, riding a snowboard down snow covered mountains or doing their favorite tricks at mountain terrain parks, all while collecting coins and achievements.

“All of a sudden, instead of it being a very standard keyboard game, it was a complete motion game. We call it ‘motionising’ the game,” said Barzilay.

Click here for the demo video

With virtual reality being very much the buzzword right now, it is not surprising that the company sees integration of Extreme Reality software having potential in a virtual environment. 

“Virtual reality still needs some user interaction. Imagine a webcam RGB image and in front of you are four buttons which the user can touch with his virtual hands – This is something we are already working on.” said Barzilay.
He also explained that the technology could potentially be used in the future for avatar creation. So, in theory, as long as a user is facing a webcam, that information could be transmitted across the world to recreate an avatar in a virtual world, seen by a user wearing a head-mounted display.

The company’s revenue model is planned to be twofold: a revenue share with app developers or, if a game is not chargeable to the end user (an advergame, for example), a license incorporated into the production budget.


There are several games available, the most recent being the first iPad game – Go Dance by Sega. Other games include the (incredibly difficult!) fitness/music game Beat Boxer, the sports games ProRiders Snow Board and Top Smash, plus the free game Pandamania. A new racing game is expected to launch soon.

The company is currently running the Extreme iPad Challenge 2013 – a contest for game developers to promote motion-controlled games for iPad devices. The contest is open to game developers who wish to “motionize” their existing or new iPad games. The submission deadline is November 15th, with a top prize of $10,000 and free marketing on offer.

Visit http://www.xtr3d.com for more!
Source article from 3dfocus

Monday, 30 September 2013

Go 3D Print a Dinosaur!


Chances are you’ve already watched every lecture on paleontology on the internet, but, in case you missed it, I’ll point you towards a recent talk given by Dr. Kenneth Lacovara, Associate Professor in the Biodiversity Department of Drexel University. In it, Dr. Lacovara explains how paleontology is still very much the same science it has always been: digging in uncomfortable climates to excavate fossils of long-dead specimens for later study in a lab. He also points out, though, that one technology has very much pushed the science into the 21st century. (Hint: it begins with a 3 and ends with ‘inting’.)

After Dr. Lacovara and his team have spent grueling hours in unforgiving circumstances to drag an old dino bone from the Earth, there’s the trouble of examining it and displaying it in a way that doesn’t damage the quality of the specimen. With 3D scanners and CT machines, however, Lacovara can create digital copies of the Jurassic parts for a variety of applications. Not only do 3D models of the files make for non-degradable representations of the skeletons he uncovers, but such models can be used to simulate evolutionary iterations of dinosaur physiology.  Paleontologists can simulate which dinosaur muscular systems, for instance, will survive under what circumstances and pass on their genes to the next generations so that we might project the course of evolution. Below is an example, which Lacovara discusses in detail in his lecture, of a reconstructed Thoracosaurus neocesariensis (sort of an old-timey crocodile) that moves based on programmed muscle mechanics, as opposed to frame-by-frame manipulation of individual body parts on the part of an animator: http://vimeo.com/14750657

DIGITAL PALEOART: Reconstruction and Restoration from Laser-Scanned Fossils from Evan Boucher on Vimeo.

Of course, digital models have some disadvantages because they don’t exist in the tangible world. 3D printing the models, thus, creates a method for testing the bones in ways that a virtual environment won’t allow. Lacovara pointed out that paleontologists can test out real world mechanics by 3D printing replicas of dinosaur parts and assembling robotic structures out of rubber bands, glue, and processors. They can then test how a dinosaur may have behaved in an actual physical environment.
Dr. Lacovara 3D printing dinosaur bones

3D printing, then, gives experts access to physiological information that would not be available with authentic dinosaur bones, either out of fear of destroying original evidence or because real dinosaur bones are many times larger in size than their 3D-printed counterparts. And, since the models can be transmitted digitally, specimens salvaged and scanned by one team on one side of the world can be sent to another team on the other side to allow for cross-institutional and interdisciplinary collaboration. Of course, it was possible to construct dino replicas in the past, by creating moulds of the original specimens, but, as we’ve learned about casting, moulds don’t always capture the fine detail that 3D-printed models do and will break with time. If museums, on the other hand, have access to a 3D printer, they can print themselves multiple models of varying size, without relying on huge shipping costs and the need to share actual specimens with other institutions.

What’s left then? If we can already print accurate, animatronic replicas of dinosaurs, then my imagination is, naturally, wandering to the process of combining the technology with bioprinting and cloning. That way, we can have semi-organic dinobots ready to rampage small islands or even to fly into space and fight planet-eating robots voiced by Orson Welles. No?

For more realistic interpretations of how 3D printing and paleontology go hand-in-hand, watch Dr. Lacovara’s lecture below: http://vimeo.com/74973260

Wednesday, 25 September 2013

New Device to Revolutionize Gaming in Virtual Realities


Head-mounted devices that display three-dimensional images according to one's viewing direction, allowing users to lose themselves in computer-generated worlds, are already commercially available. However, it has not yet been possible to walk through these virtual realities without at some point running into the very real walls of the room. A team of researchers at the Vienna University of Technology has now built a "Virtualizer," which allows an almost natural walk through virtual spaces. The user is held in place with a belt in a support frame, and the feet glide across a low-friction surface. Sensors pick up these movements and feed the data into the computer. The team hopes that the Virtualizer will enter the market in 2014.


Digitized Motion
Various ideas have been put forward for the digitization of human motion. Markers can be attached to the body and then tracked with cameras -- this is how motion capture for animated movies is achieved. For this, however, expensive equipment is needed, and the user is confined to a relatively small space. Prototypes using conveyor belts have not yet yielded satisfactory results.
Tuncay Cakmak, a student at TU Vienna, had a much better idea: when the feet slide across a smooth, low-friction surface, almost natural walking movements are possible without in fact changing one's position. Together with other students and virtual reality expert Hannes Kaufmann (TU Vienna), he developed the "Virtualizer."
In the Virtualizer's metal frame, the user is kept in place with a belt. The smooth floor plate contains sensors, picking up every step. Rotations of the body are registered by the belt. "Coming to terms with the low friction takes a little bit of practice," says Tuncay Cakmak, "but soon one can run across the smooth sensor plate quite naturally."

Run, Look, Duck, Jump

The Virtualizer can be used with standard 3D headgear, which picks up the user's viewing direction and displays 3D pictures accordingly. This is independent of the leg motion, so running in one direction while looking in another becomes possible.
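Put differently, the locomotion vector comes from the belt-measured body yaw, while the headset yaw only affects what is rendered. A toy sketch of that decoupling (all names and conventions here are assumed, not the Virtualizer's actual code):

```python
import math

# Hypothetical sketch of decoupled locomotion: the walking direction is
# taken from the belt-measured body yaw, while the headset's view yaw
# only affects rendering, not where the player moves.

def step(position, body_yaw_deg, step_length):
    """Advance the player along the body's facing, ignoring head yaw."""
    yaw = math.radians(body_yaw_deg)
    x, y = position
    return (x + step_length * math.sin(yaw), y + step_length * math.cos(yaw))

pos = (0.0, 0.0)
# The body faces 90 deg (east); the head can look anywhere without
# changing the direction of travel.
pos = step(pos, body_yaw_deg=90.0, step_length=0.7)
```

Because the head yaw never enters `step`, the player can sprint one way while scanning the scenery in another, exactly the behaviour the paragraph above describes.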

Moving through virtual realities using a keyboard or a joystick can lead to a discrepancy between visual perception and other body sensations. This is a problem for the brain: "Many people become nauseous in such situations. This is called 'cybersickness'," says Tuncay Cakmak. In the Virtualizer, however, the displayed visual data is in line with one's physical motion. The feeling of presence in the virtual world is stronger, and it becomes easier to assess distances and proportions. In addition, movement in the Virtualizer has an element of physical exercise.

Entering the market
The prototype developed at TU Vienna already works very well -- only some minor adjustments remain to be made. The Virtualizer has already caused a stir. "Some major companies have already expressed their interest -- for us, however, it is important that the technological development remains in our hands," says Tuncay Cakmak.

The Virtualizer is scheduled to enter the market as soon as 2014. The price cannot be determined yet. "Our first priority is to create a high quality product, but of course we want to offer it at the lowest possible price," says Cakmak. "Our product should lead virtual reality out of the research labs and into the gamers' living rooms."


Video: www.youtube.com/embed/xNj2raXBeV0?rel=0
Source article: http://www.sciencedaily.com/releases/2013/09/130924091526.htm?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+sciencedaily%2Fcomputers_math%2Fvirtual_reality+%28ScienceDaily%3A+Computers+%26+Math+News+--+Virtual+Reality%29

Monday, 23 September 2013

Ever wondered what Sculpteo's 3D Printing Cloud Engine is all about?

There are sometimes misunderstandings about what Sculpteo's 3D Printing Cloud Engine is. So we've made a new video to let you know what's behind this factory of the future. The short clip tells the story of Emily, the Glucose Sweet Design shop owner, who embedded all of Sculpteo's tools on her website: http://www.youtube.com/watch?v=Qpe2Hlzrgso


That way, you'll be able to fully understand the potential of our tools, whether you use them directly on our website or embedded on your own. You'll also get to understand the whole production process, from picking an object online to its fabrication and delivery.

Source: http://blog.sculpteo.com/2013/09/16/ever-wondered-what-our-3d-printing-cloud-engine-is-all-about/

Tuesday, 10 September 2013

The rise of a new 3D desktop scanner

DIMBODY is a 3D scanner: a digitizer that allows you to take a physical object and turn it into a digital 3D model on your computer.

DIMBODY is based on triangulation between a laser plane and a monochromatic CMOS sensor; it acquires the point coordinates of an object (a point cloud). The point cloud is then transformed into a 3D surface by the software and saved as an STL file. You can subsequently print it on your 3D printer, or scale, transform, or modify it in CAD software. Thanks to the rotating turret, DIMBODY's accuracy is outstanding, and thanks to its rotating platform the scanning phase is fully automated.
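The triangulation principle itself is standard: the laser spot's position on the sensor shifts in proportion to depth. With the camera at the origin, the laser offset by a baseline b and tilted at angle θ, and the spot imaged at offset x on a sensor with focal length f, the depth works out to z = b·f / (x + f·cotθ). A generic sketch of that calculation (the calibration values are illustrative, not DIMBODY's):

```python
import math

# Generic laser-triangulation depth formula, as used by scanners of
# this kind. All parameter values below are illustrative assumptions.

def depth_from_pixel(x_img_mm, focal_mm, baseline_mm, laser_angle_deg):
    """Depth (mm) of the laser spot from its offset on the sensor.

    Camera at the origin looking along +z; laser at (baseline, 0, 0)
    aimed back toward the optical axis at laser_angle_deg from the
    baseline. Derived from similar triangles: z = b*f / (x + f/tan(a)).
    """
    theta = math.radians(laser_angle_deg)
    return baseline_mm * focal_mm / (x_img_mm + focal_mm / math.tan(theta))

# A spot imaged exactly on the optical axis, with a 100 mm baseline,
# an 8 mm lens and a 45-degree laser:
z = depth_from_pixel(0.0, 8.0, 100.0, 45.0)
```

Sweeping the laser plane over the object and applying this formula per illuminated pixel is what yields the point cloud the paragraph above mentions.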

Technical Specifications

5 Mpixel monochromatic CMOS sensor
red line laser
max object size: 300 x 300 x 300 mm (12 x 12 x 12 inches)
scanning time: 24 min at ULTRA resolution (10,000,000 points), 14 min at SUPER resolution (3,000,000 points), 8 min at NORMAL (1,000,000 points)
accuracy: ±0.1 mm (ULTRA), ±0.2 mm (SUPER), ±0.4 mm (NORMAL)
USB 2

Open Source

All software, hardware and electronics components developed by us will be released as open source. The control electronics are based on an Arduino-compatible microcontroller. You can use the Arduino IDE to modify the firmware, or to develop a completely new one.

DIMBODY believes in open source software and hardware development. We are working on Arduino-compatible electronics, whose design will be released for free. At the end of this campaign, customers will even be able to buy single parts (like the Arduino-based control card).
In the same way, the control software will be released as open source, so you or the community can modify it as you like.
The communication protocol between DIMBODY and the PC will be published, and everyone can use their own software to manage point clouds and mesh generation.

Why does a rotating turret lead to a better mesh for complex models?

The rotating turret is the core value of DIMBODY.
Without a rotating turret, if you try to scan a body like this one, you will find that some parts, like the proboscis or the ears, intercept the laser plane and prevent the laser from correctly reaching hidden areas. You will have no points in those areas, and holes in the resulting mesh.
But DIMBODY is smart, and can see things from different points of view.

The scanning process consists of two phases.
In the first phase, the platform turns in 10°, 15°, 20° or 30° steps; after each step the turret takes a complete scan (100 to 400 frames), and the process is repeated until the object has completed a full rotation (360°).
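Merging the per-step scans then reduces to rotating each partial point cloud back by the platform angle at which it was captured, so all points share one object frame. A minimal geometric sketch (pure geometry, not DIMBODY's software):

```python
import math

# Merge partial scans taken at successive platform angles into one
# common object frame. Point layout and step convention are assumed.

def rotate_about_y(point, angle_deg):
    """Rotate a point about the vertical (platform) axis."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def merge_scans(scans, step_deg):
    """scans[i] is the point cloud captured after i platform steps."""
    merged = []
    for i, points in enumerate(scans):
        # Undo the platform rotation so every cloud shares one frame.
        merged.extend(rotate_about_y(p, -i * step_deg) for p in points)
    return merged

# Two single-point scans taken 90 degrees apart on the platform:
scans = [[(1.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)]]
cloud = merge_scans(scans, step_deg=90.0)
```

After this registration step the combined cloud covers the whole 360° of the object, which is what the meshing software turns into a watertight STL.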

Once all the frames are acquired, we have far more information than any competitor using fixed lasers.

Source article at: http://www.indiegogo.com/projects/dimbody-3d-desktop-scanner