
An end to AWS' public cloud dominance? Cloud Foundry offers vision of multi-cloud era

After years of Amazon Web Services dominating the public cloud infrastructure market, a multi-cloud approach is gaining in popularity with many businesses.

The ability to move workloads between AWS, Microsoft Azure, Google Cloud Platform, or whichever supplier is most appropriate for a specific application, is vital in avoiding the lock-in that enterprises fear. 

It is a subject that is of huge importance to Cloud Foundry, the open source platform as a service that can be built on a range of cloud providers. 

“We see a world of cloud computing that is ubiquitous and flexible, that supports multi-cloud environments,” Sam Ramji, CEO of the Cloud Foundry Foundation, told attendees at the Cloud Foundry Summit in Frankfurt this week.

“It is portable and interoperable, enabling users to go where they want. This is actually a revolutionary concept in cloud computing, that the user should have control over their applications as they come and go.”

As cloud computing has become more widely accepted in recent years, moving workloads to a public infrastructure as a service - with the resultant benefits of greater agility and potentially lower costs - no longer offers an edge over rivals, who more often than not are doing exactly the same.

“You come to a stage where your competitive advantage from choosing AWS, for example, is starting to erode,” said Dan Young, CEO of UK Cloud Foundry specialists EngineerBetter.

“It has become much more ubiquitous and the skills have started to diffuse a bit more throughout the industry. Everyone else has got the same capabilities as you, more or less.”

At the same time, a decade of Amazon Web Services dominance has, until fairly recently at least, amounted to a near-monopoly, raising the spectre of lock-in - even as Microsoft Azure has grown immensely and Google’s corporate cloud strategy has more recently solidified.

Young said: “Many people will have lived through a couple of decades of Microsoft and VMware and Oracle and these buying decisions we make. And sometimes this begins to feel very familiar when we are looking at cloud as well.”

German car manufacturer Volkswagen is building a Cloud Foundry PaaS on top of OpenStack to support development of new customer-facing applications and does not want to be tied to one vendor.

“The reason we chose Cloud Foundry is that we have a chance for a multi-cloud environment,” Roy Sauer, Volkswagen's head of Group IT Architecture and Technology told Computerworld UK.

"We have implemented on a small scale in an on-premise cloud in the Volkswagen data centres and we want to link to several public cloud providers like AWS, IBM or Azure,” Sauer said.

"[We will have our] own data centres for critical data, for secure data, and several public cloud providers because we have to have this global footprint. And even to swap from one public provider to another if it's necessary.” 

BOSH makes multi-cloud easier for ops teams

A central part of Cloud Foundry’s multi-cloud aims is BOSH, the open source deployment and monitoring tool, which also supports other platforms such as Hadoop and OpenStack. Ramji highlighted Google as one of the cloud providers building a Cloud Provider Interface (CPI) for BOSH.

“BOSH is our platform for platforms,” said Ramji. “This is what gives it its multi-cloud capability, so we can support all these different clouds.”

This makes it easier for ops teams to connect between the big cloud players, and could also open the market up to smaller niche or regional cloud providers.

EngineerBetter’s Young described BOSH as “the travel adaptor for cloud”.

“I will be able to run exactly the same deployment and the same releases on a different cloud,” he said.
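
BOSH achieves that portability through its Cloud Provider Interface: each IaaS implements the same small set of primitives (create a VM, attach a disk, and so on), and deployments are written against the abstraction rather than any one cloud's API. As a rough illustration only - the class and method names below are hypothetical, not BOSH's actual CPI contract - the pattern looks like this:

    from abc import ABC, abstractmethod

    class CloudProviderInterface(ABC):
        """Deployments only ever talk to this interface; each IaaS
        supplies its own implementation."""

        @abstractmethod
        def create_vm(self, stemcell_id: str, instance_type: str) -> str: ...

        @abstractmethod
        def attach_disk(self, vm_id: str, size_gb: int) -> None: ...

    class AwsCpi(CloudProviderInterface):
        def create_vm(self, stemcell_id, instance_type):
            # Would call EC2 here (e.g. via boto3); stubbed for illustration.
            return f"aws-vm-from-{stemcell_id}"

        def attach_disk(self, vm_id, size_gb):
            pass  # Would create and attach an EBS volume.

    class GcpCpi(CloudProviderInterface):
        def create_vm(self, stemcell_id, instance_type):
            # Would call Google Compute Engine here; stubbed for illustration.
            return f"gcp-vm-from-{stemcell_id}"

        def attach_disk(self, vm_id, size_gb):
            pass  # Would create and attach a persistent disk.

    def deploy(cpi: CloudProviderInterface) -> None:
        # The same "deployment" runs unchanged on any provider:
        vm = cpi.create_vm("ubuntu-stemcell", "medium")
        cpi.attach_disk(vm, size_gb=100)

    deploy(AwsCpi())  # ...or deploy(GcpCpi()): the travel-adaptor idea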

Open source as a foundation for multi-cloud?

One of the main themes of Ramji’s keynote was that open source software is a “positive sum game … where the more of us who play the game the better the game itself becomes”.

Ramji highlighted the growth stats around Cloud Foundry: 31,000 code commits, 2,500 contributors and 130 core committers. It has also added Cisco and Comcast to a membership that already includes the likes of GE Digital.

As an open source foundation, freedom of choice is at the heart of Cloud Foundry’s pitch. At the same time, there are concerns that one commercial vendor, Dell-EMC-owned Pivotal - the company that spun Cloud Foundry out into an open source foundation - is so clearly dominant.

Young acknowledged that this is an area which needs improvement: “It is going to get better and you can see the rate of growth in that membership and all we can do is open more dojos and encourage people to become [code] committers and try and change it.”

Nevertheless, he claimed that growing the open source community around the platform is vital to offering the choice that has perhaps been lacking in the cloud market in the past.

“This is where open source steps into the picture,” he said.

“What it is effectively doing is allowing us to create a competitive market space in cloud.

“Whereas we had a very captive market space in the early days of AWS, it is much more in keeping with Sam Ramji's idea of this positive sum game to have a competitive marketplace by using things like Cloud Foundry where everyone is put on a more level playing field.

“Having things like BOSH and Cloud Foundry together wrapped in this idea that you have not just one company in control of things but over 60 different companies in control of things.”


HPE targets devops-ready organisations with composable Synergy system

This week Hewlett Packard Enterprise (HPE) hosted an enormous event in east London’s Lee Valley Park at the Olympic venue for cycling, to talk up its ‘composable’ infrastructure, Synergy.

As well as showcasing appreciative customers, executives told Computerworld UK the technology is moving out of the beta stage and will directly benefit organisations making the most of devops.

HPE announced its composable infrastructure last year, the idea being a flexible, scalable system that bridges the gap between traditional and new IT architecture, and the company rolled out the red carpet in an aggressive attempt to get partners in the channel and elsewhere on board.

“If you look at traditional infrastructure it’s fairly rigid, it takes a long time to set up infrastructure, get it all right, then deploy,” says Paul Miller, VP of marketing for Converged Data Center Infrastructure. “If you look at the new world and all that’s happening with all the new applications coming out to support the digital economy, that’s what we’re focusing on to build Synergy.”

That means, Miller says, focusing on two particular groups – although of course, existing HPE customers looking for an upgrade might be wooed to Synergy as a complementary technology.

“We talk about one of the catchphrases: more dev, less ops,” Miller says. “That’s a mantra that a lot of customers want. We enable that through Synergy.”

HPE IT is using Synergy in its devops environment – developers can request resources for building applications in a virtualisation pool through the API, and get them, without having to interact directly with the infrastructure.

“They’re programming the infrastructure through the software defined interface,” he says. “If they want to get a container, they talk to Docker, Docker talks to the infrastructure and says this is what the developer needs, and provides them with a total Docker environment to do the development. If they want to do it on bare metal, it’s the same thing.”

“With Synergy you set it up to be what you want at the time you want it – it’s not sitting there, and it’s not overprovisioned, and it’s not using stuff,” he says. “And since it’s all in your own house, you can move into production very quickly and get all the same benefits that you do on the dev side. So this whole world of dev and ops being core to the future, that’s what we built Synergy to do.”

See also: Our guide to Composable infrastructure. What is it and how can it help your business? 

But traditional architecture is also a space where HPE believes it can achieve success with Synergy.

According to Miller, customers that are interested in devops but don’t yet have the investment need to lower their costs first, to get that buy-in. Since Synergy can cut down on overprovisioning but is also scalable, he explains that it could be a good first step for organisations that need to cut costs on traditional architecture.

“If you look at applications like a web application, which is fairly traditional today, most people overprovision by 50 percent,” he says. “SQL farms, 40 percent, Exchange, 30 percent. You have all these islands of overprovisioning, but because Synergy is flexible, you can put all those applications in the same box and overprovision once, not four or five times for each one of your apps. That and the IT operational efficiency is to cost reduce traditional apps, free up capacity, free up resources, to invest more in the developer community.”
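
Miller's arithmetic is easy to sanity-check with made-up numbers. In the sketch below only the overprovisioning rates are his; the workload sizes and the pooling assumption are illustrative:

    # Illustrative numbers only: three workloads, each needing 100 capacity
    # units, padded by the overprovisioning rates Miller quotes.
    workloads = {"web": (100, 0.50), "sql": (100, 0.40), "exchange": (100, 0.30)}

    # Islands: every application carries its own idle headroom.
    islands = sum(d * (1 + r) for d, r in workloads.values())  # 420.0

    # One shared pool: combined demand plus a single headroom buffer (sized
    # here for the largest individual one), assuming peaks do not coincide.
    pooled = sum(d for d, _ in workloads.values()) + max(
        d * r for d, r in workloads.values())                  # 350.0

    print(islands, pooled)  # the gap is capacity freed up for dev investment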

Antonio Neri, the executive VP and general manager for HPE, boasted in his keynote that the infrastructure at the back of Synergy is “100 percent software defined”.

“What that means is the software intelligence is embedded in the fabric of that infrastructure,” he explained. “With an aggressive set of APIs you can go and treat that infrastructure as a code. It scales multiple racks, it controls an entire pool and obviously increases speed. The question is how Synergy can help you and your developers to develop applications faster.”

“We can deploy infrastructure much faster, you can deploy that pool of resources, you don’t need to configure anything, it just makes those APIs available and the developer can access the other resources through the API layer,” he said.

“And more than 50 tasks are automated in the processes. You have the composable API, which means not only you but also partners can go compose their own services. You can compose your services to the APIs so you can do that work for the customers. You may have your own services on top of those APIs.”

“HPE OneView is the brains behind the infrastructure. What we have done with OneView, which is our infrastructure management layer, is take it inside the fabric and provide that unified set of APIs so everybody can develop at their own speed.”
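
Neri's "infrastructure as code" flow - describe what you need in a declarative template, POST it to a unified API, then poll while the hardware is composed underneath you - can be sketched as follows. The endpoint, fields and responses below are invented for illustration and are not HPE's actual composable API:

    import requests

    # Hypothetical endpoint and template, invented for illustration; this
    # is not HPE OneView's real API surface.
    COMPOSER = "https://composer.example.com/api/v1"

    template = {
        "name": "dev-docker-host",
        "compute": {"cores": 16, "memoryGiB": 128},
        "storage": {"bootVolumeGiB": 200},
        "network": {"vlan": "dev"},
        "image": "docker-host-golden-image",
    }

    # A developer, or a tool such as Docker acting on their behalf, asks
    # for what they need by posting a declarative description...
    resp = requests.post(f"{COMPOSER}/compositions", json=template, timeout=30)
    resp.raise_for_status()
    composition_id = resp.json()["id"]

    # ...then polls while hardware is composed underneath them, without
    # ever touching the physical infrastructure directly.
    status = requests.get(f"{COMPOSER}/compositions/{composition_id}", timeout=30)
    print(status.json().get("state"))  # e.g. "provisioning", then "ready"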


Algorithm could enable visible-light-based imaging for medical devices, autonomous vehicles

    MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.

    The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The development of such vision systems has been a major obstacle to self-driving cars.

    In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A  — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.

    From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.

    [Video: An imaging algorithm from the MIT Media Lab's Camera Culture group compensates for the scattering of light; the advance could potentially be used to develop optical-wavelength medical imaging and autonomous vehicles. Camera Culture Group]

    “The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”

    The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.

    Satat’s coauthors on the new paper, published today in Scientific Reports, are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.

    Expanding circles

    Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.

    Where previous techniques have attempted to reconstruct images using only those first, unscattered photons, the MIT researchers’ technique uses the entire optical signal. Hence its name: all-photons imaging.

    The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time. To get a sense of how all-photons imaging works, suppose that light arrives at the camera from only one point in the visual field. The first photons to reach the camera pass through the scattering medium unimpeded: They show up as just a single illuminated pixel in the first frame of the movie.

    The next photons to arrive have undergone slightly more scattering, so in the second frame of the video, they show up as a small circle centered on the single pixel from the first frame. With each successive frame, the circle expands in diameter, until the final frame just shows a general, hazy light.

    The problem, of course, is that in practice the camera is registering light from many points in the visual field, whose expanding circles overlap. The job of the researchers’ algorithm is to sort out which photons illuminating which pixels of the image originated where.

    Cascading probabilities

    The first step is to determine how the overall intensity of the image changes in time. This provides an estimate of how much scattering the light has undergone: If the intensity spikes quickly and tails off quickly, the light hasn’t been scattered much. If the intensity increases slowly and tails off slowly, it has.

    On the basis of that estimate, the algorithm considers each pixel of each successive frame and calculates the probability that it corresponds to any given point in the visual field. Then it goes back to the first frame of video and, using the probabilistic model it has just constructed, predicts what the next frame of video will look like. With each successive frame, it compares its prediction to the actual camera measurement and adjusts its model accordingly. Finally, using the final version of the model, it deduces the pattern of light most likely to have produced the sequence of measurements the camera made.
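
    The paper's reconstruction is more sophisticated, but the iterate-predict-compare loop can be caricatured in a few lines of Python. Everything below is a stand-in for the ideas rather than the researchers' algorithm: a Gaussian blur whose width grows with arrival time plays the role of the scattering, and a standard multiplicative update plays the role of the model adjustment:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def blur(x, sigma):
            # Stand-in forward model: scattering as a Gaussian blur whose
            # width grows with photon arrival time (the expanding circles).
            return gaussian_filter1d(x, sigma, mode="constant")

        truth = np.zeros(64)
        truth[[20, 40]] = 1.0                      # the hidden pattern (the mask)
        sigmas = [0.5, 1, 2, 4, 8, 16]             # later frames have scattered more
        frames = [blur(truth, s) for s in sigmas]  # the "movie" the camera records

        estimate = np.full_like(truth, truth.mean())   # flat initial guess
        for _ in range(100):
            for s, frame in zip(sigmas, frames):
                predicted = blur(estimate, s) + 1e-12   # predict the frame...
                estimate *= blur(frame / predicted, s)  # ...compare and adjust

        print(sorted(np.argsort(estimate)[-2:]))  # peaks near 20 and 40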

    One limitation of the current version of the system is that the light emitter and the camera are on opposite sides of the scattering medium. That limits its applicability for medical imaging, although Satat believes that it should be possible to use fluorescent particles known as fluorophores, which can be injected into the bloodstream and are already used in medical imaging, as a light source. And fog scatters light much less than human tissue does, so reflected light from laser pulses fired into the environment could be good enough for automotive sensing.

    “People have been using what is known as time gating, the idea that photons not only have intensity but also time-of-arrival information and that if you gate for a particular time of arrival you get photons with certain specific path lengths and therefore [come] from a certain specific depth in the object,” says Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. “This paper is taking that concept one level further and saying that even the photons that arrive at slightly different times contribute some spatial information.”

    “Looking through scattering media is a problem that’s of large consequence,” he adds. But he cautions that the new paper does not entirely solve it. “There’s maybe one barrier that’s been crossed, but there are maybe three more barriers that need to be crossed before this becomes practical,” he says.



    13 of the best email marketing software 2016: Oracle, Salesforce.com, IBM and more - what's the best enterprise email marketing service?


    Travis Perkins uses Splunk’s flexible cyber security monitoring to protect against customer data breaches

    When it comes to security monitoring and protecting its business against cyber attack, building supplies retailer Travis Perkins turned to US analytics company Splunk, adopting its Software-as-a-Service (SaaS) cloud offering to monitor both its on-premise and cloud systems.

    Nick Bleech, head of information security at Travis Perkins, said that flexibility and the underlying data model is what set Splunk apart from the alternatives in the security analytics market.

    Flexibility

    The major retailer, which owns 19 brands including Wickes and Toolstation, has already moved a great deal of its data to the cloud, including customer, operations, stock and logistics information. However, some data is still locked on-premise in its many physical locations across the UK.

    Bleech was given a simple remit when he started at the company three years ago: "My brief was very simple. We have taken that leap of faith with the cloud and you have to crystallise and mitigate that risk for us. My brief was to take what there was by way of a security function and bring that up to date."

    Read next: Travis Perkins cuts service desk response times for 24,000 employees with ServiceNow ITSM

    Travis Perkins stores the majority of its logistics, warehouse management, stock and customer data in the AWS cloud. It has moved to Google enterprise apps and sent its SAP Hybris ecommerce platform into the cloud. 

    Although the plan was to have everything in the cloud by 2020, Bleech still needed a security monitoring tool capable of traversing cloud and on-premises data from its network of 2,000 physical locations across the UK.

    "The security monitoring challenge therefore was hybrid," Bleech explained. "We are going hell for leather for the cloud but having to reach back into on-premise."

    "So I thought: 'we need to look for something flexible and adaptable'. I compared Splunk with IBM and HP and smaller players and we went for Splunk on a pilot basis using hardware that was sitting dormant."

    "In the end we ran a nine-month pilot, which proved the flexibility was there, connecting to new apps and cloud services and also the stuff that was legacy and would be around for years to come."

    Travis Perkins was an early adopter of Splunk's SaaS cloud product and continues to use it.

    Security monitoring

    One of Bleech's first initiatives was to move on from a stuttering deployment of NitroSecurity (recently spun off by Intel) and into security monitoring with Splunk. Aside from flexibility, what set Splunk apart from the alternatives was its data model, specifically giving security analysts the ability to dig into a threat after the event.

    He explained: "After remediation action has been done analysts can assess the status of that remediation. So having that complete data history without having to do any complicated navigation around the data model is the difference with the Splunk way of doing things."

    Threat landscape

    Travis Perkins uses Splunk to protect itself against malware and ransomware attacks on customer data. According to Bleech, the business doesn't tend to get targeted with zero-day attacks.

    Read next: Travis Perkins cuts costs and increases vehicle safety with telematics

    Bleech also focused on getting Travis Perkins PCI-compliant so that it can't be hit with a Target-level breach of customer card details.

    "Nonetheless the attackers will be after things like personal data, bank details, trade customers account details," Bleech said. "Even getting PCI put to bed we still have sensitive financial data we need to protect."

    Machine learning

    Travis Perkins is eyeing Splunk's new machine learning capabilities too. Instead of having a room full of security analysts staring at dashboards, Bleech envisions Splunk providing a way of surfacing these insights automatically with the help of machine learning algorithms. 

    Read next: Splunk brings machine learning capabilities into its tools and launches toolkit for customer's own algorithms

    "We are seeing this Hollywood scenario for security folks where you get a succession of small things occurring that start to build up that you would have ignored," he said. "So pattern recognition and anomaly detection becomes important."


    14 power couples to inspire you and your other half

    They say it takes two to tango and that might just be the case when it comes to hitting the floor within the creative industry. These couples combined their talents in illustration, design, photography and more to create some of the most exciting offerings, packed full of inspiration and, of course, lots of love. 

    01. Strange

    Husband and wife duo Gavin and Jane Strange recently launched their online shop, aptly titled Strange

    Despite being a full-time senior designer at Aardman, Gavin Strange has an array of side projects on the go, his most recent being a new online shop, Strange, set up with his wife Jane. Launched via a pop-up store in Bristol in August, Strange sells an array of carefully curated products. 

    "You can never have too many side-projects!" says Strange. "But it was nothing more than a nice idea until we returned home from The DO Lectures all inspired, and thought: ‘Let’s just do it, together, as a joint venture.’ We didn’t have any capital or a business plan, just an excitement to make it happen."

    And that's exactly what they've done, having recently released their first collection, titled Rockmount, which includes a mug, cushion, tea towel and necklace and is inspired by the duo's adopted rescue racing greyhound.

    02. 123KLAN

    123KLAN was founded back in 1992 by husband and wife design duo Scien and Klor

    123KLAN began as a French graffiti art crew, founded back in 1992 by husband and wife Scien and Klor. Based in Montréal, Canada, the design duo crafted a hybrid style that gained them swift recognition, influenced by the 90s graffiti art of Europe and New York. Since then, they have branched out, creating and producing a range of their favourite items through their BANDIT1SM brand.

    03. Huddle Formation


    Ben and Fi O'Brien have worked together for almost a decade

    Huddle Formation is a multi-disciplinary creative studio that collaborate, make, illustrate and play to bring colourful ideas to life. Formed of husband Ben the Illustrator and wife Fi O'Brien, the pair combine their talents of textile design and gorgeous illustration to bring art direction, branding and product design to the table.

    "We have worked together for almost a decade on a multitude of commercial projects and self-initiated product lines and are now super happy to bring everything together into one formation, the Huddle Formation," explains Ben O'Brien.

    04. TADO


    TADO started with a Flash animation based on the story of Willow pattern china plates

    UK-based artists Mike and Katie, aka TADO, have designed everything from a Judge Death plush for 2000AD to a cereal brand for Sainsbury's. In our in-depth interview with the pair, they told us of their meeting, living together and why Japan is one of their favourite places in the world.

    "We met during the second year on the Leeds Met Graphic Design course. A tutor of ours suggested we tried collaborating on some projects… and the rest is history!" they explain.

    05. Misc Adventures


    Andrew and Emma use Miscellaneous Adventures as a way of getting creative outdoors

    Created by illustrator and craftsman Andrew Groves, Miscellaneous Adventures aims to get screen-focused designers out in the open. A place where design and illustration meet traditional craft and outdoor skills, it's a chance to get your hands dirty and make some pretty wonderful wooden items in a beautiful environment.

    Girlfriend and embroidery expert Emma Ruth Hughes joined officially in the spring of 2013 to assist with the organisation and running of the workshops. The pair were also joined by fellow woodsman and surfer, Oliver Last late last year.

    06. Kozyndan


    This couple are obsessed with the sea, seen with their beautiful underwater photography

    This husband-and-wife duo work collaboratively to create highly detailed paintings and drawings for both illustration and fine art. They are obsessed with the ocean and being underwater, stating that they 'hope to someday come to rest at the bottom of the sea and slowly be devoured by deep creatures over many years.'

    They have exhibited across the world, including shows in Los Angeles, Seattle, Melbourne and Toronto. Their underwater photographs are particularly extraordinary.

    07. G'Day Byron Bay


    An Italian couple in Australia, they certainly love Byron Bay!

    This small, perfectly-formed team are made up of Italian boyfriend Ivano Salonia and Portuguese girlfriend Ana Rita Sousa. Based in Portugal, they offer a variety of creative services, including graphic and web design, art direction, creative consultancy, branding, video production, photography and illustration.

    They've also just launched a new photography project - I Love When You Smile - that sees the couple embark on a new passion together.

    08. LouLou & Tummie


    With the help of their beloved pooch, LouLou and Tummie produce cute and colourful designs

    This adorable Dutch illustration duo spend their days building an ever-expanding empire of colourful graphics and characters. With the help of their beloved dog, they pack happiness and all things cute into designs that can be found in magazines, books, advertisements, plush toys, paper toys, on walls and interiors, t-shirts and shoes.

    09. Chubby


    Comic book artists Jack and Donya with their awesome kitty Molly

    As hugely popular comic artists in their own right, boyfriend and girlfriend Jack Teagle and Donya Todd came together to create Chubby. A collaboration that combines their brilliant illustrative talents, they make clothing and comics for hot dogs and cool cats.

    So far they've created some inspiring apparel as well as some pretty wonderful comic sketches that prove when you combine your creative forces, wonderful things can happen.

    10. Hello DODO


    Hello DODO aim to make people smile with their adorable range of screen prints and more

    When they're not posing for wonderful, fancy dress photos, Hello DODO are playful printmakers Ali and Jam. A couple based by the seaside in Brighton, their designs aim to make people smile. Creating hand-printed screen prints, relief prints and tote bags as well as greetings cards, the corners of your mouth will be rising in no time.

    11. Pygmy Cloud


    Diana and Dave make the kind of toys that adults and children alike will love

    Pygmy Cloud is a little brand of home decor, plushies and accessories run by couple Diana and Dave. Every product is designed in London for adults and children alike, with adorable animals, mountains, beards and beasties making their way onto the soft, gorgeous offerings.

    "Being from England, we love talking about the weather - but we like to put a happy spin on dreary British rain and clouds in our products. We only reserve grumpiness for our bears!" they explain.

    12. DesignosaurYEAH


    Karli and Jaques of DesignosaurYEAH doing their best raptor impressions

    Creating fun, dinosaur-inspired laser-cut jewellery in Plexiglass, Perspex and cherry wood, Brighton-based couple Karli and Jaques are influenced by all things bright, colourful and brash. Their range includes necklaces, brooches, rings and more that almost everyone will want a piece of.

    13. Crispin Finn


    Anna Fidalgo and Roger Kelly skip the red, white and blue for some monochrome

    Crispin Finn are a London-based couple who work exclusively in the colours red, white and blue. Patriotic Anna Fidalgo and Roger Kelly have worked together since 2008, creating illustration, design, screen prints, stationery and homewares. Their minimal approach to graphic design and illustration has proved a massive hit across the world.

    14. Everywhere We Shoot


    The pair would meet up after school before realising each other's love for design

    Everywhere We Shoot are as cool as they come. Made up of couple Ryan Vergara and Garovs Garrovillo, their moniker is not just the duo’s name but also a statement of intent, a manifesto of sorts in praise of the ambulant imagination.

    "We were two kids who would meet up at a fast food joint near school, just to hang out. As a result, we ended up smitten not only with each other, but also with each other’s good taste," they explain.

    Do you know a design power couple? Let us know in the comments box below!


    User-friendly language for programming efficient simulations

    Computer simulations of physical systems are common in science, engineering, and entertainment, but they use several different types of tools.

    If, say, you want to explore how a crack forms in an airplane wing, you need a very precise physical model of the crack’s immediate vicinity. But if you want to simulate the flexion of an airplane wing under different flight conditions, it’s more practical to use a simpler, higher-level description of the wing.

    If, however, you want to model the effects of wing flexion on the crack’s propagation, or vice versa, you need to switch back and forth between these two levels of description, which is difficult not only for computer programmers but for computers, too.

    A team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, Adobe, the University of California at Berkeley, the University of Toronto, Texas A&M, and the University of Texas has developed a new programming language that handles that switching automatically.

    In experiments, simulations written in the language were dozens or even hundreds of times as fast as those written in existing simulation languages. But they required only one-tenth as much code as meticulously hand-optimized simulations that could achieve similar execution speeds.

    “The story of this paper is that the trade-off between concise code and good performance is false,” says Fredrik Kjolstad, an MIT graduate student in electrical engineering and computer science and first author on a new paper describing the language. “It’s not necessary, at least for the problems that this applies to. But it applies to a large class of problems.”

    Indeed, Kjolstad says, the researchers’ language has applications outside physical simulation, in machine learning, data analytics, optimization, and robotics, among other areas. Kjolstad and his colleagues have already used the language to implement a version of Google’s original PageRank algorithm for ordering search results, and they’re currently collaborating with researchers in MIT’s Department of Physics on an application in quantum chromodynamics, a theory of the “strong force” that holds atomic nuclei together.

    “I think this is a language that is not just going to be for physical simulations for graphics people,” says Saman Amarasinghe, Kjolstad’s advisor and a professor of electrical engineering and computer science (EECS). “I think it can do a lot of other things. So we are very optimistic about where it’s going.”

    Kjolstad presented the paper in July at the Association for Computing Machinery’s Siggraph conference, the major conference in computer graphics. His co-authors include Amarasinghe; Wojciech Matusik, an associate professor of EECS; and Gurtej Kanwar, who was an MIT undergraduate when the work was done but is now an MIT PhD student in physics.

    Graphs vs. matrices

    As Kjolstad explains, the distinction between the low-level and high-level descriptions of physical systems is more properly described as the distinction between descriptions that use graphs and descriptions that use linear algebra.

    In this context, a graph is a mathematical structure that consists of nodes, typically represented by circles, and edges, typically represented as line segments connecting the nodes. Edges and nodes can have data associated with them. In a physical simulation, that data might describe tiny triangles or tetrahedra that are stitched together to approximate the curvature of a smooth surface. Low-level simulation might require calculating the individual forces acting on, say, every edge and face of each tetrahedron.

    Linear algebra instead represents a physical system as a collection of points, which exert forces on each other. Those forces are described by a big grid of numbers, known as a matrix. Simulating the evolution of the system in time involves multiplying the matrix by other matrices, or by vectors, which are individual rows or columns of numbers.

    Matrix manipulations are second nature to many scientists and engineers, and popular simulation software such as MatLab provides a vocabulary for describing them. But using MatLab to produce graphical models requires special-purpose code that translates the forces acting on, say, individual tetrahedra into a matrix describing interactions between points. For every frame of a simulation, that code has to convert tetrahedra to points, perform matrix manipulations, then map the results back onto tetrahedra. This slows the simulation down drastically.

    So programmers who need to factor in graphical descriptions of physical systems will often write their own code from scratch. But manipulating data stored in graphs can be complicated, and tracking those manipulations requires much more code than matrix manipulation does. “It’s not just that it’s a lot of code,” says Kjolstad. “It’s also complicated code.”
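
    Concretely, that special-purpose code is an assembly loop: walk the graph and scatter each element's local contribution into a global matrix indexed by point, then map results back every frame. The numpy sketch below shows the pattern for a toy spring network standing in for tetrahedra - it is the boilerplate a programmer would otherwise hand-write, not Simit code:

        import numpy as np

        # A toy graph: 4 points joined by springs (edges); each edge
        # carries local data, here a stiffness value.
        points = 4
        edges = [(0, 1, 10.0), (1, 2, 20.0), (2, 3, 30.0), (0, 3, 40.0)]

        # Graph -> matrix: scatter each edge's local contribution into
        # the global stiffness matrix K, indexed by point.
        K = np.zeros((points, points))
        for i, j, k in edges:
            K[i, i] += k
            K[j, j] += k
            K[i, j] -= k
            K[j, i] -= k

        # Once in matrix form, a simulation step is plain linear algebra:
        u = np.array([0.0, 0.1, 0.0, -0.1])  # displacement at each point
        f = K @ u                            # resulting force at each point
        print(f)

        # Mapping f back onto per-edge quantities is the reverse gather
        # loop, repeated every frame - the slow round trip Simit removes.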

    Automatic translation

    Kjolstad and his colleagues’ language, which is called Simit, requires the programmer to describe the translation between the graphical description of a system and the matrix description. But thereafter, the programmer can use the language of linear algebra to program the simulation.

    During the simulation, however, Simit doesn’t need to translate graphs into matrices and vice versa. Instead, it can translate instructions issued in the language of linear algebra into the language of graphs, preserving the runtime efficiency of hand-coded simulations.

    Unlike hand-coded simulations, however, programs written in Simit can run on either conventional microprocessors or on graphics processing units (GPUs), with no change to the underlying code. In the researchers’ experiments, Simit code running on a GPU was between four and 20 times as fast as on a standard chip.

    “One of the biggest frustrations as a physics simulation programmer and researcher is adapting to rapidly changing computer architectures,” says Chris Wojtan, a professor at the Institute of Science and Technology Austria. “Making a simulation run fast often requires painstakingly specific rearrangements to be made to the code. To make matters worse, different code must be written for different computers. For example, a graphics processing unit has different strengths and weaknesses compared to a cluster of CPUs, and optimizing simulation code to perform well on one type of machine will usually result in sub-optimal performance on a different machine.”

    “Simit and Ebb” — another experimental simulation language presented at Siggraph — “aim to handle all of these frustratingly specific optimizations automatically, so programmers can focus their time and energy on developing new algorithms,” Wojtan says. “This is especially exciting news for physics simulation researchers, because it can be difficult to defend creative and raw new ideas against traditional algorithms which have been thoroughly optimized for existing architectures.”

    This work was supported by the National Science Foundation and by the Defense Advanced Research Projects Agency SIMPLEX program.



    Students unlock the secrets of cryptography

    [Photo: Sophia Yakoubov lectures the 2016 LLCipher class on public key encryption. Jon Barron]
  • "Split up into groups of three," directed Sophia Yakoubov, associate staff in the Secure Resilient Systems and Technology Group at MIT Lincoln Laboratory and instructor of the LLCipher cryptography workshop. "Within each group, the person sitting on the left is Alice, the person on the right is Bob, and the person in the middle is Eve. Alice must write a secret message in a notebook and pass it to Bob. Eve must figure out Alice's message and intercept everything that Alice and Bob pass to each other. Alice and Bob each have a lock and matching key, however, they cannot exchange their keys. How can Alice pass her secret message to Bob so that Eve is unable to unlock and view the secret, and only Bob can read it?"

    The 13 high school students participating in the workshop glanced at one another until one brave student addressed the entire class, starting a flurry of conversation: "Any ideas?"

    Thus began one of the many hands-on challenges that students tackled at the LLCipher workshop held in August at the MIT campus in Cambridge, Massachusetts, and MIT Lincoln Laboratory in Lexington, Massachusetts. LLCipher is a one-week program that introduces students to modern cryptography, a theoretical approach to securing data such as Alice’s secret message. The program begins with lessons in abstract algebra and number theory that students use to understand theoretical cryptography during lessons later in the workshop.

    "I decided that LLCipher was for me when I researched the course topics," says student Evan Hughes. "As I made my way down the topic list, I didn’t understand many of the concepts, so I immediately applied to the program."

    Because of student feedback from LLCipher's inaugural year in 2015, Yakoubov extended each lesson from two to six hours. "Many students said they wanted more time on learning," says Yakoubov. "Specifically, they wanted to learn more than one cryptography technique and apply those techniques to 'real-world' scenarios, rather than just learn theory." This year, in addition to the El Gamal public key cryptosystem, students learned the RSA public key cryptosystem. RSA is one of the most common methods to secure data and uses slightly different math from El Gamal. Both RSA and El Gamal use modular arithmetic, a type of math in which integers "wrap around" upon reaching a certain value, i.e., the modulus, similar to a clock that uses 12 numbers to represent 24 hours in one day. El Gamal uses a very large prime number as a modulus; RSA uses a very large composite number, i.e., a whole number that can be divided evenly by numbers other than 1 or itself, with a secret factorization.
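
    Both cryptosystems' modular machinery is easy to demonstrate at toy scale. The sketch below shows the clock-style wrap-around and a deliberately tiny textbook RSA example using Python's built-in pow(); real keys use moduli hundreds of digits long, and this omits padding and every other safeguard the students also learn about:

        # Clock-style "wrap around": 13 plus 1 on a 14-hour clock is 0.
        print((13 + 1) % 14)  # 0

        # Textbook-sized RSA (never use numbers this small for real keys):
        p, q = 61, 53
        n = p * q                    # public modulus: a composite number
        phi = (p - 1) * (q - 1)      # knowable only via the secret factorization
        e = 17                       # public exponent
        d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

        message = 42
        ciphertext = pow(message, e, n)    # encrypt: m^e mod n
        recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
        print(ciphertext, recovered)       # 2557 42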

    To reinforce the techniques and allow students to apply the theory, Yakoubov, along with the help of Uri Blumenthal and Jeff Diewald of the Secure Resilient Systems and Technology Group, created an online platform that includes El Gamal- and RSA-based challenges. "With these exercises, we are able to show students examples of flawed cryptography so that they can see how easily it can be broken," says Yakoubov. "Students can visualize huge numbers and see why concepts like randomization are so important to effective encryption." The platform is used throughout the course and includes six challenges that bolster teamwork and creativity.  

    "Learning about public key encryption is fun because it is so complicated and secretive," says student Garrett Mallinson. "I like creating codes that no one else can break or unlock — this is like what characters do on television shows in just 45 minutes."

    During the final day of the course, students toured several Lincoln Laboratory facilities, such as the anechoic chambers and the Flight Test Facility. "I enjoyed the tour around Lincoln Laboratory," says Hughes. "We always hear about theoretical concepts at school, so it is inspiring to see people applying and making the things we hear about."

    After the tour, students listened to a guest lecture from Emily Shen of the Secure Resilient Systems and Technology Group on a more specialized cryptography topic. Shen explained secure multiparty computation, a tool that allows multiple users with secret inputs to compute a joint function on their inputs without having to reveal anything beyond the output of the joint function.

    To demonstrate the concept, students participated in an activity to find out whether the majority of the group likes pie or cake without each student revealing his or her preference. First, the group assigned pie and cake a binary representation — 0 for pie and 1 for cake. The group also picked a modulus larger than the size of the group; in this case, the modulus was 14. The first participant secretly chose an auxiliary value between 0 and 13, added his vote, 0 or 1, to that value, and then used modular arithmetic to get a new value. For example, if he chose an auxiliary value of 13 and his vote was 1, he took the remainder modulo 14 to get a total of 0. He then passed the sum on to the next student. This pattern continued until the last student gave her value to the original participant, who then subtracted the secret auxiliary number from the last value. The remaining value represented the number of votes for cake, and so indicated whether the majority of the group likes cake or pie.
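
    The classroom protocol transcribes almost directly into code. In the sketch below the group size, modulus and vote encoding are the article's; the function name and sample votes are ours:

        import random

        def majority_vote(votes, modulus=14):
            """The LLCipher exercise: each vote is 0 (pie) or 1 (cake), and
            no participant ever sees more than a masked running total."""
            auxiliary = random.randrange(modulus)        # first student's secret
            running = (auxiliary + votes[0]) % modulus   # add own vote, wrap around
            for vote in votes[1:]:                       # pass the sum along the circle
                running = (running + vote) % modulus
            cake_votes = (running - auxiliary) % modulus # unmask at the start
            return "cake" if cake_votes > len(votes) / 2 else "pie"

        votes = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1]  # 13 students' secret inputs
        print(majority_vote(votes))                      # cake (8 votes of 13)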

    "Cryptography is a tool that is very important. It's an interesting intersection of math and computer science to which people are not often exposed," says Shen. "I want kids to learn about this field and hopefully find it exciting." Yakoubov found that the students benefited from the new features, particularly the applied challenges, of the LLCipher program. She hopes that students realize that math can be fun and can be applied to complex and exciting real-life problems.

    Following the program, students indicated that they were interested in taking computer science courses in college and hope to aim for careers in science, technology, engineering, and math fields.  "LLCipher helped us understand the cryptography-based concepts that we see in our everyday lives, such as encryption messages and functions on our personal computers," says student Brandon Chu. "At the end of the program, everything came together and made sense, which was really exciting. We were doing things that seemed impossible at first glance. I definitely feel smarter and more empowered now than when we started."



    Faster parallel computing

    In today’s computer chips, memory management is based on what computer scientists call the principle of locality: If a program needs a chunk of data stored at some memory location, it probably needs the neighboring chunks as well.

    But that assumption breaks down in the age of big data, now that computer programs more frequently act on just a few data items scattered arbitrarily across huge data sets. Since fetching data from their main memory banks is the major performance bottleneck in today’s chips, having to fetch it more frequently can dramatically slow program execution.

    This week, at the International Conference on Parallel Architectures and Compilation Techniques, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new programming language, called Milk, that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets.

    In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages. But the researchers believe that further work will yield even larger gains.

    The reason that today’s big data sets pose problems for existing memory management techniques, explains Saman Amarasinghe, a professor of electrical engineering and computer science, is not so much that they are large as that they are what computer scientists call “sparse.” That is, with big data, the scale of the solution does not necessarily increase proportionally with the scale of the problem.

    “In social settings, we used to look at smaller problems,” Amarasinghe says. “If you look at the people in this [CSAIL] building, we’re all connected. But if you look at the planet scale, I don’t scale my number of friends. The planet has billions of people, but I still have only hundreds of friends. Suddenly you have a very sparse problem.”

    Similarly, Amarasinghe says, an online bookseller with, say, 1,000 customers might like to provide its visitors with a list of its 20 most popular books. It doesn’t follow, however, that an online bookseller with a million customers would want to provide its visitors with a list of its 20,000 most popular books.

    Thinking locally

    Today’s computer chips are not optimized for sparse data — in fact, the reverse is true. Because fetching data from the chip’s main memory bank is slow, every core, or processor, in a modern chip has its own “cache,” a relatively small, local, high-speed memory bank. Rather than fetching a single data item at a time from main memory, a core will fetch an entire block of data. And that block is selected according to the principle of locality.

    It’s easy to see how the principle of locality works with, say, image processing. If the purpose of a program is to apply a visual filter to an image, and it works on one block of the image at a time, then when a core requests a block, it should receive all the adjacent blocks its cache can hold, so that it can grind away on block after block without fetching any more data.

    But that approach doesn’t work if the algorithm is interested in only 20 books out of the 2 million in an online retailer’s database. If it requests the data associated with one book, it’s likely that the data associated with the 100 adjacent books will be irrelevant.

    Going to main memory for a single data item at a time is woefully inefficient. “It’s as if, every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge,” says Vladimir Kiriansky, a PhD student in electrical engineering and computer science and first author on the new paper. He’s joined by Amarasinghe and Yunming Zhang, also a PhD student in electrical engineering and computer science.

    Batch processing

    Milk simply adds a few commands to OpenMP, an extension of languages such as C and Fortran that makes it easier to write code for multicore processors. With Milk, a programmer inserts a couple additional lines of code around any instruction that iterates through a large data collection looking for a comparatively small number of items. Milk’s compiler — the program that converts high-level code into low-level instructions — then figures out how to manage memory accordingly.

    With a Milk program, when a core discovers that it needs a piece of data, it doesn’t request it — and a cacheful of adjacent data — from main memory. Instead, it adds the data item’s address to a locally stored list of addresses. When the list is long enough, all the chip’s cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. That way, each core requests only data items that it knows it needs and that can be retrieved efficiently.

    That’s the high-level description, but the details get more complicated. In fact, most modern computer chips have several different levels of caches, each one larger but also slightly less efficient than the last. The Milk compiler has to keep track of not only a list of memory addresses but also the data stored at those addresses, and it regularly shuffles both around between cache levels. It also has to decide which addresses should be retained because they might be accessed again, and which to discard. Improving the algorithm that choreographs this intricate data ballet is where the researchers see hope for further performance gains.
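
    Stripped of the compiler machinery, the core trick is defer, batch, group, then gather. The numpy sketch below mimics the access-pattern transformation that Milk's compiler arranges (Milk itself extends OpenMP C/C++ code; this is not Milk syntax):

        import numpy as np

        rng = np.random.default_rng(1)
        table = rng.random(10_000_000)               # large, sparsely accessed data set
        wanted = rng.integers(0, len(table), 50_000) # scattered indices the program needs

        # Naive: access items in discovery order; nearly every access pulls
        # in a cache line full of neighbours that are never used.
        naive = table[wanted]

        # Milk-style: collect the addresses first, group nearby ones
        # together, gather in memory order, then redistribute the results.
        order = np.argsort(wanted)          # pool and group the address lists
        gathered = table[wanted[order]]     # sequential-ish sweep over memory
        result = np.empty_like(gathered)
        result[order] = gathered            # hand each answer back

        assert np.array_equal(naive, result)  # same answers, friendlier accesses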

    “Many important applications today are data-intensive, but unfortunately, the growing gap in performance between memory and CPU means they do not fully utilize current hardware,” says Matei Zaharia, an assistant professor of computer science at Stanford University. “Milk helps to address this gap by optimizing memory access in common programming constructs. The work combines detailed knowledge about the design of memory controllers with knowledge about compilers to implement good optimizations for current hardware.”



    Solving network congestion

    [Photo: PhD student Ezzeldin Hamed, professor Dina Katabi, and visiting researcher Hariharan Rahul, who developed MegaMIMO to address spectrum crunch. Jason Dorfman/MIT CSAIL]

    [Photo: MegaMIMO enables multiple access points to transmit data at the same time, on the same frequency, without creating interference. Jason Dorfman/MIT CSAIL]
    There are few things more frustrating than trying to use your phone on a crowded network. With phone usage growing faster than wireless spectrum, we’re all now fighting over smaller and smaller bits of bandwidth. Spectrum crunch is such a big problem that the White House is getting involved, recently announcing both a $400 million research initiative and a $4 million global competition devoted to the issue.

    But researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) say that they have a possible solution. In a new paper, a team led by professor Dina Katabi demonstrates a system called MegaMIMO 2.0 that can transfer wireless data more than three times faster than existing systems while also doubling the range of the signal.

    The soon-to-be-commercialized system’s key insight is to coordinate multiple access points at the same time, on the same frequency, without creating interference. This means that MegaMIMO 2.0 could dramatically improve the speed and strength of wireless networks, particularly at high-usage events like concerts, conventions and football games.

    “In today’s wireless world, you can’t solve spectrum crunch by throwing more transmitters at the problem, because they will all still be interfering with one another,” says Ezzeldin Hamed, a PhD student who is lead author on a new paper on the topic. “The answer is to have all those access points work with each other simultaneously to efficiently use the available spectrum.”

    To test MegaMIMO 2.0’s performance, the researchers created a mock conference room with a set of four laptops that each roamed the space atop Roomba robots. The experiments found that the system could increase the devices’ data-transfer speed by 330 percent.

    MegaMIMO 2.0’s hardware is the size of a standard router, and consists of a processor, a real-time baseband processing system, and a transceiver board.

    Katabi and Hamed co-wrote the paper with Hariharan Rahul SM '99, PhD '13, an alum of Katabi’s group and visiting researcher with the group, as well as visiting student Mohammed A. Abdelghany. Rahul will present the paper at next week’s conference for the Association for Computing Machinery's Special Interest Group on Data Communications (SIGCOMM 16).

    How it works

The main reason that your smartphone works so speedily is multiple-input multiple-output (MIMO), which means that it uses several transmitters and receivers at the same time. Radio waves bounce off surfaces and therefore arrive at the receivers at slightly different times; devices with multiple receivers, then, are able to combine the various streams to transfer data much faster. Throughput scales roughly with the number of antennas: a router with two antennas, for example, can move data about twice as fast as one with a single antenna.

But in a world of limited bandwidth, these speeds are still not as fast as they could be, and so in recent years researchers have searched for the wireless industry’s Holy Grail: coordinating several routers at once so that they can jointly deliver data even faster and more consistently.

    “The problem is that, just like how two radio stations can’t play music over the same frequency at the same time, multiple routers cannot transfer data on the same chunk of spectrum without creating major interference that muddies the signal,” says Rahul.

    For the CSAIL team, the missing piece to the puzzle was a new technique for coordinating multiple transmitters by synchronizing their phases. The team developed special signal-processing algorithms that allow multiple independent transmitters to transmit data on the same piece of spectrum to multiple independent receivers without interfering with each other.
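
To see why phase synchronization is the crux, consider a toy zero-forcing sketch (our illustration, not the team’s actual algorithms): if the transmitters share a phase reference and know the joint channel matrix, they can precode their signals so that the interference cancels at each receiver.

```python
import numpy as np

# Toy zero-forcing sketch, not MegaMIMO's actual signal processing.
# H models the channel from 2 phase-synchronized access points to
# 2 receivers; precoding with H's inverse cancels cross-interference.
rng = np.random.default_rng(42)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

s = np.array([1 + 1j, -1 + 0.5j])   # one symbol intended per receiver
x = np.linalg.solve(H, s)           # jointly precoded transmit signals
y = H @ x                           # what the two receivers observe

print(np.allclose(y, s))            # True: each receiver hears only its symbol
```

The hard engineering problem MegaMIMO 2.0 tackles is keeping independent transmitters synchronized tightly enough, and the channel knowledge fresh enough, for cancellation like this to hold up on real hardware.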

    “Since spectrum is scarce, the only way to improve wireless capacity is to add more access points and use some sort of distributed MIMO solution,” says Sachin Katti, an associate professor of electrical engineering and computer science at Stanford University who was not involved in the research. “While there has long been skepticism that this could ever work in practice, Katabi’s team has demonstrated that they can solve the many practical challenges of distributed MIMO networks.”

The team compared MegaMIMO 2.0’s performance against both a traditional WiFi system and MegaMIMO 1.0, in which the user has to actively provide information (“explicit channel feedback”) about the different frequencies.

Rahul says that the group’s technology can also be applied to cellular networks, meaning that it could solve similar congestion issues for people who actually want to use their phones to make calls. He says the team plans to expand MegaMIMO 2.0 to be able to coordinate dozens of routers at once, which would allow for even faster data-transfer speeds.

    “This work offers a completely new way to deliver WiFi in campuses and enterprises,” says Katti. “Whereas current solutions often have slow, spotty performance, this technology has the potential to deliver high-capacity connectivity to each and every user.” 



    The work was funded by the National Science Foundation and supported by members of the MIT Center for Wireless Networks and Mobile Computing.



    Toward practical quantum computers

Researchers from MIT and MIT Lincoln Laboratory report an important step toward practical quantum computers, with a paper describing a prototype chip that can trap ions in an electric field and, with built-in optics, direct laser light toward each of them.
Quantum computers are largely hypothetical devices that could perform some calculations much more rapidly than conventional computers can. Instead of the bits of classical computation, which can represent 0 or 1, quantum computers consist of quantum bits, or qubits, which can, in some sense, represent 0 and 1 simultaneously.
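
A textbook sketch makes that idea concrete (our illustration, independent of the hardware described below): a qubit is a two-component complex vector, and a single gate can put it into an equal superposition of 0 and 1.

```python
import numpy as np

# Minimal textbook qubit sketch (our illustration, not from the paper).
# A qubit's state is a 2-component complex vector; a classical bit
# corresponds to one of the two basis states.
ket0 = np.array([1, 0], dtype=complex)          # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

psi = H @ ket0            # equal superposition of |0> and |1>
print(np.abs(psi) ** 2)   # measurement probabilities: [0.5 0.5]
```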

    Although quantum systems with as many as 12 qubits have been demonstrated in the lab, building quantum computers complex enough to perform useful computations will require miniaturizing qubit technology, much the way the miniaturization of transistors enabled modern computers.

    Trapped ions are probably the most widely studied qubit technology, but they’ve historically required a large and complex hardware apparatus. In today’s Nature Nanotechnology, researchers from MIT and MIT Lincoln Laboratory report an important step toward practical quantum computers, with a paper describing a prototype chip that can trap ions in an electric field and, with built-in optics, direct laser light toward each of them.

    “If you look at the traditional assembly, it’s a barrel that has a vacuum inside it, and inside that is this cage that’s trapping the ions. Then there’s basically an entire laboratory of external optics that are guiding the laser beams to the assembly of ions,” says Rajeev Ram, an MIT professor of electrical engineering and one of the senior authors on the paper. “Our vision is to take that external laboratory and miniaturize much of it onto a chip.”

    Caged in

    The Quantum Information and Integrated Nanosystems group at Lincoln Laboratory was one of several research groups already working to develop simpler, smaller ion traps known as surface traps. A standard ion trap looks like a tiny cage, whose bars are electrodes that produce an electric field. Ions line up in the center of the cage, parallel to the bars. A surface trap, by contrast, is a chip with electrodes embedded in its surface. The ions hover 50 micrometers above the electrodes.

    Cage traps are intrinsically limited in size, but surface traps could, in principle, be extended indefinitely. With current technology, they would still have to be held in a vacuum chamber, but they would allow many more qubits to be crammed inside.

    “We believe that surface traps are a key technology to enable these systems to scale to the very large number of ions that will be required for large-scale quantum computing,” says Jeremy Sage, who together with John Chiaverini leads Lincoln Laboratory’s trapped-ion quantum-information-processing project. “These cage traps work very well, but they really only work for maybe 10 to 20 ions, and they basically max out around there.”

Performing a quantum computation, however, requires precisely controlling the energy state of every qubit independently, and trapped-ion qubits are controlled with laser beams. In a surface trap, the ions are only about 5 micrometers apart. Hitting a single ion with an external laser, without affecting its neighbors, is incredibly difficult; only a few groups had previously attempted it, and their techniques weren’t practical for large-scale systems.

    Getting onboard

    That’s where Ram’s group comes in. Ram and Karan Mehta, an MIT graduate student in electrical engineering and first author on the new paper, designed and built a suite of on-chip optical components that can channel laser light toward individual ions. Sage, Chiaverini, and their Lincoln Lab colleagues Colin Bruzewicz and Robert McConnell retooled their surface trap to accommodate the integrated optics without compromising its performance. Together, both groups designed and executed the experiments to test the new system.

    “Typically, for surface electrode traps, the laser beam is coming from an optical table and entering this system, so there’s always this concern about the beam vibrating or moving,” Ram says. “With photonic integration, you’re not concerned about beam-pointing stability, because it’s all on the same chip that the electrodes are on. So now everything is registered against each other, and it’s stable.”

    The researchers’ new chip is built on a quartz substrate. On top of the quartz is a network of silicon nitride “waveguides,” which route laser light across the chip. Above the waveguides is a layer of glass, and on top of that are niobium electrodes with tiny holes in them to allow light to pass through. Beneath the holes in the electrodes, the waveguides break into a series of sequential ridges, a “diffraction grating” precisely engineered to direct light up through the holes and concentrate it into a beam narrow enough that it will target a single ion, 50 micrometers above the surface of the chip.
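
For a rough sense of how such a grating is sized, a first-order grating sends guided light out of the plane when the waveguide mode’s effective index and the emission angle satisfy n_eff − sin θ = λ/Λ. The numbers below are illustrative assumptions, not the paper’s design values:

```python
import math

# Back-of-the-envelope grating sizing (illustrative assumptions only,
# not the paper's design values). First-order outcoupling into vacuum:
#     n_eff - sin(theta) = wavelength / period
wavelength_um = 0.674        # assumed control-laser wavelength (micrometers)
n_eff = 1.7                  # assumed effective index of the SiN waveguide mode
theta = math.radians(10)     # desired emission angle from vertical

period_um = wavelength_um / (n_eff - math.sin(theta))
print(f"grating period ≈ {period_um:.2f} µm")   # ≈ 0.44 µm
```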

    Prospects

    With the prototype chip, the researchers were evaluating the performance of the diffraction gratings and the ion traps, but there was no mechanism for varying the amount of light delivered to each ion. In ongoing work, the researchers are investigating the addition of light modulators to the diffraction gratings, so that different qubits can simultaneously receive light of different, time-varying intensities. That would make programming the qubits more efficient, which is vital in a practical quantum information system, since the number of quantum operations the system can perform is limited by the “coherence time” of the qubits.

    “As far as I know, this is the first serious attempt to integrate optical waveguides in the same chip as an ion trap, which is a very significant step forward on the path to scaling up ion-trap quantum information processors [QIP] to the sort of size which will ultimately contain the number of qubits necessary for doing useful QIP,” says David Lucas, a professor of physics at Oxford University. “Trapped-ion qubits are well-known for being able to achieve record-breaking coherence times and very precise operations on small numbers of qubits. Arguably, the most important area in which progress needs to be made is technologies which will enable the systems to be scaled up to larger numbers of qubits. This is exactly the need being addressed so impressively by this research.”

    “Of course, it's important to appreciate that this is a first demonstration,” Lucas adds. “But there are good prospects for believing that the technology can be improved substantially. As a first step, it's a wonderful piece of work.”



    Reach in and touch objects in videos with “Interactive Dynamic Video”

To simulate objects, researchers analyzed video clips to find “vibration modes” at different frequencies that each represent distinct ways that an object can move. By identifying these modes’ shapes, the researchers can begin to predict how these objects will move in new situations. (Image: Abe Davis/MIT CSAIL)

Using traditional cameras and algorithms, IDV looks at the tiny, almost invisible vibrations of an object to create video simulations that users can virtually interact with. (Image: Abe Davis/MIT CSAIL)
We learn a lot about objects by manipulating them: poking, pushing, prodding, and then seeing how they react.

    We obviously can’t do that with videos — just try touching that cat video on your phone and see what happens. But is it crazy to think that we could take that video and simulate how the cat moves, without ever interacting with the real one?

    Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently done just that, developing an imaging technique called Interactive Dynamic Video (IDV) that lets you reach in and “touch” objects in videos. Using traditional cameras and algorithms, IDV looks at the tiny, almost invisible vibrations of an object to create video simulations that users can virtually interact with.

    Interactive Dynamic Video demonstration from the MIT Computer Science and Artificial Intelligence Laboratory

    Video: MIT CSAIL

    "This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space,” says CSAIL PhD student Abe Davis, who will be publishing the work this month for his final dissertation. “By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.”

    Davis says that IDV has many possible uses, from filmmakers producing new kinds of visual effects to architects determining if buildings are structurally sound. For example, he shows that, in contrast to how the popular Pokemon Go app can drop virtual characters into real-world environments, IDV can go a step beyond that by actually enabling virtual objects (including Pokemon) to interact with their environments in specific, realistic ways, like bouncing off the leaves of a nearby bush.

    He outlined the technique in a paper he published earlier this year with PhD student Justin G. Chen and professor Fredo Durand.

    How it works

The most common way to simulate objects’ motions is by building a 3-D model. Unfortunately, 3-D modeling is expensive, and can be almost impossible for many objects. While algorithms exist to track motions in video and magnify them, none can reliably simulate objects in unknown environments. Davis’ work shows that even five seconds of video can have enough information to create realistic simulations.

    To simulate the objects, the team analyzed video clips to find “vibration modes” at different frequencies that each represent distinct ways that an object can move. By identifying these modes’ shapes, the researchers can begin to predict how these objects will move in new situations.
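
A stripped-down sketch of that analysis (ours, not the paper’s actual pipeline) is to track the displacement of one point through the clip and read candidate mode frequencies off the peaks of its spectrum:

```python
import numpy as np

# Stripped-down sketch, not the paper's pipeline: given the displacement
# of one tracked point over a 5-second clip, the strongest peaks in its
# spectrum are candidate vibration-mode frequencies.
fps = 60.0
t = np.arange(0, 5.0, 1.0 / fps)

# Hypothetical trace: two vibration modes (3 Hz, 11 Hz) plus camera noise.
trace = 0.8 * np.sin(2 * np.pi * 3.0 * t) + 0.3 * np.sin(2 * np.pi * 11.0 * t)
trace += 0.05 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
modes = sorted(freqs[np.argsort(spectrum)[-2:]])
print(modes)    # ≈ [3.0, 11.0] Hz
```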

    “Computer graphics allows us to use 3-D models to build interactive simulations, but the techniques can be complicated,” says Doug James, a professor of computer science at Stanford University who was not involved in the research. “Davis and his colleagues have provided a simple and clever way to extract a useful dynamics model from very tiny vibrations in video, and shown how to use it to animate an image.”

Davis used IDV on videos of a variety of objects, including a bridge, a jungle gym, and a ukulele. With a few mouse-clicks, he showed that he can push and pull the image, bending and moving it in different directions. He even demonstrated how he can make his own hand appear to telekinetically control the leaves of a bush.

    “If you want to model how an object behaves and responds to different forces, we show that you can observe the object respond to existing forces and assume that it will respond in a consistent way to new ones,” says Davis, who also found that the technique even works on some existing videos on YouTube.

    Applications

    Researchers say that the tool has many potential uses in engineering, entertainment, and more.

    For example, in movies it can be difficult and expensive to get CGI characters to realistically interact with their real-world environments. Doing so requires filmmakers to use green-screens and create detailed models of virtual objects that can be synchronized with live performances.

    But with IDV, a videographer could take video of an existing real-world environment and make some minor edits like masking, matting, and shading to achieve a similar effect in much less time — and at a fraction of the cost.

    Engineers could also use the system to simulate how an old building or bridge would respond to strong winds or an earthquake.

    “The ability to put real-world objects into virtual models is valuable for not just the obvious entertainment applications, but also for being able to test the stress in a safe virtual environment, in a way that doesn’t harm the real-world counterpart,” says Davis.

    He says that he is also eager to see other applications emerge, from studying sports film to creating new forms of virtual reality.

    “When you look at VR companies like Oculus, they are often simulating virtual objects in real spaces,” he says. “This sort of work turns that on its head, allowing us to see how far we can go in terms of capturing and manipulating real objects in virtual space.”

    This work was supported by the National Science Foundation and the Qatar Computing Research Institute. Chen also received support from Shell Research through the MIT Energy Initiative.



    Ramesh Raskar awarded $500,000 Lemelson-MIT Prize

Imaging scientist and social impact inventor Ramesh Raskar of MIT is the 2016 recipient of the $500,000 Lemelson-MIT Prize. A pioneer in the field of vision technologies, Raskar has invented a camera that operates at the speed of light to see around corners and do-it-yourself tools for medical imaging of the eye. Raskar also uses invention and collaboration to respond to current and forward-looking needs in societies around the world through his Emerging Worlds initiative. (Photo: Len Rubenstein)
Ramesh Raskar, founder of the Camera Culture research group at the MIT Media Lab and associate professor of media arts and sciences at MIT, is the recipient of the 2016 $500,000 Lemelson-MIT Prize. Raskar is the co-inventor of radical imaging solutions including femtophotography, an ultra-fast imaging system that can see around corners; low-cost eye-care solutions for the developing world; and a camera that allows users to read pages of a book without opening the cover. Raskar seeks to catalyze change on a massive scale by launching platforms that empower inventors to create solutions to improve lives globally.

Raskar has dedicated his career to linking the best of the academic and entrepreneurial worlds with young engineers, igniting a passion for impact inventing. He is a pioneer in the fields of imaging, computer vision, and machine learning, and his novel imaging platforms offer an understanding of the world that far exceeds human ability. Raskar has mentored more than 100 students, visiting students, interns, and postdocs, who, with his guidance and support, have been able to kick-start their own highly successful careers.

    “Raskar is a multi-faceted leader as an inventor, educator, change maker and exemplar connector,” said Stephanie Couch, executive director of the Lemelson-MIT Program. “In addition to creating his own remarkable inventions, he is working to connect communities and inventors all over the world to create positive change.”

    The Lemelson-MIT Prize honors outstanding mid-career inventors improving the world through technological invention and demonstrating a commitment to mentorship in science, technology, engineering, and mathematics (STEM). The prize is made possible through the support of The Lemelson Foundation, the world’s leading funder of invention in service of social and economic change. Over the next three years, Raskar will be investing a portion of the prize money to support the development of young inventors.

    “We are thrilled to honor Ramesh Raskar, whose breakthrough research is impacting how we see the world,” said Dorothy Lemelson, chair of The Lemelson Foundation. “Ramesh’s femtophotography work not only has the potential to transform industries ranging from internal medicine to transportation safety, it is also helping to inspire a new generation of inventors to tackle the biggest problems of our time.”

    Associate Professor Ramesh Raskar is the 2016 winner of the Lemelson-MIT Prize, awarded to outstanding mid-career inventors who have developed a patented product or process of significant value to society that has been adopted for practical use, or has a high probability of being adopted.

    Video: Camera Culture Group

    Making the invisible visible

    In 2012, Raskar co-created femtophotography, an advanced form of photography allowing cameras to see around corners. The technology, currently in development for commercialization, uses ultrafast imaging to capture light at 1 trillion frames per second, allowing the camera to create slow motion videos of light in motion. Raskar and his team have received significant funding from sponsors including the U.S. Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, and MIT to further develop the idea of using "scattered light imaging" to see around corners.
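
To get a feel for that frame rate, a quick back-of-the-envelope check (ours, not part of the announcement): between consecutive frames at 1 trillion frames per second, light advances only about a third of a millimeter, which is what makes slow-motion video of a light pulse possible.

```python
# How far does light travel between frames at 1 trillion fps?
c = 299_792_458        # speed of light, m/s
fps = 1e12             # femtophotography frame rate
print(f"{c / fps * 1000:.2f} mm per frame")   # ≈ 0.30 mm
```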

Potential future applications include avoiding car collisions at blind spots, detecting survivors in fire and rescue situations, and performing endoscopy and medical imaging without the need for an X-ray. Raskar is continuing this research to make the seemingly impossible possible — from reading a book without opening the cover to capturing images of out-of-sight objects using sound waves.

    A vision for improved eye-care in the developing world

Raskar is the co-founder of EyeNetra, an inexpensive, disruptive eye-care platform that spun out of Media Lab research. EyeNetra enables on-demand eye testing in remote locations via hand-held hardware that snaps onto a mobile device. Looking into its binocular viewer, the user follows interactive cues that rapidly produce a prescription for eyeglasses. The technology was created to eliminate the need for expensive diagnostic tools in the developing world; the young company has performed eye tests for hundreds of thousands of subjects and is currently active in the U.S., Brazil, and India.

    Raskar’s team has also worked on many areas of preventable blindness, low vision, and diagnostics at MIT. In 2013, he and his colleagues launched LVP-MITRA in Hyderabad, India, a center where hundreds of young inventors have been co-inventing next generation screening, diagnostic, and therapeutic tools for eye-care.

    Empowering social impact among youth and entrepreneurs

    Raskar is also the founder of the Emerging Worlds initiative, a year-round effort focused on solving some of the world’s most pressing problems and impacting billions worldwide. This initiative, based at MIT, links corporate members, government organizations, educational institutes, and venture partners. The members — MIT researchers, young innovators, and entrepreneurs — work in very specific integrated ecosystems to spot problems, probe solutions, grow adoption, and scale the deployment.

This methodology was recently used at the Kumbhathon sandbox for innovations at Kumbh Mela, a gathering of 30 million people, and at the Digital Impact Square, a multi-million-dollar living lab and open co-innovation center. Raskar has mentored several teams whose projects span crowd-steering that uses cell-tower data to display heat maps of crowd movements, stations for testing vital signs with portable instruments, and an analytics-based system for detecting impending epidemic outbreaks in real time.

    Launching co-innovation pathways for young inventors

    “Everyone has the power to solve problems and through peer-to-peer co-invention and purposeful collaboration, we can solve problems that will impact billions of lives,” Raskar says. He plans to use a portion of the Lemelson-MIT Prize money to launch a new effort using peer-to-peer invention platforms that offer new approaches for helping young people in multiple countries to co-invent in a collaborative way. Visit redx.io to learn more or to apply.

    Raskar will speak at EmTech MIT, the annual conference on emerging technologies hosted by MIT Technology Review at the MIT Media Lab on Tuesday, Oct. 18.

    Seeking nominees for 2017 $500,000 Lemelson-MIT Prize

    The Lemelson-MIT Program is now seeking nominations for the 2017 $500,000 Lemelson-MIT Prize. Please contact the Lemelson-MIT Program at awards-lemelson@mit.edu for more information or visit the prize website.

    The Lemelson-MIT Program celebrates outstanding inventors and inspires young people to pursue creative lives and careers through invention. Jerome H. Lemelson, one of the most prolific inventors in U.S. history, and his wife Dorothy founded the Lemelson-MIT Program at MIT in 1994. It is funded by The Lemelson Foundation and administered by the School of Engineering at MIT, an institution with a strong ongoing commitment to creating meaningful opportunities for K-12 STEM education.

    Based in Portland, Oregon, The Lemelson Foundation uses the power of invention to improve lives. Inspired by the belief that invention can solve many of the biggest economic and social challenges of our time, the foundation helps the next generation of inventors and invention-based businesses to flourish. The Lemelson Foundation was established in the early 1990s by prolific inventor Jerome Lemelson and his wife Dorothy. To date, the foundation has made grants totaling more than $200 million in support of its mission.



    An autonomous fleet for Amsterdam

The new ROBOAT project will investigate how urban waterways can be used to improve the city’s function and quality of life. (Photo: SENSEable City Lab)

A collaboration between researchers from MIT and partner institutions in the Netherlands seeks to design and deploy a fleet of autonomous boats on the canals of Amsterdam. (Photo courtesy of AMS)

MIT has signed an agreement to partner with the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) in the Netherlands. Research from MIT and other members of the AMS consortium will use the city of Amsterdam as a living laboratory to confront urban challenges. ROBOAT is the first and flagship project for the collaboration. (Photo courtesy of AMS)

The ROBOAT project will be led at MIT by an interdisciplinary team: (clockwise, from top left) Daniela Rus, professor of electrical engineering and computer science and director of the Computer Science and Artificial Intelligence Laboratory; Dennis Frenchman, the Class of 1922 Professor of Urban Design and Planning and director of the DesignX program in the School of Architecture and Planning; Andrew Whittle, the Edmund K. Turner Professor in Civil Engineering in the Department of Civil and Environmental Engineering; and Carlo Ratti, professor of the practice of urban technologies in the Department of Urban Studies and Planning.
MIT has signed an agreement to engage in research collaborations with the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) in the Netherlands. The collaboration’s flagship project, led by researchers from multiple departments at MIT, will be to develop a fleet of autonomous boats for the city’s canals.

    Based in Amsterdam, the AMS Institute brings together a consortium of public and private partners to tackle complex urban challenges such as water, energy, waste, food, data, and mobility. MIT joins with two research institutions in the Netherlands — the Delft University of Technology and Wageningen University and Research Center — as the core academic partners who will use the city as a living laboratory and test bed.

    An interdisciplinary team from MIT has assembled to lead the collaboration’s first project: ROBOAT, an effort to develop a fleet of autonomous boats, or “roboats,” to investigate how urban waterways can be used to improve the city’s function and quality of life.

The ROBOAT project will develop a logistics platform for people and goods, superimposing a dynamic infrastructure over one of the world’s most famous water cities. “This project imagines a fleet of autonomous boats for the transportation of goods and people that can also cooperate to produce temporary floating infrastructure, such as on-demand bridges or stages that can be assembled or disassembled in a matter of hours,” says Carlo Ratti, professor of the practice of urban technologies in the MIT Department of Urban Studies and Planning (DUSP).

In addition to infrastructure and transport, ROBOAT will also deploy environmental sensing to monitor water quality and offer data for assessing and predicting issues related to public health, pollution, and the environment. “Water is the bearer of life. By focusing on the water system of the city, ROBOAT can create opportunities for new environmental sensing methods and climate adaptation. This will help secure the city’s quality of life and lasting functionality,” says Arjan van Timmeren, professor and scientific director at AMS. He also envisions a multitude of possibilities for a network of roboats, from real-time sensing similar to the MIT Underworlds project to retrieving the 12,000 bicycles and cleaning up the floating waste that end up in the Dutch city’s canals each year.

    Joining Ratti from MIT as co-principal investigators are Daniela Rus, professor of electrical engineering and computer science and director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Andrew Whittle, the Edmund K. Turner Professor in Civil Engineering in the Department of Civil and Environmental Engineering; and Dennis Frenchman, the Class of 1922 Professor of Urban Design and Planning and director of the DesignX program in the MIT School of Architecture and Planning.

    At AMS, Van Timmeren and Stephan van Dijk, research program manager, will coordinate the involvement of another 12 groups of researchers from TU Delft and Wageningen UR. Along with the City of Amsterdam, Waternet, the public water utility of Amsterdam and surrounding areas, will participate in the research.

    The first prototypes of autonomous boats, or “roboats,” are expected to be tested in Amsterdam in 2017. The project’s initial phase will last for five years. 

    With nearly one-quarter of the city covered by water, Amsterdam is an ideal place for developing ROBOAT, according to the researchers. The canal system was once the key functional urban infrastructure of the city and today still plays a major role in recreation and tourism. Amsterdam’s waters, including bridges, canals, and the IJ river and its docks, offer plenty of opportunity to help solve current issues with transportation, mobility, and water quality.

With 80 percent of global economic output generated around coasts, riverbanks, and deltas, and 60 percent of the world population living in these areas, researchers anticipate that outcomes from the ROBOAT project could become a reference for other urban areas around the world, and a source of international entrepreneurial initiatives and start-ups as autonomy enters the marine world.

